Jul 2 09:12:08.879211 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 2 09:12:08.879232 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024
Jul 2 09:12:08.879242 kernel: KASLR enabled
Jul 2 09:12:08.879248 kernel: efi: EFI v2.7 by EDK II
Jul 2 09:12:08.879253 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb8fd018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 2 09:12:08.879259 kernel: random: crng init done
Jul 2 09:12:08.879266 kernel: ACPI: Early table checksum verification disabled
Jul 2 09:12:08.879271 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 2 09:12:08.879278 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 2 09:12:08.879285 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:12:08.879291 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:12:08.879297 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:12:08.879303 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:12:08.879309 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:12:08.879316 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:12:08.879324 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:12:08.879330 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:12:08.879337 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:12:08.879343 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 2 09:12:08.879349 kernel: NUMA: Failed to initialise from firmware
Jul 2 09:12:08.879356 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 09:12:08.879362 kernel: NUMA: NODE_DATA [mem 0xdc956800-0xdc95bfff]
Jul 2 09:12:08.879368 kernel: Zone ranges:
Jul 2 09:12:08.879374 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 09:12:08.879381 kernel: DMA32 empty
Jul 2 09:12:08.879388 kernel: Normal empty
Jul 2 09:12:08.879394 kernel: Movable zone start for each node
Jul 2 09:12:08.879400 kernel: Early memory node ranges
Jul 2 09:12:08.879407 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 2 09:12:08.879413 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 2 09:12:08.879419 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 2 09:12:08.879425 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 2 09:12:08.879431 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 2 09:12:08.879438 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 2 09:12:08.879444 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 2 09:12:08.879450 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 09:12:08.879456 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 2 09:12:08.879464 kernel: psci: probing for conduit method from ACPI.
Jul 2 09:12:08.879470 kernel: psci: PSCIv1.1 detected in firmware.
Jul 2 09:12:08.879477 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 09:12:08.879485 kernel: psci: Trusted OS migration not required
Jul 2 09:12:08.879492 kernel: psci: SMC Calling Convention v1.1
Jul 2 09:12:08.879499 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 2 09:12:08.879507 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Jul 2 09:12:08.879514 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Jul 2 09:12:08.879520 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 2 09:12:08.879527 kernel: Detected PIPT I-cache on CPU0
Jul 2 09:12:08.879534 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 09:12:08.879540 kernel: CPU features: detected: Hardware dirty bit management
Jul 2 09:12:08.879547 kernel: CPU features: detected: Spectre-v4
Jul 2 09:12:08.879553 kernel: CPU features: detected: Spectre-BHB
Jul 2 09:12:08.879560 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 2 09:12:08.879567 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 2 09:12:08.879575 kernel: CPU features: detected: ARM erratum 1418040
Jul 2 09:12:08.879582 kernel: alternatives: applying boot alternatives
Jul 2 09:12:08.879589 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=339cf548fbb7b0074109371a653774e9fabae27ff3a90e4c67dbbb2f78376930
Jul 2 09:12:08.879596 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 09:12:08.879603 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 09:12:08.879610 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 09:12:08.879617 kernel: Fallback order for Node 0: 0
Jul 2 09:12:08.879638 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 2 09:12:08.879645 kernel: Policy zone: DMA
Jul 2 09:12:08.879652 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 09:12:08.879659 kernel: software IO TLB: area num 4.
Jul 2 09:12:08.879667 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 2 09:12:08.879674 kernel: Memory: 2386844K/2572288K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 185444K reserved, 0K cma-reserved)
Jul 2 09:12:08.879680 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 2 09:12:08.879688 kernel: trace event string verifier disabled
Jul 2 09:12:08.879695 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 09:12:08.879702 kernel: rcu: RCU event tracing is enabled.
Jul 2 09:12:08.879709 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 2 09:12:08.879716 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 09:12:08.879723 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 09:12:08.879729 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 09:12:08.879737 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 2 09:12:08.879744 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 09:12:08.879752 kernel: GICv3: 256 SPIs implemented
Jul 2 09:12:08.879758 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 09:12:08.879765 kernel: Root IRQ handler: gic_handle_irq
Jul 2 09:12:08.879772 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 2 09:12:08.879779 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 2 09:12:08.879785 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 2 09:12:08.879793 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Jul 2 09:12:08.879800 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Jul 2 09:12:08.879806 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 2 09:12:08.879813 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 2 09:12:08.879820 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 09:12:08.879828 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 09:12:08.879835 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 2 09:12:08.879842 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 2 09:12:08.879849 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 2 09:12:08.879855 kernel: arm-pv: using stolen time PV
Jul 2 09:12:08.879862 kernel: Console: colour dummy device 80x25
Jul 2 09:12:08.879869 kernel: ACPI: Core revision 20230628
Jul 2 09:12:08.879877 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 2 09:12:08.879884 kernel: pid_max: default: 32768 minimum: 301
Jul 2 09:12:08.879891 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 09:12:08.879899 kernel: SELinux: Initializing.
Jul 2 09:12:08.879906 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 09:12:08.879913 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 09:12:08.879919 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 09:12:08.879926 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 09:12:08.879934 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 09:12:08.879940 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 09:12:08.879947 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 2 09:12:08.879954 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 2 09:12:08.879969 kernel: Remapping and enabling EFI services.
Jul 2 09:12:08.879977 kernel: smp: Bringing up secondary CPUs ...
Jul 2 09:12:08.879984 kernel: Detected PIPT I-cache on CPU1
Jul 2 09:12:08.879991 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 2 09:12:08.879998 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 2 09:12:08.880004 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 09:12:08.880011 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 2 09:12:08.880018 kernel: Detected PIPT I-cache on CPU2
Jul 2 09:12:08.880025 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 2 09:12:08.880032 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 2 09:12:08.880040 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 09:12:08.880047 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 2 09:12:08.880058 kernel: Detected PIPT I-cache on CPU3
Jul 2 09:12:08.880066 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 2 09:12:08.880074 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 2 09:12:08.880081 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 09:12:08.880088 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 2 09:12:08.880095 kernel: smp: Brought up 1 node, 4 CPUs
Jul 2 09:12:08.880102 kernel: SMP: Total of 4 processors activated.
Jul 2 09:12:08.880185 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 09:12:08.880192 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 2 09:12:08.880199 kernel: CPU features: detected: Common not Private translations
Jul 2 09:12:08.880207 kernel: CPU features: detected: CRC32 instructions
Jul 2 09:12:08.880214 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 2 09:12:08.880221 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 2 09:12:08.880228 kernel: CPU features: detected: LSE atomic instructions
Jul 2 09:12:08.880236 kernel: CPU features: detected: Privileged Access Never
Jul 2 09:12:08.880245 kernel: CPU features: detected: RAS Extension Support
Jul 2 09:12:08.880253 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 2 09:12:08.880260 kernel: CPU: All CPU(s) started at EL1
Jul 2 09:12:08.880267 kernel: alternatives: applying system-wide alternatives
Jul 2 09:12:08.880274 kernel: devtmpfs: initialized
Jul 2 09:12:08.880281 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 09:12:08.880289 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 2 09:12:08.880296 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 09:12:08.880303 kernel: SMBIOS 3.0.0 present.
Jul 2 09:12:08.880312 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 2 09:12:08.880319 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 09:12:08.880326 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 09:12:08.880334 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 09:12:08.880341 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 09:12:08.880348 kernel: audit: initializing netlink subsys (disabled)
Jul 2 09:12:08.880356 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jul 2 09:12:08.880363 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 09:12:08.880370 kernel: cpuidle: using governor menu
Jul 2 09:12:08.880378 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 09:12:08.880386 kernel: ASID allocator initialised with 32768 entries
Jul 2 09:12:08.880393 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 09:12:08.880400 kernel: Serial: AMBA PL011 UART driver
Jul 2 09:12:08.880407 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 2 09:12:08.880414 kernel: Modules: 0 pages in range for non-PLT usage
Jul 2 09:12:08.880422 kernel: Modules: 509120 pages in range for PLT usage
Jul 2 09:12:08.880429 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 09:12:08.880436 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 09:12:08.880444 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 09:12:08.880452 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 2 09:12:08.880459 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 09:12:08.880466 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 09:12:08.880473 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 09:12:08.880480 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 2 09:12:08.880487 kernel: ACPI: Added _OSI(Module Device)
Jul 2 09:12:08.880499 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 09:12:08.880506 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 09:12:08.880514 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 09:12:08.880522 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 09:12:08.880529 kernel: ACPI: Interpreter enabled
Jul 2 09:12:08.880536 kernel: ACPI: Using GIC for interrupt routing
Jul 2 09:12:08.880543 kernel: ACPI: MCFG table detected, 1 entries
Jul 2 09:12:08.880550 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 2 09:12:08.880558 kernel: printk: console [ttyAMA0] enabled
Jul 2 09:12:08.880565 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 09:12:08.880693 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 09:12:08.880766 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 2 09:12:08.880831 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 2 09:12:08.880893 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 2 09:12:08.880978 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 2 09:12:08.880988 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 2 09:12:08.880996 kernel: PCI host bridge to bus 0000:00
Jul 2 09:12:08.881065 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 2 09:12:08.881146 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 2 09:12:08.881206 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 2 09:12:08.881263 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 09:12:08.881341 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 2 09:12:08.881413 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 09:12:08.881479 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 2 09:12:08.881548 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 2 09:12:08.881613 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 09:12:08.881677 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 09:12:08.881742 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 2 09:12:08.881806 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 2 09:12:08.881863 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 2 09:12:08.881919 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 2 09:12:08.881985 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 2 09:12:08.881995 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 2 09:12:08.882003 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 2 09:12:08.882010 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 2 09:12:08.882017 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 2 09:12:08.882024 kernel: iommu: Default domain type: Translated
Jul 2 09:12:08.882031 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 09:12:08.882039 kernel: efivars: Registered efivars operations
Jul 2 09:12:08.882046 kernel: vgaarb: loaded
Jul 2 09:12:08.882055 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 09:12:08.882063 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 09:12:08.882070 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 09:12:08.882077 kernel: pnp: PnP ACPI init
Jul 2 09:12:08.882157 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 2 09:12:08.882168 kernel: pnp: PnP ACPI: found 1 devices
Jul 2 09:12:08.882175 kernel: NET: Registered PF_INET protocol family
Jul 2 09:12:08.882183 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 09:12:08.882192 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 09:12:08.882200 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 09:12:08.882207 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 09:12:08.882214 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 2 09:12:08.882221 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 09:12:08.882228 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 09:12:08.882236 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 09:12:08.882243 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 09:12:08.882250 kernel: PCI: CLS 0 bytes, default 64
Jul 2 09:12:08.882258 kernel: kvm [1]: HYP mode not available
Jul 2 09:12:08.882266 kernel: Initialise system trusted keyrings
Jul 2 09:12:08.882273 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 09:12:08.882280 kernel: Key type asymmetric registered
Jul 2 09:12:08.882287 kernel: Asymmetric key parser 'x509' registered
Jul 2 09:12:08.882294 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 2 09:12:08.882302 kernel: io scheduler mq-deadline registered
Jul 2 09:12:08.882309 kernel: io scheduler kyber registered
Jul 2 09:12:08.882316 kernel: io scheduler bfq registered
Jul 2 09:12:08.882325 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 2 09:12:08.882332 kernel: ACPI: button: Power Button [PWRB]
Jul 2 09:12:08.882340 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 2 09:12:08.882407 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 2 09:12:08.882417 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 09:12:08.882424 kernel: thunder_xcv, ver 1.0
Jul 2 09:12:08.882431 kernel: thunder_bgx, ver 1.0
Jul 2 09:12:08.882439 kernel: nicpf, ver 1.0
Jul 2 09:12:08.882446 kernel: nicvf, ver 1.0
Jul 2 09:12:08.882517 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 09:12:08.882582 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T09:12:08 UTC (1719911528)
Jul 2 09:12:08.882592 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 09:12:08.882599 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 2 09:12:08.882606 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 2 09:12:08.882614 kernel: watchdog: Hard watchdog permanently disabled
Jul 2 09:12:08.882621 kernel: NET: Registered PF_INET6 protocol family
Jul 2 09:12:08.882628 kernel: Segment Routing with IPv6
Jul 2 09:12:08.882637 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 09:12:08.882644 kernel: NET: Registered PF_PACKET protocol family
Jul 2 09:12:08.882652 kernel: Key type dns_resolver registered
Jul 2 09:12:08.882659 kernel: registered taskstats version 1
Jul 2 09:12:08.882666 kernel: Loading compiled-in X.509 certificates
Jul 2 09:12:08.882673 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf'
Jul 2 09:12:08.882681 kernel: Key type .fscrypt registered
Jul 2 09:12:08.882688 kernel: Key type fscrypt-provisioning registered
Jul 2 09:12:08.882695 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 09:12:08.882704 kernel: ima: Allocated hash algorithm: sha1
Jul 2 09:12:08.882711 kernel: ima: No architecture policies found
Jul 2 09:12:08.882718 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 09:12:08.882725 kernel: clk: Disabling unused clocks
Jul 2 09:12:08.882733 kernel: Freeing unused kernel memory: 39040K
Jul 2 09:12:08.882740 kernel: Run /init as init process
Jul 2 09:12:08.882747 kernel: with arguments:
Jul 2 09:12:08.882754 kernel: /init
Jul 2 09:12:08.882761 kernel: with environment:
Jul 2 09:12:08.882769 kernel: HOME=/
Jul 2 09:12:08.882776 kernel: TERM=linux
Jul 2 09:12:08.882783 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 09:12:08.882792 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 09:12:08.882801 systemd[1]: Detected virtualization kvm.
Jul 2 09:12:08.882809 systemd[1]: Detected architecture arm64.
Jul 2 09:12:08.882816 systemd[1]: Running in initrd.
Jul 2 09:12:08.882823 systemd[1]: No hostname configured, using default hostname.
Jul 2 09:12:08.882832 systemd[1]: Hostname set to .
Jul 2 09:12:08.882840 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 09:12:08.882848 systemd[1]: Queued start job for default target initrd.target.
Jul 2 09:12:08.882855 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 09:12:08.882863 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 09:12:08.882871 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 09:12:08.882879 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 09:12:08.882887 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 09:12:08.882897 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 09:12:08.882906 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 09:12:08.882914 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 09:12:08.882921 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 09:12:08.882929 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 09:12:08.882937 systemd[1]: Reached target paths.target - Path Units.
Jul 2 09:12:08.882944 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 09:12:08.882953 systemd[1]: Reached target swap.target - Swaps.
Jul 2 09:12:08.882968 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 09:12:08.882976 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 09:12:08.882984 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 09:12:08.882992 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 09:12:08.883000 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 09:12:08.883007 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 09:12:08.883015 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 09:12:08.883025 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 09:12:08.883032 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 09:12:08.883041 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 09:12:08.883048 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 09:12:08.883056 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 09:12:08.883064 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 09:12:08.883072 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 09:12:08.883079 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 09:12:08.883087 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 09:12:08.883096 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 09:12:08.883104 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 09:12:08.883120 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 09:12:08.883129 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 09:12:08.883139 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:12:08.883164 systemd-journald[238]: Collecting audit messages is disabled.
Jul 2 09:12:08.883183 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 09:12:08.883191 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 09:12:08.883201 systemd-journald[238]: Journal started
Jul 2 09:12:08.883220 systemd-journald[238]: Runtime Journal (/run/log/journal/abc4bc1cd4f742c5bf1122023fca368a) is 5.9M, max 47.3M, 41.4M free.
Jul 2 09:12:08.874124 systemd-modules-load[239]: Inserted module 'overlay'
Jul 2 09:12:08.884549 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 09:12:08.887783 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 09:12:08.890221 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 09:12:08.890375 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 09:12:08.893038 kernel: Bridge firewalling registered
Jul 2 09:12:08.890708 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jul 2 09:12:08.892040 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 09:12:08.898266 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 09:12:08.900212 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 09:12:08.903092 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 09:12:08.904495 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 09:12:08.907185 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 09:12:08.908883 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 09:12:08.910910 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 09:12:08.919954 dracut-cmdline[273]: dracut-dracut-053
Jul 2 09:12:08.922205 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=339cf548fbb7b0074109371a653774e9fabae27ff3a90e4c67dbbb2f78376930
Jul 2 09:12:08.936624 systemd-resolved[276]: Positive Trust Anchors:
Jul 2 09:12:08.936638 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 09:12:08.936671 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 09:12:08.941208 systemd-resolved[276]: Defaulting to hostname 'linux'.
Jul 2 09:12:08.942515 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 09:12:08.945176 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 09:12:08.987127 kernel: SCSI subsystem initialized
Jul 2 09:12:08.992120 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 09:12:08.999122 kernel: iscsi: registered transport (tcp)
Jul 2 09:12:09.012123 kernel: iscsi: registered transport (qla4xxx)
Jul 2 09:12:09.012138 kernel: QLogic iSCSI HBA Driver
Jul 2 09:12:09.049993 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 09:12:09.064268 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 09:12:09.082232 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 09:12:09.082267 kernel: device-mapper: uevent: version 1.0.3
Jul 2 09:12:09.083134 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 09:12:09.128133 kernel: raid6: neonx8 gen() 15714 MB/s
Jul 2 09:12:09.145118 kernel: raid6: neonx4 gen() 15615 MB/s
Jul 2 09:12:09.162128 kernel: raid6: neonx2 gen() 13192 MB/s
Jul 2 09:12:09.179118 kernel: raid6: neonx1 gen() 10445 MB/s
Jul 2 09:12:09.196119 kernel: raid6: int64x8 gen() 6937 MB/s
Jul 2 09:12:09.213119 kernel: raid6: int64x4 gen() 7289 MB/s
Jul 2 09:12:09.230133 kernel: raid6: int64x2 gen() 6102 MB/s
Jul 2 09:12:09.247118 kernel: raid6: int64x1 gen() 5037 MB/s
Jul 2 09:12:09.247132 kernel: raid6: using algorithm neonx8 gen() 15714 MB/s
Jul 2 09:12:09.264120 kernel: raid6: .... xor() 11937 MB/s, rmw enabled
Jul 2 09:12:09.264144 kernel: raid6: using neon recovery algorithm
Jul 2 09:12:09.269211 kernel: xor: measuring software checksum speed
Jul 2 09:12:09.269227 kernel: 8regs : 19773 MB/sec
Jul 2 09:12:09.270118 kernel: 32regs : 19673 MB/sec
Jul 2 09:12:09.271247 kernel: arm64_neon : 27215 MB/sec
Jul 2 09:12:09.271270 kernel: xor: using function: arm64_neon (27215 MB/sec)
Jul 2 09:12:09.323137 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 09:12:09.332514 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 09:12:09.343219 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 09:12:09.353469 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Jul 2 09:12:09.356520 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 09:12:09.359391 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 09:12:09.372394 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Jul 2 09:12:09.396508 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 09:12:09.407215 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 09:12:09.444914 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 09:12:09.452240 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 09:12:09.462132 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 09:12:09.463634 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 09:12:09.464889 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 09:12:09.467234 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 09:12:09.475254 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 09:12:09.487144 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 09:12:09.494133 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 2 09:12:09.503628 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 2 09:12:09.503733 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 09:12:09.503745 kernel: GPT:9289727 != 19775487
Jul 2 09:12:09.503755 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 09:12:09.503764 kernel: GPT:9289727 != 19775487
Jul 2 09:12:09.503773 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 09:12:09.503782 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 09:12:09.495197 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 09:12:09.495766 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 09:12:09.497001 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 09:12:09.497851 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 09:12:09.498044 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:12:09.502353 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 09:12:09.513457 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 09:12:09.523149 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (509)
Jul 2 09:12:09.525337 kernel: BTRFS: device fsid ad4b0605-c88d-4cc1-aa96-32e9393058b1 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (514)
Jul 2 09:12:09.525846 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:12:09.530577 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 2 09:12:09.537345 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 2 09:12:09.541429 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 09:12:09.544877 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 2 09:12:09.545800 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 2 09:12:09.557233 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 09:12:09.558657 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 09:12:09.562553 disk-uuid[549]: Primary Header is updated.
Jul 2 09:12:09.562553 disk-uuid[549]: Secondary Entries is updated.
Jul 2 09:12:09.562553 disk-uuid[549]: Secondary Header is updated.
Jul 2 09:12:09.565128 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 09:12:09.581597 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 09:12:10.578829 disk-uuid[550]: The operation has completed successfully.
Jul 2 09:12:10.580049 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 09:12:10.600303 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 09:12:10.600401 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 09:12:10.622359 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 09:12:10.625193 sh[574]: Success
Jul 2 09:12:10.639343 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 2 09:12:10.663543 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 09:12:10.674450 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 09:12:10.675900 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 09:12:10.685899 kernel: BTRFS info (device dm-0): first mount of filesystem ad4b0605-c88d-4cc1-aa96-32e9393058b1
Jul 2 09:12:10.685935 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:12:10.685953 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 09:12:10.685969 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 09:12:10.686933 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 09:12:10.689884 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 09:12:10.690980 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 09:12:10.700296 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 09:12:10.701571 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 09:12:10.709470 kernel: BTRFS info (device vda6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:12:10.709513 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:12:10.709524 kernel: BTRFS info (device vda6): using free space tree
Jul 2 09:12:10.712165 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 09:12:10.718926 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 09:12:10.720151 kernel: BTRFS info (device vda6): last unmount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:12:10.725844 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 09:12:10.733258 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 09:12:10.791384 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 09:12:10.805335 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 09:12:10.824464 ignition[666]: Ignition 2.18.0
Jul 2 09:12:10.824475 ignition[666]: Stage: fetch-offline
Jul 2 09:12:10.824507 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Jul 2 09:12:10.824515 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:12:10.824602 ignition[666]: parsed url from cmdline: ""
Jul 2 09:12:10.824605 ignition[666]: no config URL provided
Jul 2 09:12:10.824610 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 09:12:10.824618 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Jul 2 09:12:10.824644 ignition[666]: op(1): [started] loading QEMU firmware config module
Jul 2 09:12:10.824649 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 2 09:12:10.830483 ignition[666]: op(1): [finished] loading QEMU firmware config module
Jul 2 09:12:10.834728 systemd-networkd[767]: lo: Link UP
Jul 2 09:12:10.834740 systemd-networkd[767]: lo: Gained carrier
Jul 2 09:12:10.835413 systemd-networkd[767]: Enumeration completed
Jul 2 09:12:10.835602 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 09:12:10.835808 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 09:12:10.835811 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 09:12:10.836531 systemd-networkd[767]: eth0: Link UP
Jul 2 09:12:10.836535 systemd-networkd[767]: eth0: Gained carrier
Jul 2 09:12:10.836542 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 09:12:10.836854 systemd[1]: Reached target network.target - Network.
Jul 2 09:12:10.874154 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 09:12:10.878583 ignition[666]: parsing config with SHA512: 6f4f82cb27729efa6cc8ec68214a7ec8027508a498e886f97b3b9ae84814487796d18427fd909d0ae934d8cf94fddd8d254325cb98d9d4dde250e7a7213b76fc
Jul 2 09:12:10.882850 unknown[666]: fetched base config from "system"
Jul 2 09:12:10.882861 unknown[666]: fetched user config from "qemu"
Jul 2 09:12:10.883639 ignition[666]: fetch-offline: fetch-offline passed
Jul 2 09:12:10.884242 ignition[666]: Ignition finished successfully
Jul 2 09:12:10.886580 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 09:12:10.889404 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 2 09:12:10.893241 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 09:12:10.904578 ignition[774]: Ignition 2.18.0
Jul 2 09:12:10.904587 ignition[774]: Stage: kargs
Jul 2 09:12:10.904849 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Jul 2 09:12:10.904860 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:12:10.905713 ignition[774]: kargs: kargs passed
Jul 2 09:12:10.905756 ignition[774]: Ignition finished successfully
Jul 2 09:12:10.908136 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 09:12:10.918245 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 09:12:10.927898 ignition[783]: Ignition 2.18.0
Jul 2 09:12:10.927908 ignition[783]: Stage: disks
Jul 2 09:12:10.928060 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Jul 2 09:12:10.928069 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:12:10.930480 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 09:12:10.928897 ignition[783]: disks: disks passed
Jul 2 09:12:10.931765 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 09:12:10.928940 ignition[783]: Ignition finished successfully
Jul 2 09:12:10.933133 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 09:12:10.934413 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 09:12:10.935857 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 09:12:10.936994 systemd[1]: Reached target basic.target - Basic System.
Jul 2 09:12:10.945247 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 09:12:10.957238 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 09:12:10.960903 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 09:12:10.968259 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 09:12:11.013853 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 09:12:11.015073 kernel: EXT4-fs (vda9): mounted filesystem c1692a6b-74d8-4bda-be0c-9d706985f1ed r/w with ordered data mode. Quota mode: none.
Jul 2 09:12:11.014884 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 09:12:11.028188 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 09:12:11.029663 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 09:12:11.030720 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 2 09:12:11.030759 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 09:12:11.038500 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (802)
Jul 2 09:12:11.038524 kernel: BTRFS info (device vda6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:12:11.038534 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:12:11.038545 kernel: BTRFS info (device vda6): using free space tree
Jul 2 09:12:11.030782 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 09:12:11.036564 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 09:12:11.039994 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 09:12:11.043134 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 09:12:11.044486 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 09:12:11.086520 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 09:12:11.089626 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Jul 2 09:12:11.092918 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 09:12:11.096801 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 09:12:11.164390 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 09:12:11.175240 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 09:12:11.177432 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 09:12:11.182132 kernel: BTRFS info (device vda6): last unmount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:12:11.197813 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 09:12:11.199369 ignition[915]: INFO : Ignition 2.18.0
Jul 2 09:12:11.199369 ignition[915]: INFO : Stage: mount
Jul 2 09:12:11.199369 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 09:12:11.202448 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:12:11.202448 ignition[915]: INFO : mount: mount passed
Jul 2 09:12:11.202448 ignition[915]: INFO : Ignition finished successfully
Jul 2 09:12:11.202365 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 09:12:11.209261 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 09:12:11.685696 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 09:12:11.695268 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 09:12:11.700134 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (930)
Jul 2 09:12:11.702236 kernel: BTRFS info (device vda6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:12:11.702286 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:12:11.702299 kernel: BTRFS info (device vda6): using free space tree
Jul 2 09:12:11.704129 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 09:12:11.705208 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 09:12:11.720293 ignition[947]: INFO : Ignition 2.18.0
Jul 2 09:12:11.720293 ignition[947]: INFO : Stage: files
Jul 2 09:12:11.721779 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 09:12:11.721779 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:12:11.721779 ignition[947]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 09:12:11.725012 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 09:12:11.725012 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 09:12:11.725012 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 09:12:11.725012 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 09:12:11.725012 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 09:12:11.724394 unknown[947]: wrote ssh authorized keys file for user: core
Jul 2 09:12:11.731895 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 09:12:11.731895 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 2 09:12:11.997269 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 09:12:12.004470 systemd-networkd[767]: eth0: Gained IPv6LL
Jul 2 09:12:12.044206 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 09:12:12.044206 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 09:12:12.047442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 09:12:12.047442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 09:12:12.047442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 09:12:12.047442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 09:12:12.047442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 09:12:12.047442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 09:12:12.047442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 09:12:12.047442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 09:12:12.047442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 09:12:12.047442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 09:12:12.047442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 09:12:12.047442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 09:12:12.047442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1
Jul 2 09:12:12.353401 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 2 09:12:12.582357 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 09:12:12.582357 ignition[947]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 2 09:12:12.585576 ignition[947]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 09:12:12.585576 ignition[947]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 09:12:12.585576 ignition[947]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 2 09:12:12.585576 ignition[947]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 2 09:12:12.585576 ignition[947]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 09:12:12.585576 ignition[947]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 09:12:12.585576 ignition[947]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 2 09:12:12.585576 ignition[947]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 2 09:12:12.603621 ignition[947]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 09:12:12.606849 ignition[947]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 09:12:12.609205 ignition[947]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 2 09:12:12.609205 ignition[947]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 09:12:12.609205 ignition[947]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 09:12:12.609205 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 09:12:12.609205 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 09:12:12.609205 ignition[947]: INFO : files: files passed
Jul 2 09:12:12.609205 ignition[947]: INFO : Ignition finished successfully
Jul 2 09:12:12.609591 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 09:12:12.622291 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 09:12:12.623875 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 09:12:12.626340 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 09:12:12.626441 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 09:12:12.631028 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 2 09:12:12.633141 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 09:12:12.633141 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 09:12:12.635827 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 09:12:12.634695 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 09:12:12.637068 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 09:12:12.650337 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 09:12:12.667729 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 09:12:12.667828 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 09:12:12.669529 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 09:12:12.670834 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 09:12:12.672175 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 09:12:12.672856 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 09:12:12.686890 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 09:12:12.699290 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 09:12:12.706435 systemd[1]: Stopped target network.target - Network.
Jul 2 09:12:12.707270 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 09:12:12.708595 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 09:12:12.710168 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 09:12:12.711452 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 09:12:12.711562 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 09:12:12.713369 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 09:12:12.714848 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 09:12:12.716145 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 09:12:12.717358 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 09:12:12.718809 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 09:12:12.720221 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 09:12:12.721654 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 09:12:12.723070 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 09:12:12.724533 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 09:12:12.725814 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 09:12:12.726967 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 09:12:12.727088 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 09:12:12.728887 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 09:12:12.730380 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 09:12:12.731798 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 09:12:12.733173 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 09:12:12.734947 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 09:12:12.735069 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 09:12:12.737029 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 09:12:12.737243 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 09:12:12.738736 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 09:12:12.739895 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 09:12:12.745174 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 09:12:12.746213 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 09:12:12.747887 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 09:12:12.749157 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 09:12:12.749245 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 09:12:12.750331 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 09:12:12.750408 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 09:12:12.751557 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 09:12:12.751666 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 09:12:12.752953 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 09:12:12.753072 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 09:12:12.763275 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 09:12:12.764000 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 09:12:12.764149 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 09:12:12.766339 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 09:12:12.767715 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 09:12:12.769004 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 09:12:12.770298 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 09:12:12.770427 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 09:12:12.774382 ignition[1002]: INFO : Ignition 2.18.0
Jul 2 09:12:12.774382 ignition[1002]: INFO : Stage: umount
Jul 2 09:12:12.772163 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 09:12:12.776238 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 09:12:12.776238 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:12:12.772268 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 09:12:12.778592 ignition[1002]: INFO : umount: umount passed
Jul 2 09:12:12.778592 ignition[1002]: INFO : Ignition finished successfully
Jul 2 09:12:12.775191 systemd-networkd[767]: eth0: DHCPv6 lease lost
Jul 2 09:12:12.777458 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 09:12:12.778654 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 09:12:12.780299 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 09:12:12.780395 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 09:12:12.783012 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 09:12:12.783168 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 09:12:12.786072 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 09:12:12.786524 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 09:12:12.786610 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 09:12:12.789306 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 09:12:12.789344 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 09:12:12.790494 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 09:12:12.790543 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 09:12:12.791785 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 09:12:12.791827 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 09:12:12.793037 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 09:12:12.793078 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 09:12:12.794621 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 09:12:12.794666 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 09:12:12.802217 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 09:12:12.802967 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 09:12:12.803041 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 09:12:12.804636 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 09:12:12.804681 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 09:12:12.806162 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 09:12:12.806207 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 09:12:12.807725 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 09:12:12.807766 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 09:12:12.809312 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 09:12:12.817841 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 09:12:12.817940 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 09:12:12.828785 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 09:12:12.828920 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 09:12:12.830828 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 09:12:12.830868 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 09:12:12.832263 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 09:12:12.832293 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 09:12:12.833669 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 09:12:12.833718 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 09:12:12.835761 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 09:12:12.835804 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 09:12:12.837439 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 09:12:12.837485 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 09:12:12.848288 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 09:12:12.849084 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 09:12:12.849156 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 09:12:12.850854 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 09:12:12.850896 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:12:12.852486 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 09:12:12.852564 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 09:12:12.853861 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 09:12:12.853931 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 09:12:12.855844 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 09:12:12.856689 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 09:12:12.856749 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 09:12:12.858727 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 09:12:12.867412 systemd[1]: Switching root.
Jul 2 09:12:12.891931 systemd-journald[238]: Journal stopped
Jul 2 09:12:13.557154 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Jul 2 09:12:13.557206 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 09:12:13.557218 kernel: SELinux: policy capability open_perms=1
Jul 2 09:12:13.557228 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 09:12:13.557237 kernel: SELinux: policy capability always_check_network=0
Jul 2 09:12:13.557247 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 09:12:13.557260 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 09:12:13.557270 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 09:12:13.557280 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 09:12:13.557289 kernel: audit: type=1403 audit(1719911533.028:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 09:12:13.557300 systemd[1]: Successfully loaded SELinux policy in 31.272ms.
Jul 2 09:12:13.557319 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.925ms.
Jul 2 09:12:13.557333 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 09:12:13.557344 systemd[1]: Detected virtualization kvm.
Jul 2 09:12:13.557355 systemd[1]: Detected architecture arm64.
Jul 2 09:12:13.557366 systemd[1]: Detected first boot.
Jul 2 09:12:13.557377 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 09:12:13.557387 zram_generator::config[1046]: No configuration found.
Jul 2 09:12:13.557397 systemd[1]: Populated /etc with preset unit settings.
Jul 2 09:12:13.557407 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 09:12:13.557418 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 2 09:12:13.557428 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 09:12:13.557440 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 09:12:13.557452 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 09:12:13.557465 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 09:12:13.557477 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 09:12:13.557488 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 09:12:13.557498 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 09:12:13.557509 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 09:12:13.557519 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 09:12:13.557530 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 09:12:13.557540 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 09:12:13.557553 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 09:12:13.557565 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 09:12:13.557575 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 09:12:13.557586 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 09:12:13.557596 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 2 09:12:13.557607 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 09:12:13.557617 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 2 09:12:13.557627 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 2 09:12:13.557639 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 2 09:12:13.557650 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 09:12:13.557660 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 09:12:13.557671 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 09:12:13.557681 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 09:12:13.557697 systemd[1]: Reached target swap.target - Swaps.
Jul 2 09:12:13.557707 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 09:12:13.557717 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 09:12:13.557730 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 09:12:13.557740 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 09:12:13.557751 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 09:12:13.557761 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 09:12:13.557772 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 09:12:13.557782 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 09:12:13.557792 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 09:12:13.557802 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 09:12:13.557812 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 09:12:13.557825 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 09:12:13.557835 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 09:12:13.557846 systemd[1]: Reached target machines.target - Containers.
Jul 2 09:12:13.557857 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 09:12:13.557867 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 09:12:13.557877 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 09:12:13.557888 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 09:12:13.557899 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 09:12:13.557910 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 09:12:13.557922 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 09:12:13.557932 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 09:12:13.557942 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 09:12:13.557953 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 09:12:13.557963 kernel: fuse: init (API version 7.39)
Jul 2 09:12:13.557979 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 09:12:13.557990 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 2 09:12:13.558001 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 09:12:13.558013 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 09:12:13.558023 kernel: loop: module loaded
Jul 2 09:12:13.558033 kernel: ACPI: bus type drm_connector registered
Jul 2 09:12:13.558042 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 09:12:13.558052 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 09:12:13.558063 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 09:12:13.558073 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 09:12:13.558083 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 09:12:13.558093 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 09:12:13.558133 systemd[1]: Stopped verity-setup.service.
Jul 2 09:12:13.558146 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 09:12:13.558175 systemd-journald[1116]: Collecting audit messages is disabled.
Jul 2 09:12:13.558197 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 09:12:13.558207 systemd-journald[1116]: Journal started
Jul 2 09:12:13.558227 systemd-journald[1116]: Runtime Journal (/run/log/journal/abc4bc1cd4f742c5bf1122023fca368a) is 5.9M, max 47.3M, 41.4M free.
Jul 2 09:12:13.374396 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 09:12:13.393561 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 2 09:12:13.393899 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 09:12:13.560144 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 09:12:13.560635 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 09:12:13.561523 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 09:12:13.562513 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 09:12:13.563581 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 09:12:13.565191 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 09:12:13.566446 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 09:12:13.567615 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 09:12:13.567746 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 09:12:13.568995 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 09:12:13.569152 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 09:12:13.570326 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 09:12:13.570460 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 09:12:13.573399 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 09:12:13.573531 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 09:12:13.574757 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 09:12:13.575053 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 09:12:13.576291 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 09:12:13.576428 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 09:12:13.577446 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 09:12:13.578753 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 09:12:13.580049 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 09:12:13.592418 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 09:12:13.602183 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 09:12:13.604061 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 09:12:13.604912 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 09:12:13.604939 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 09:12:13.606744 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 09:12:13.608619 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 09:12:13.610419 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 09:12:13.611253 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 09:12:13.612566 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 09:12:13.614206 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 09:12:13.615022 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 09:12:13.617299 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 09:12:13.618183 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 09:12:13.620393 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 09:12:13.626556 systemd-journald[1116]: Time spent on flushing to /var/log/journal/abc4bc1cd4f742c5bf1122023fca368a is 22.252ms for 851 entries.
Jul 2 09:12:13.626556 systemd-journald[1116]: System Journal (/var/log/journal/abc4bc1cd4f742c5bf1122023fca368a) is 8.0M, max 195.6M, 187.6M free.
Jul 2 09:12:13.660901 systemd-journald[1116]: Received client request to flush runtime journal.
Jul 2 09:12:13.660949 kernel: loop0: detected capacity change from 0 to 193208
Jul 2 09:12:13.626272 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 09:12:13.628753 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 09:12:13.631064 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 09:12:13.632381 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 09:12:13.633368 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 09:12:13.634439 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 09:12:13.653262 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 09:12:13.654405 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 09:12:13.656036 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 09:12:13.658049 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 09:12:13.661282 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 09:12:13.663152 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 09:12:13.673409 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 09:12:13.675133 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 09:12:13.677791 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 2 09:12:13.687192 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 09:12:13.690144 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 09:12:13.692237 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 09:12:13.698284 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 09:12:13.705144 kernel: loop1: detected capacity change from 0 to 113672
Jul 2 09:12:13.712805 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Jul 2 09:12:13.712825 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Jul 2 09:12:13.716591 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 09:12:13.730164 kernel: loop2: detected capacity change from 0 to 59672
Jul 2 09:12:13.761164 kernel: loop3: detected capacity change from 0 to 193208
Jul 2 09:12:13.767136 kernel: loop4: detected capacity change from 0 to 113672
Jul 2 09:12:13.771156 kernel: loop5: detected capacity change from 0 to 59672
Jul 2 09:12:13.774072 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 2 09:12:13.775249 (sd-merge)[1182]: Merged extensions into '/usr'.
Jul 2 09:12:13.780033 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 09:12:13.780048 systemd[1]: Reloading...
Jul 2 09:12:13.818774 zram_generator::config[1203]: No configuration found.
Jul 2 09:12:13.883199 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 09:12:13.920365 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 09:12:13.961615 systemd[1]: Reloading finished in 181 ms.
Jul 2 09:12:13.990446 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 09:12:13.991594 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 09:12:14.003449 systemd[1]: Starting ensure-sysext.service...
Jul 2 09:12:14.005013 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 09:12:14.015932 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)...
Jul 2 09:12:14.015947 systemd[1]: Reloading...
Jul 2 09:12:14.026844 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 09:12:14.027117 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 09:12:14.027742 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 09:12:14.027957 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Jul 2 09:12:14.028012 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Jul 2 09:12:14.030611 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 09:12:14.030731 systemd-tmpfiles[1241]: Skipping /boot
Jul 2 09:12:14.039433 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 09:12:14.040743 systemd-tmpfiles[1241]: Skipping /boot
Jul 2 09:12:14.055134 zram_generator::config[1267]: No configuration found.
Jul 2 09:12:14.142167 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 09:12:14.183539 systemd[1]: Reloading finished in 167 ms.
Jul 2 09:12:14.198138 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 09:12:14.205587 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 09:12:14.212631 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 09:12:14.214843 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 09:12:14.216919 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 09:12:14.222351 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 09:12:14.226437 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 09:12:14.229382 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 09:12:14.235025 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 09:12:14.236576 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 09:12:14.240134 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 09:12:14.243747 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 09:12:14.244617 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 09:12:14.251682 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 09:12:14.253622 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 09:12:14.254348 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 09:12:14.256384 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 09:12:14.257195 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 09:12:14.259598 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 09:12:14.259722 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 09:12:14.268319 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 09:12:14.273100 systemd-udevd[1313]: Using default interface naming scheme 'v255'.
Jul 2 09:12:14.279363 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 09:12:14.281891 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 09:12:14.286371 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 09:12:14.288250 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 09:12:14.288922 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 09:12:14.290422 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 09:12:14.296914 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 09:12:14.298759 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 09:12:14.300873 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 09:12:14.302666 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 09:12:14.304354 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 09:12:14.306672 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 09:12:14.306795 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 09:12:14.309893 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 09:12:14.310030 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 09:12:14.311893 augenrules[1350]: No rules
Jul 2 09:12:14.319151 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 09:12:14.329081 systemd[1]: Finished ensure-sysext.service.
Jul 2 09:12:14.336137 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1363)
Jul 2 09:12:14.337090 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 2 09:12:14.337488 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 09:12:14.342299 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 09:12:14.347304 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 09:12:14.348129 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1343)
Jul 2 09:12:14.349893 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 09:12:14.355274 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 09:12:14.356189 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 09:12:14.360674 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 09:12:14.362040 systemd-resolved[1307]: Positive Trust Anchors:
Jul 2 09:12:14.362057 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 09:12:14.362087 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 09:12:14.364406 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 2 09:12:14.367251 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 09:12:14.368173 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 09:12:14.368671 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 09:12:14.370003 systemd-resolved[1307]: Defaulting to hostname 'linux'.
Jul 2 09:12:14.370416 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 09:12:14.371589 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 09:12:14.371745 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 09:12:14.373023 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 09:12:14.374385 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 09:12:14.374507 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 09:12:14.376043 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 09:12:14.376173 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 09:12:14.388129 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 09:12:14.406659 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 09:12:14.408304 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 09:12:14.429328 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 09:12:14.430535 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 09:12:14.430602 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 09:12:14.450665 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 2 09:12:14.451783 systemd-networkd[1375]: lo: Link UP
Jul 2 09:12:14.451791 systemd-networkd[1375]: lo: Gained carrier
Jul 2 09:12:14.451942 systemd[1]: Reached target time-set.target - System Time Set.
Jul 2 09:12:14.452496 systemd-networkd[1375]: Enumeration completed
Jul 2 09:12:14.452841 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 09:12:14.453581 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 09:12:14.453591 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 09:12:14.454217 systemd-networkd[1375]: eth0: Link UP
Jul 2 09:12:14.454227 systemd-networkd[1375]: eth0: Gained carrier
Jul 2 09:12:14.454239 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 09:12:14.457157 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 09:12:14.461032 systemd[1]: Reached target network.target - Network.
Jul 2 09:12:14.471158 systemd-networkd[1375]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 09:12:14.471299 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 09:12:14.473035 systemd-timesyncd[1376]: Network configuration changed, trying to establish connection.
Jul 2 09:12:14.473387 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 09:12:14.055279 systemd-timesyncd[1376]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 2 09:12:14.060537 systemd-journald[1116]: Time jumped backwards, rotating.
Jul 2 09:12:14.055334 systemd-timesyncd[1376]: Initial clock synchronization to Tue 2024-07-02 09:12:14.055183 UTC.
Jul 2 09:12:14.055993 systemd-resolved[1307]: Clock change detected. Flushing caches.
Jul 2 09:12:14.060782 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 09:12:14.063104 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 09:12:14.091055 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 09:12:14.099019 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:12:14.121464 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 09:12:14.122586 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 09:12:14.123551 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 09:12:14.124440 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 09:12:14.125337 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 09:12:14.126398 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 09:12:14.127402 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 09:12:14.128310 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 09:12:14.129309 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 09:12:14.129345 systemd[1]: Reached target paths.target - Path Units.
Jul 2 09:12:14.130002 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 09:12:14.131381 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 09:12:14.133514 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 09:12:14.141882 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 09:12:14.143761 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 09:12:14.144987 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 09:12:14.145873 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 09:12:14.146602 systemd[1]: Reached target basic.target - Basic System.
Jul 2 09:12:14.147303 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 2 09:12:14.147332 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 2 09:12:14.148184 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 2 09:12:14.149854 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 2 09:12:14.151080 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 09:12:14.154154 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 2 09:12:14.156431 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 2 09:12:14.158260 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 2 09:12:14.159197 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 2 09:12:14.161765 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 2 09:12:14.164718 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 2 09:12:14.170877 jq[1411]: false
Jul 2 09:12:14.166664 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 2 09:12:14.171729 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 2 09:12:14.177828 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 09:12:14.178257 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 09:12:14.179326 systemd[1]: Starting update-engine.service - Update Engine...
Jul 2 09:12:14.180093 extend-filesystems[1412]: Found loop3
Jul 2 09:12:14.181140 extend-filesystems[1412]: Found loop4
Jul 2 09:12:14.185121 extend-filesystems[1412]: Found loop5
Jul 2 09:12:14.185121 extend-filesystems[1412]: Found vda
Jul 2 09:12:14.185121 extend-filesystems[1412]: Found vda1
Jul 2 09:12:14.185121 extend-filesystems[1412]: Found vda2
Jul 2 09:12:14.185121 extend-filesystems[1412]: Found vda3
Jul 2 09:12:14.185121 extend-filesystems[1412]: Found usr
Jul 2 09:12:14.185121 extend-filesystems[1412]: Found vda4
Jul 2 09:12:14.185121 extend-filesystems[1412]: Found vda6
Jul 2 09:12:14.185121 extend-filesystems[1412]: Found vda7
Jul 2 09:12:14.185121 extend-filesystems[1412]: Found vda9
Jul 2 09:12:14.185121 extend-filesystems[1412]: Checking size of /dev/vda9
Jul 2 09:12:14.181722 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 2 09:12:14.195898 dbus-daemon[1410]: [system] SELinux support is enabled
Jul 2 09:12:14.183789 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 09:12:14.199802 jq[1424]: true
Jul 2 09:12:14.187076 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 09:12:14.187257 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 2 09:12:14.188176 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 09:12:14.188313 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 2 09:12:14.198356 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 2 09:12:14.203196 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 09:12:14.205123 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 2 09:12:14.209259 extend-filesystems[1412]: Resized partition /dev/vda9
Jul 2 09:12:14.219237 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1352)
Jul 2 09:12:14.222618 (ntainerd)[1440]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 2 09:12:14.222836 tar[1430]: linux-arm64/helm
Jul 2 09:12:14.231753 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 09:12:14.237490 extend-filesystems[1439]: resize2fs 1.47.0 (5-Feb-2023)
Jul 2 09:12:14.241294 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 2 09:12:14.231796 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 2 09:12:14.234120 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 09:12:14.234137 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 2 09:12:14.251021 jq[1438]: true
Jul 2 09:12:14.260891 systemd-logind[1419]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 2 09:12:14.261433 systemd-logind[1419]: New seat seat0.
Jul 2 09:12:14.262070 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 2 09:12:14.263363 update_engine[1421]: I0702 09:12:14.263149 1421 main.cc:92] Flatcar Update Engine starting
Jul 2 09:12:14.272934 systemd[1]: Started update-engine.service - Update Engine.
Jul 2 09:12:14.276883 update_engine[1421]: I0702 09:12:14.276739 1421 update_check_scheduler.cc:74] Next update check in 6m59s
Jul 2 09:12:14.279103 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 2 09:12:14.281295 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 2 09:12:14.306572 extend-filesystems[1439]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 2 09:12:14.306572 extend-filesystems[1439]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 2 09:12:14.306572 extend-filesystems[1439]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 2 09:12:14.313487 extend-filesystems[1412]: Resized filesystem in /dev/vda9
Jul 2 09:12:14.307305 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 09:12:14.307471 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 2 09:12:14.316391 bash[1464]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 09:12:14.318001 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 2 09:12:14.319682 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 2 09:12:14.344294 locksmithd[1460]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 09:12:14.425292 containerd[1440]: time="2024-07-02T09:12:14.424886136Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Jul 2 09:12:14.452275 containerd[1440]: time="2024-07-02T09:12:14.452232656Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 2 09:12:14.452433 containerd[1440]: time="2024-07-02T09:12:14.452275216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 09:12:14.453866 containerd[1440]: time="2024-07-02T09:12:14.453831576Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 09:12:14.454816 containerd[1440]: time="2024-07-02T09:12:14.453980336Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 09:12:14.454816 containerd[1440]: time="2024-07-02T09:12:14.454203416Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 09:12:14.454816 containerd[1440]: time="2024-07-02T09:12:14.454220816Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 09:12:14.454816 containerd[1440]: time="2024-07-02T09:12:14.454289256Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 2 09:12:14.454816 containerd[1440]: time="2024-07-02T09:12:14.454332576Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 09:12:14.454816 containerd[1440]: time="2024-07-02T09:12:14.454344296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 09:12:14.454816 containerd[1440]: time="2024-07-02T09:12:14.454401696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 09:12:14.454816 containerd[1440]: time="2024-07-02T09:12:14.454569216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 09:12:14.454816 containerd[1440]: time="2024-07-02T09:12:14.454585576Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 09:12:14.454816 containerd[1440]: time="2024-07-02T09:12:14.454595896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 09:12:14.455576 containerd[1440]: time="2024-07-02T09:12:14.454691616Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 09:12:14.455576 containerd[1440]: time="2024-07-02T09:12:14.454736656Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 09:12:14.455576 containerd[1440]: time="2024-07-02T09:12:14.454746496Z" level=info msg="metadata content store policy set" policy=shared Jul 2 09:12:14.457660 containerd[1440]: time="2024-07-02T09:12:14.457631016Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 09:12:14.457721 containerd[1440]: time="2024-07-02T09:12:14.457669136Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 09:12:14.457721 containerd[1440]: time="2024-07-02T09:12:14.457681776Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 09:12:14.457757 containerd[1440]: time="2024-07-02T09:12:14.457713216Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 09:12:14.457757 containerd[1440]: time="2024-07-02T09:12:14.457736056Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 09:12:14.457757 containerd[1440]: time="2024-07-02T09:12:14.457746336Z" level=info msg="NRI interface is disabled by configuration." Jul 2 09:12:14.457811 containerd[1440]: time="2024-07-02T09:12:14.457757376Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 09:12:14.457890 containerd[1440]: time="2024-07-02T09:12:14.457867816Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 09:12:14.457929 containerd[1440]: time="2024-07-02T09:12:14.457890496Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 09:12:14.457929 containerd[1440]: time="2024-07-02T09:12:14.457912096Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 09:12:14.457929 containerd[1440]: time="2024-07-02T09:12:14.457926256Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 09:12:14.457981 containerd[1440]: time="2024-07-02T09:12:14.457939696Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 09:12:14.457981 containerd[1440]: time="2024-07-02T09:12:14.457957896Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 09:12:14.457981 containerd[1440]: time="2024-07-02T09:12:14.457976176Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 09:12:14.458030 containerd[1440]: time="2024-07-02T09:12:14.457988496Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 09:12:14.458030 containerd[1440]: time="2024-07-02T09:12:14.458001416Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
Jul 2 09:12:14.458030 containerd[1440]: time="2024-07-02T09:12:14.458001416Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 09:12:14.458030 containerd[1440]: time="2024-07-02T09:12:14.458013616Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 09:12:14.458030 containerd[1440]: time="2024-07-02T09:12:14.458024536Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 09:12:14.458114 containerd[1440]: time="2024-07-02T09:12:14.458056096Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 09:12:14.458254 containerd[1440]: time="2024-07-02T09:12:14.458153136Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 09:12:14.458375 containerd[1440]: time="2024-07-02T09:12:14.458352736Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 09:12:14.458407 containerd[1440]: time="2024-07-02T09:12:14.458380096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 09:12:14.458407 containerd[1440]: time="2024-07-02T09:12:14.458395376Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 2 09:12:14.458449 containerd[1440]: time="2024-07-02T09:12:14.458416536Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 09:12:14.458539 containerd[1440]: time="2024-07-02T09:12:14.458527576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 09:12:14.458573 containerd[1440]: time="2024-07-02T09:12:14.458541496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 09:12:14.458573 containerd[1440]: time="2024-07-02T09:12:14.458552896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 09:12:14.458573 containerd[1440]: time="2024-07-02T09:12:14.458563576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 09:12:14.458627 containerd[1440]: time="2024-07-02T09:12:14.458575256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 09:12:14.458627 containerd[1440]: time="2024-07-02T09:12:14.458587056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 09:12:14.458627 containerd[1440]: time="2024-07-02T09:12:14.458598096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 09:12:14.458627 containerd[1440]: time="2024-07-02T09:12:14.458609256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 09:12:14.458627 containerd[1440]: time="2024-07-02T09:12:14.458621336Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 09:12:14.458933 containerd[1440]: time="2024-07-02T09:12:14.458745896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
type=io.containerd.grpc.v1 Jul 2 09:12:14.458933 containerd[1440]: time="2024-07-02T09:12:14.458785976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 09:12:14.458933 containerd[1440]: time="2024-07-02T09:12:14.458799056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 09:12:14.458933 containerd[1440]: time="2024-07-02T09:12:14.458810136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 09:12:14.458933 containerd[1440]: time="2024-07-02T09:12:14.458823896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 09:12:14.458933 containerd[1440]: time="2024-07-02T09:12:14.458835616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 09:12:14.458933 containerd[1440]: time="2024-07-02T09:12:14.458845976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 09:12:14.459223 containerd[1440]: time="2024-07-02T09:12:14.459162856Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock 
RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 09:12:14.459223 containerd[1440]: time="2024-07-02T09:12:14.459222376Z" level=info msg="Connect containerd service" Jul 2 09:12:14.459354 containerd[1440]: time="2024-07-02T09:12:14.459246856Z" level=info msg="using legacy CRI server" Jul 2 09:12:14.459354 containerd[1440]: time="2024-07-02T09:12:14.459253816Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 09:12:14.459460 containerd[1440]: time="2024-07-02T09:12:14.459388016Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 09:12:14.460015 containerd[1440]: time="2024-07-02T09:12:14.459988936Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 09:12:14.460073 containerd[1440]: time="2024-07-02T09:12:14.460055656Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 09:12:14.460554 containerd[1440]: time="2024-07-02T09:12:14.460339336Z" level=info msg="Start subscribing containerd event" Jul 2 09:12:14.460714 containerd[1440]: time="2024-07-02T09:12:14.460695456Z" level=info msg="Start recovering state" Jul 2 09:12:14.460877 containerd[1440]: time="2024-07-02T09:12:14.460860136Z" level=info msg="Start event monitor" Jul 2 09:12:14.460995 containerd[1440]: time="2024-07-02T09:12:14.460979776Z" level=info msg="Start snapshots syncer" Jul 2 09:12:14.461634 containerd[1440]: time="2024-07-02T09:12:14.461058696Z" level=info msg="Start cni network conf syncer for default" Jul 2 09:12:14.461634 containerd[1440]: time="2024-07-02T09:12:14.461074856Z" level=info msg="Start streaming server" Jul 2 09:12:14.462215 containerd[1440]: time="2024-07-02T09:12:14.462191056Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 09:12:14.462215 containerd[1440]: time="2024-07-02T09:12:14.462208336Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 09:12:14.462293 containerd[1440]: time="2024-07-02T09:12:14.462220176Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 09:12:14.463069 containerd[1440]: time="2024-07-02T09:12:14.462415016Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 09:12:14.463069 containerd[1440]: time="2024-07-02T09:12:14.462459456Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 09:12:14.463069 containerd[1440]: time="2024-07-02T09:12:14.462505136Z" level=info msg="containerd successfully booted in 0.038779s" Jul 2 09:12:14.462590 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 09:12:14.614736 tar[1430]: linux-arm64/LICENSE Jul 2 09:12:14.614736 tar[1430]: linux-arm64/README.md Jul 2 09:12:14.626226 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 09:12:15.196467 sshd_keygen[1431]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 09:12:15.214633 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
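At this point containerd is up and serving its gRPC API on /run/containerd/containerd.sock (plus the ttrpc variant), as the "serving..." and "successfully booted" entries above show. A minimal Go sketch of a client-side check against that socket, assuming the github.com/containerd/containerd client library is available; the k8s.io namespace is the one the CRI plugin uses for its images and containers:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the socket the log shows containerd serving on.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed resources live in the k8s.io namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        v, err := client.Version(ctx)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("containerd", v.Version, v.Revision)
    }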
Jul 2 09:12:15.231635 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 09:12:15.236340 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 09:12:15.236524 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 09:12:15.239442 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 09:12:15.251364 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 09:12:15.254354 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 09:12:15.256573 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 2 09:12:15.257952 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 09:12:15.425151 systemd-networkd[1375]: eth0: Gained IPv6LL Jul 2 09:12:15.428190 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 09:12:15.430119 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 09:12:15.440441 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 2 09:12:15.442609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:12:15.444472 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 09:12:15.458585 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 2 09:12:15.459436 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 2 09:12:15.460648 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 09:12:15.463709 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 09:12:15.911489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:12:15.912804 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 09:12:15.914444 systemd[1]: Startup finished in 531ms (kernel) + 4.330s (initrd) + 3.338s (userspace) = 8.200s. Jul 2 09:12:15.915567 (kubelet)[1523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 09:12:16.366726 kubelet[1523]: E0702 09:12:16.366599 1523 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 09:12:16.369654 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 09:12:16.369798 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 09:12:21.455645 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 09:12:21.456737 systemd[1]: Started sshd@0-10.0.0.88:22-10.0.0.1:53124.service - OpenSSH per-connection server daemon (10.0.0.1:53124). Jul 2 09:12:21.509216 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 53124 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:12:21.510815 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:12:21.521103 systemd-logind[1419]: New session 1 of user core. Jul 2 09:12:21.522051 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 09:12:21.534366 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jul 2 09:12:21.542918 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 09:12:21.544960 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 09:12:21.550916 (systemd)[1541]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:12:21.626991 systemd[1541]: Queued start job for default target default.target. Jul 2 09:12:21.634908 systemd[1541]: Created slice app.slice - User Application Slice. Jul 2 09:12:21.634943 systemd[1541]: Reached target paths.target - Paths. Jul 2 09:12:21.634956 systemd[1541]: Reached target timers.target - Timers. Jul 2 09:12:21.636011 systemd[1541]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 09:12:21.644741 systemd[1541]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 09:12:21.644799 systemd[1541]: Reached target sockets.target - Sockets. Jul 2 09:12:21.644811 systemd[1541]: Reached target basic.target - Basic System. Jul 2 09:12:21.644843 systemd[1541]: Reached target default.target - Main User Target. Jul 2 09:12:21.644868 systemd[1541]: Startup finished in 88ms. Jul 2 09:12:21.645142 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 09:12:21.646340 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 09:12:21.709204 systemd[1]: Started sshd@1-10.0.0.88:22-10.0.0.1:53140.service - OpenSSH per-connection server daemon (10.0.0.1:53140). Jul 2 09:12:21.740659 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 53140 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:12:21.741641 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:12:21.745873 systemd-logind[1419]: New session 2 of user core. Jul 2 09:12:21.757227 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 09:12:21.811152 sshd[1552]: pam_unix(sshd:session): session closed for user core Jul 2 09:12:21.818270 systemd[1]: sshd@1-10.0.0.88:22-10.0.0.1:53140.service: Deactivated successfully. Jul 2 09:12:21.820397 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 09:12:21.823756 systemd-logind[1419]: Session 2 logged out. Waiting for processes to exit. Jul 2 09:12:21.831220 systemd[1]: Started sshd@2-10.0.0.88:22-10.0.0.1:53142.service - OpenSSH per-connection server daemon (10.0.0.1:53142). Jul 2 09:12:21.832962 systemd-logind[1419]: Removed session 2. Jul 2 09:12:21.863609 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 53142 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:12:21.864811 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:12:21.871922 systemd-logind[1419]: New session 3 of user core. Jul 2 09:12:21.881179 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 09:12:21.929411 sshd[1559]: pam_unix(sshd:session): session closed for user core Jul 2 09:12:21.941145 systemd[1]: sshd@2-10.0.0.88:22-10.0.0.1:53142.service: Deactivated successfully. Jul 2 09:12:21.943305 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 09:12:21.944598 systemd-logind[1419]: Session 3 logged out. Waiting for processes to exit. Jul 2 09:12:21.965512 systemd[1]: Started sshd@3-10.0.0.88:22-10.0.0.1:53148.service - OpenSSH per-connection server daemon (10.0.0.1:53148). Jul 2 09:12:21.966683 systemd-logind[1419]: Removed session 3. 
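The repeating pattern above (Accepted publickey for core, a new logind session, a session-N.scope, then session closed) is the standard OpenSSH plus PAM plus systemd-logind login cycle, driven here by a client at 10.0.0.1 reconnecting for each command. A hedged Go sketch of the client side of one such cycle, assuming golang.org/x/crypto/ssh; the key path is hypothetical and host-key verification is disabled purely for illustration:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Hypothetical path to the private key whose fingerprint the log records.
        key, err := os.ReadFile("/home/core/.ssh/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }

        cfg := &ssh.ClientConfig{
            User:            "core",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
        }

        // The server side logs each of these as sshd@N-10.0.0.88:22-10.0.0.1:PORT.
        client, err := ssh.Dial("tcp", "10.0.0.88:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        out, err := session.CombinedOutput("true")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("command ran, output: %q\n", out)
    }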
Jul 2 09:12:21.996124 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 53148 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:12:21.997204 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:12:22.003641 systemd-logind[1419]: New session 4 of user core. Jul 2 09:12:22.016201 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 09:12:22.067834 sshd[1566]: pam_unix(sshd:session): session closed for user core Jul 2 09:12:22.082239 systemd[1]: sshd@3-10.0.0.88:22-10.0.0.1:53148.service: Deactivated successfully. Jul 2 09:12:22.083627 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 09:12:22.086191 systemd-logind[1419]: Session 4 logged out. Waiting for processes to exit. Jul 2 09:12:22.087804 systemd[1]: Started sshd@4-10.0.0.88:22-10.0.0.1:53158.service - OpenSSH per-connection server daemon (10.0.0.1:53158). Jul 2 09:12:22.088623 systemd-logind[1419]: Removed session 4. Jul 2 09:12:22.122104 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 53158 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:12:22.123479 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:12:22.128388 systemd-logind[1419]: New session 5 of user core. Jul 2 09:12:22.137169 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 09:12:22.197499 sudo[1576]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 09:12:22.197732 sudo[1576]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 09:12:22.212353 sudo[1576]: pam_unix(sudo:session): session closed for user root Jul 2 09:12:22.214000 sshd[1573]: pam_unix(sshd:session): session closed for user core Jul 2 09:12:22.227816 systemd[1]: sshd@4-10.0.0.88:22-10.0.0.1:53158.service: Deactivated successfully. Jul 2 09:12:22.229156 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 09:12:22.231875 systemd-logind[1419]: Session 5 logged out. Waiting for processes to exit. Jul 2 09:12:22.233145 systemd[1]: Started sshd@5-10.0.0.88:22-10.0.0.1:53170.service - OpenSSH per-connection server daemon (10.0.0.1:53170). Jul 2 09:12:22.234484 systemd-logind[1419]: Removed session 5. Jul 2 09:12:22.270510 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 53170 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:12:22.271640 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:12:22.277231 systemd-logind[1419]: New session 6 of user core. Jul 2 09:12:22.293184 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 09:12:22.344056 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 09:12:22.344286 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 09:12:22.346982 sudo[1585]: pam_unix(sudo:session): session closed for user root Jul 2 09:12:22.351235 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 09:12:22.351446 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 09:12:22.366299 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 09:12:22.367365 auditctl[1588]: No rules Jul 2 09:12:22.368126 systemd[1]: audit-rules.service: Deactivated successfully. 
Jul 2 09:12:22.370084 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 09:12:22.371714 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 09:12:22.396251 augenrules[1606]: No rules Jul 2 09:12:22.397246 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 09:12:22.398589 sudo[1584]: pam_unix(sudo:session): session closed for user root Jul 2 09:12:22.400155 sshd[1581]: pam_unix(sshd:session): session closed for user core Jul 2 09:12:22.410553 systemd[1]: sshd@5-10.0.0.88:22-10.0.0.1:53170.service: Deactivated successfully. Jul 2 09:12:22.411919 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 09:12:22.415226 systemd-logind[1419]: Session 6 logged out. Waiting for processes to exit. Jul 2 09:12:22.422168 systemd[1]: Started sshd@6-10.0.0.88:22-10.0.0.1:53172.service - OpenSSH per-connection server daemon (10.0.0.1:53172). Jul 2 09:12:22.425069 systemd-logind[1419]: Removed session 6. Jul 2 09:12:22.452208 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 53172 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:12:22.453416 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:12:22.457470 systemd-logind[1419]: New session 7 of user core. Jul 2 09:12:22.465188 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 09:12:22.515653 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 09:12:22.515886 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 09:12:22.630194 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 09:12:22.630308 (dockerd)[1629]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 2 09:12:22.869879 dockerd[1629]: time="2024-07-02T09:12:22.869483296Z" level=info msg="Starting up" Jul 2 09:12:22.961094 dockerd[1629]: time="2024-07-02T09:12:22.961046336Z" level=info msg="Loading containers: start." Jul 2 09:12:23.059068 kernel: Initializing XFRM netlink socket Jul 2 09:12:23.127894 systemd-networkd[1375]: docker0: Link UP Jul 2 09:12:23.138165 dockerd[1629]: time="2024-07-02T09:12:23.138133336Z" level=info msg="Loading containers: done." Jul 2 09:12:23.193583 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1745084749-merged.mount: Deactivated successfully. Jul 2 09:12:23.195189 dockerd[1629]: time="2024-07-02T09:12:23.194557736Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 09:12:23.195189 dockerd[1629]: time="2024-07-02T09:12:23.194760616Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 09:12:23.195189 dockerd[1629]: time="2024-07-02T09:12:23.194873176Z" level=info msg="Daemon has completed initialization" Jul 2 09:12:23.225418 dockerd[1629]: time="2024-07-02T09:12:23.225337976Z" level=info msg="API listen on /run/docker.sock" Jul 2 09:12:23.225516 systemd[1]: Started docker.service - Docker Application Container Engine. 
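The Docker daemon has now finished initialization and is listening on /run/docker.sock. A minimal sketch of a health check against that endpoint, assuming the github.com/docker/docker/client Go SDK; version negotiation is enabled so the client works regardless of the daemon's API version (24.0.9 here):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/docker/docker/client"
    )

    func main() {
        // Talk to the socket the daemon announced with "API listen on /run/docker.sock".
        cli, err := client.NewClientWithOpts(
            client.WithHost("unix:///run/docker.sock"),
            client.WithAPIVersionNegotiation(),
        )
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()

        ping, err := cli.Ping(context.Background())
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("API version:", ping.APIVersion, "OS type:", ping.OSType)
    }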
Jul 2 09:12:23.823068 containerd[1440]: time="2024-07-02T09:12:23.822992576Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 2 09:12:24.476502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2520257266.mount: Deactivated successfully. Jul 2 09:12:25.588922 containerd[1440]: time="2024-07-02T09:12:25.588873376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:12:25.589588 containerd[1440]: time="2024-07-02T09:12:25.589545536Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=31671540" Jul 2 09:12:25.591060 containerd[1440]: time="2024-07-02T09:12:25.590995696Z" level=info msg="ImageCreate event name:\"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:12:25.593484 containerd[1440]: time="2024-07-02T09:12:25.593416696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:12:25.594671 containerd[1440]: time="2024-07-02T09:12:25.594641856Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"31668338\" in 1.77160976s" Jul 2 09:12:25.594734 containerd[1440]: time="2024-07-02T09:12:25.594683536Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\"" Jul 2 09:12:25.614195 containerd[1440]: time="2024-07-02T09:12:25.614148496Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jul 2 09:12:26.620149 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 09:12:26.643455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:12:26.733928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:12:26.737523 (kubelet)[1841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 09:12:26.787162 kubelet[1841]: E0702 09:12:26.787071 1841 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 09:12:26.791771 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 09:12:26.791912 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
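The kubelet failure above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is normally written by kubeadm init or kubeadm join, so until that runs the unit exits with status 1 and systemd's restart counter keeps climbing (it reaches 2 further down). A small illustrative Go sketch, not the kubelet's actual code, showing why the error surfaces as a wrapped "no such file or directory" rather than a parse error:

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    func loadKubeletConfig(path string) ([]byte, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            // Wrap so callers can still detect the underlying ENOENT.
            return nil, fmt.Errorf("failed to read kubelet config file %q: %w", path, err)
        }
        return data, nil
    }

    func main() {
        _, err := loadKubeletConfig("/var/lib/kubelet/config.yaml")
        if errors.Is(err, fs.ErrNotExist) {
            fmt.Println("config missing; kubeadm has not bootstrapped this node yet:", err)
            os.Exit(1)
        }
    }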
Jul 2 09:12:27.022470 containerd[1440]: time="2024-07-02T09:12:27.022342936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:12:27.022867 containerd[1440]: time="2024-07-02T09:12:27.022829736Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=28893120" Jul 2 09:12:27.023977 containerd[1440]: time="2024-07-02T09:12:27.023936176Z" level=info msg="ImageCreate event name:\"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:12:27.027110 containerd[1440]: time="2024-07-02T09:12:27.027070736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:12:27.028078 containerd[1440]: time="2024-07-02T09:12:27.028010776Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"30445463\" in 1.4138164s" Jul 2 09:12:27.028078 containerd[1440]: time="2024-07-02T09:12:27.028069176Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\"" Jul 2 09:12:27.048137 containerd[1440]: time="2024-07-02T09:12:27.048097296Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jul 2 09:12:27.949751 containerd[1440]: time="2024-07-02T09:12:27.949695416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:12:27.950104 containerd[1440]: time="2024-07-02T09:12:27.950076016Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=15358440" Jul 2 09:12:27.951130 containerd[1440]: time="2024-07-02T09:12:27.951067176Z" level=info msg="ImageCreate event name:\"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:12:27.954857 containerd[1440]: time="2024-07-02T09:12:27.954814536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:12:27.955790 containerd[1440]: time="2024-07-02T09:12:27.955755096Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"16910801\" in 907.62032ms" Jul 2 09:12:27.955832 containerd[1440]: time="2024-07-02T09:12:27.955788456Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\"" Jul 2 09:12:27.974513 containerd[1440]: 
time="2024-07-02T09:12:27.974478536Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 09:12:29.388321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1606533647.mount: Deactivated successfully. Jul 2 09:12:29.701811 containerd[1440]: time="2024-07-02T09:12:29.701687736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:12:29.702340 containerd[1440]: time="2024-07-02T09:12:29.702291576Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=24772463" Jul 2 09:12:29.702972 containerd[1440]: time="2024-07-02T09:12:29.702917896Z" level=info msg="ImageCreate event name:\"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:12:29.705344 containerd[1440]: time="2024-07-02T09:12:29.705295736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:12:29.705980 containerd[1440]: time="2024-07-02T09:12:29.705751216Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"24771480\" in 1.73123348s" Jul 2 09:12:29.705980 containerd[1440]: time="2024-07-02T09:12:29.705788616Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\"" Jul 2 09:12:29.724185 containerd[1440]: time="2024-07-02T09:12:29.724144696Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 09:12:30.169027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3456266490.mount: Deactivated successfully. 
Jul 2 09:12:30.173112 containerd[1440]: time="2024-07-02T09:12:30.173063656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:12:30.173835 containerd[1440]: time="2024-07-02T09:12:30.173586656Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jul 2 09:12:30.174843 containerd[1440]: time="2024-07-02T09:12:30.174609936Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:12:30.177287 containerd[1440]: time="2024-07-02T09:12:30.177248536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:12:30.178840 containerd[1440]: time="2024-07-02T09:12:30.178804616Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 454.61536ms" Jul 2 09:12:30.178888 containerd[1440]: time="2024-07-02T09:12:30.178839736Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jul 2 09:12:30.197250 containerd[1440]: time="2024-07-02T09:12:30.197197696Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 09:12:30.850684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4100986126.mount: Deactivated successfully. 
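Each pull above follows the same flow: the tag is resolved to a pinned repo digest, layers are fetched and unpacked into the overlayfs snapshotter (via the tmpmounts that are cleaned up afterwards), and the final entry reports the content size. A sketch of the equivalent pull with the containerd Go client, under the same assumptions as the earlier client example:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Pull and unpack, like the PullImage requests in the log.
        img, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }

        size, err := img.Size(ctx)
        if err != nil {
            log.Fatal(err)
        }
        // Name, pinned digest, and total content size, mirroring the log fields.
        fmt.Println(img.Name(), img.Target().Digest, size)
    }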
Jul 2 09:12:32.185725 containerd[1440]: time="2024-07-02T09:12:32.185664456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:12:32.186609 containerd[1440]: time="2024-07-02T09:12:32.186530336Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Jul 2 09:12:32.187801 containerd[1440]: time="2024-07-02T09:12:32.187737976Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:12:32.190374 containerd[1440]: time="2024-07-02T09:12:32.190335976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:12:32.191678 containerd[1440]: time="2024-07-02T09:12:32.191647576Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 1.99440716s" Jul 2 09:12:32.191749 containerd[1440]: time="2024-07-02T09:12:32.191681496Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jul 2 09:12:32.210055 containerd[1440]: time="2024-07-02T09:12:32.209958896Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jul 2 09:12:32.753703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2237185730.mount: Deactivated successfully. 
Jul 2 09:12:33.058782 containerd[1440]: time="2024-07-02T09:12:33.058672136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:12:33.059309 containerd[1440]: time="2024-07-02T09:12:33.059278496Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558464" Jul 2 09:12:33.060090 containerd[1440]: time="2024-07-02T09:12:33.060027976Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:12:33.062125 containerd[1440]: time="2024-07-02T09:12:33.062089536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:12:33.063130 containerd[1440]: time="2024-07-02T09:12:33.063101936Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 853.08624ms" Jul 2 09:12:33.063181 containerd[1440]: time="2024-07-02T09:12:33.063136856Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Jul 2 09:12:37.042343 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 09:12:37.053272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:12:37.142607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:12:37.146072 (kubelet)[2030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 09:12:37.182476 kubelet[2030]: E0702 09:12:37.182424 2030 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 09:12:37.185499 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 09:12:37.185636 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 09:12:38.507846 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:12:38.517338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:12:38.535064 systemd[1]: Reloading requested from client PID 2046 ('systemctl') (unit session-7.scope)... Jul 2 09:12:38.535080 systemd[1]: Reloading... Jul 2 09:12:38.595159 zram_generator::config[2081]: No configuration found. Jul 2 09:12:38.708583 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 09:12:38.767168 systemd[1]: Reloading finished in 231 ms. Jul 2 09:12:38.800533 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 2 09:12:38.803541 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 09:12:38.803718 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:12:38.805068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:12:38.890949 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:12:38.895245 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 09:12:38.934846 kubelet[2130]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 09:12:38.934846 kubelet[2130]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 09:12:38.934846 kubelet[2130]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 09:12:38.935157 kubelet[2130]: I0702 09:12:38.934879 2130 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 09:12:39.586304 kubelet[2130]: I0702 09:12:39.586275 2130 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 09:12:39.587319 kubelet[2130]: I0702 09:12:39.586449 2130 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 09:12:39.587319 kubelet[2130]: I0702 09:12:39.586617 2130 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 09:12:39.603601 kubelet[2130]: I0702 09:12:39.603583 2130 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 09:12:39.606959 kubelet[2130]: E0702 09:12:39.606916 2130 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.88:6443: connect: connection refused Jul 2 09:12:39.616436 kubelet[2130]: W0702 09:12:39.616415 2130 machine.go:65] Cannot read vendor id correctly, set empty. Jul 2 09:12:39.617216 kubelet[2130]: I0702 09:12:39.617197 2130 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 09:12:39.617483 kubelet[2130]: I0702 09:12:39.617466 2130 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 09:12:39.617710 kubelet[2130]: I0702 09:12:39.617692 2130 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 09:12:39.617842 kubelet[2130]: I0702 09:12:39.617830 2130 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 09:12:39.617894 kubelet[2130]: I0702 09:12:39.617886 2130 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 09:12:39.618128 kubelet[2130]: I0702 09:12:39.618111 2130 state_mem.go:36] "Initialized new in-memory state store" Jul 2 09:12:39.619305 kubelet[2130]: I0702 09:12:39.619284 2130 kubelet.go:393] "Attempting to sync node with API server" Jul 2 09:12:39.619409 kubelet[2130]: I0702 09:12:39.619383 2130 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 09:12:39.619569 kubelet[2130]: I0702 09:12:39.619559 2130 kubelet.go:309] "Adding apiserver pod source" Jul 2 09:12:39.622160 kubelet[2130]: W0702 09:12:39.619790 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jul 2 09:12:39.622160 kubelet[2130]: E0702 09:12:39.622148 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jul 2 09:12:39.622160 kubelet[2130]: I0702 09:12:39.622111 2130 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 09:12:39.622551 kubelet[2130]: W0702 09:12:39.622505 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jul 2 
09:12:39.622551 kubelet[2130]: E0702 09:12:39.622545 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jul 2 09:12:39.623402 kubelet[2130]: I0702 09:12:39.623379 2130 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 09:12:39.628125 kubelet[2130]: W0702 09:12:39.628095 2130 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 09:12:39.628703 kubelet[2130]: I0702 09:12:39.628678 2130 server.go:1232] "Started kubelet" Jul 2 09:12:39.629123 kubelet[2130]: I0702 09:12:39.628803 2130 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 09:12:39.629628 kubelet[2130]: E0702 09:12:39.629611 2130 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 09:12:39.630466 kubelet[2130]: E0702 09:12:39.630450 2130 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 09:12:39.630572 kubelet[2130]: I0702 09:12:39.629917 2130 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 09:12:39.631453 kubelet[2130]: E0702 09:12:39.630983 2130 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17de5a7374c11a88", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.July, 2, 9, 12, 39, 628659336, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 9, 12, 39, 628659336, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.88:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.88:6443: connect: connection refused'(may retry after sleeping) Jul 2 09:12:39.631453 kubelet[2130]: I0702 09:12:39.631103 2130 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 09:12:39.631453 kubelet[2130]: I0702 09:12:39.630174 2130 server.go:462] "Adding debug handlers to kubelet server" Jul 2 09:12:39.631641 kubelet[2130]: E0702 09:12:39.631490 2130 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 09:12:39.631731 kubelet[2130]: E0702 09:12:39.631701 2130 controller.go:146] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="200ms" Jul 2 09:12:39.631970 kubelet[2130]: I0702 09:12:39.630387 2130 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 09:12:39.632119 kubelet[2130]: I0702 09:12:39.632066 2130 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 09:12:39.632119 kubelet[2130]: I0702 09:12:39.632100 2130 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 09:12:39.632187 kubelet[2130]: I0702 09:12:39.632149 2130 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 09:12:39.632228 kubelet[2130]: W0702 09:12:39.632189 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jul 2 09:12:39.632259 kubelet[2130]: E0702 09:12:39.632233 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jul 2 09:12:39.645274 kubelet[2130]: I0702 09:12:39.645186 2130 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 09:12:39.646341 kubelet[2130]: I0702 09:12:39.646314 2130 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 09:12:39.646341 kubelet[2130]: I0702 09:12:39.646335 2130 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 09:12:39.646480 kubelet[2130]: I0702 09:12:39.646463 2130 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 09:12:39.646516 kubelet[2130]: E0702 09:12:39.646511 2130 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 09:12:39.647422 kubelet[2130]: W0702 09:12:39.647221 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jul 2 09:12:39.647422 kubelet[2130]: E0702 09:12:39.647270 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jul 2 09:12:39.650653 kubelet[2130]: I0702 09:12:39.650621 2130 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 09:12:39.650653 kubelet[2130]: I0702 09:12:39.650638 2130 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 09:12:39.650653 kubelet[2130]: I0702 09:12:39.650653 2130 state_mem.go:36] "Initialized new in-memory state store" Jul 2 09:12:39.715554 kubelet[2130]: I0702 09:12:39.715509 2130 policy_none.go:49] "None policy: Start" Jul 2 09:12:39.716233 kubelet[2130]: I0702 09:12:39.716204 2130 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 09:12:39.716290 kubelet[2130]: I0702 09:12:39.716264 2130 state_mem.go:35] "Initializing new in-memory state store" 
Jul 2 09:12:39.722635 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 09:12:39.732918 kubelet[2130]: I0702 09:12:39.732890 2130 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 09:12:39.733428 kubelet[2130]: E0702 09:12:39.733409 2130 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Jul 2 09:12:39.736471 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 09:12:39.738931 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 2 09:12:39.746835 kubelet[2130]: E0702 09:12:39.746813 2130 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 09:12:39.750696 kubelet[2130]: I0702 09:12:39.750575 2130 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 09:12:39.750893 kubelet[2130]: I0702 09:12:39.750798 2130 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 09:12:39.751201 kubelet[2130]: E0702 09:12:39.751149 2130 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 09:12:39.832561 kubelet[2130]: E0702 09:12:39.832517 2130 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="400ms" Jul 2 09:12:39.934645 kubelet[2130]: I0702 09:12:39.934590 2130 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 09:12:39.935688 kubelet[2130]: E0702 09:12:39.935664 2130 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Jul 2 09:12:39.948075 kubelet[2130]: I0702 09:12:39.947615 2130 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 09:12:39.948487 kubelet[2130]: I0702 09:12:39.948458 2130 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 09:12:39.953506 kubelet[2130]: I0702 09:12:39.950901 2130 topology_manager.go:215] "Topology Admit Handler" podUID="d15b46c0915fca66d723fae26fc5b8f0" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 09:12:39.960003 systemd[1]: Created slice kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice - libcontainer container kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice. Jul 2 09:12:39.976590 systemd[1]: Created slice kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice - libcontainer container kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice. Jul 2 09:12:39.989876 systemd[1]: Created slice kubepods-burstable-podd15b46c0915fca66d723fae26fc5b8f0.slice - libcontainer container kubepods-burstable-podd15b46c0915fca66d723fae26fc5b8f0.slice. 
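The three Topology Admit Handler entries correspond to static pod manifests under /etc/kubernetes/manifests, the path the kubelet registered earlier with "Adding static pod path"; each manifest gets its own kubepods-burstable-pod<UID>.slice cgroup as created above. A trivial sketch listing that directory, assuming the conventional kubeadm file layout:

    package main

    import (
        "fmt"
        "log"
        "os"
    )

    func main() {
        // One manifest per control-plane static pod.
        entries, err := os.ReadDir("/etc/kubernetes/manifests")
        if err != nil {
            log.Fatal(err)
        }
        for _, e := range entries {
            // e.g. kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml
            fmt.Println(e.Name())
        }
    }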
Jul 2 09:12:40.034577 kubelet[2130]: I0702 09:12:40.034541 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:12:40.034577 kubelet[2130]: I0702 09:12:40.034582 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:12:40.034684 kubelet[2130]: I0702 09:12:40.034602 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:12:40.034684 kubelet[2130]: I0702 09:12:40.034621 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:12:40.034684 kubelet[2130]: I0702 09:12:40.034643 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 09:12:40.034684 kubelet[2130]: I0702 09:12:40.034663 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d15b46c0915fca66d723fae26fc5b8f0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d15b46c0915fca66d723fae26fc5b8f0\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:12:40.034684 kubelet[2130]: I0702 09:12:40.034682 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d15b46c0915fca66d723fae26fc5b8f0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d15b46c0915fca66d723fae26fc5b8f0\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:12:40.034782 kubelet[2130]: I0702 09:12:40.034701 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:12:40.034782 kubelet[2130]: I0702 09:12:40.034735 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d15b46c0915fca66d723fae26fc5b8f0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d15b46c0915fca66d723fae26fc5b8f0\") " 
pod="kube-system/kube-apiserver-localhost" Jul 2 09:12:40.233948 kubelet[2130]: E0702 09:12:40.233840 2130 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="800ms" Jul 2 09:12:40.275298 kubelet[2130]: E0702 09:12:40.275220 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:40.275836 containerd[1440]: time="2024-07-02T09:12:40.275801296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jul 2 09:12:40.288143 kubelet[2130]: E0702 09:12:40.288113 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:40.288439 containerd[1440]: time="2024-07-02T09:12:40.288408176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jul 2 09:12:40.291824 kubelet[2130]: E0702 09:12:40.291736 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:40.292117 containerd[1440]: time="2024-07-02T09:12:40.292083616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d15b46c0915fca66d723fae26fc5b8f0,Namespace:kube-system,Attempt:0,}" Jul 2 09:12:40.337322 kubelet[2130]: I0702 09:12:40.337302 2130 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 09:12:40.337580 kubelet[2130]: E0702 09:12:40.337566 2130 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Jul 2 09:12:40.748525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount52159220.mount: Deactivated successfully. 
Jul 2 09:12:40.752925 containerd[1440]: time="2024-07-02T09:12:40.752881176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:12:40.754868 containerd[1440]: time="2024-07-02T09:12:40.754837736Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 2 09:12:40.755441 containerd[1440]: time="2024-07-02T09:12:40.755409616Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:12:40.756481 containerd[1440]: time="2024-07-02T09:12:40.756427776Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:12:40.756658 containerd[1440]: time="2024-07-02T09:12:40.756628136Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 09:12:40.757319 containerd[1440]: time="2024-07-02T09:12:40.757246136Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 09:12:40.758791 containerd[1440]: time="2024-07-02T09:12:40.757726696Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:12:40.760639 kubelet[2130]: W0702 09:12:40.760542 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jul 2 09:12:40.760639 kubelet[2130]: E0702 09:12:40.760588 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jul 2 09:12:40.762041 containerd[1440]: time="2024-07-02T09:12:40.762002096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:12:40.762848 containerd[1440]: time="2024-07-02T09:12:40.762815616Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 486.91928ms" Jul 2 09:12:40.763593 containerd[1440]: time="2024-07-02T09:12:40.763563336Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 471.32272ms" Jul 2 09:12:40.766060 containerd[1440]: time="2024-07-02T09:12:40.765988696Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 477.50748ms" Jul 2 09:12:40.787030 kubelet[2130]: W0702 09:12:40.786431 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jul 2 09:12:40.787030 kubelet[2130]: E0702 09:12:40.786491 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jul 2 09:12:40.924685 containerd[1440]: time="2024-07-02T09:12:40.924573296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:12:40.924685 containerd[1440]: time="2024-07-02T09:12:40.924646536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:12:40.925083 containerd[1440]: time="2024-07-02T09:12:40.924960976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:12:40.925194 containerd[1440]: time="2024-07-02T09:12:40.925067736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:12:40.925265 containerd[1440]: time="2024-07-02T09:12:40.925182336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:12:40.925323 containerd[1440]: time="2024-07-02T09:12:40.925300416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:12:40.925674 containerd[1440]: time="2024-07-02T09:12:40.925540496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:12:40.925674 containerd[1440]: time="2024-07-02T09:12:40.925612976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:12:40.927502 containerd[1440]: time="2024-07-02T09:12:40.927410816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:12:40.927502 containerd[1440]: time="2024-07-02T09:12:40.927484296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:12:40.927660 containerd[1440]: time="2024-07-02T09:12:40.927617176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:12:40.927745 containerd[1440]: time="2024-07-02T09:12:40.927713256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:12:40.947084 kubelet[2130]: W0702 09:12:40.947016 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jul 2 09:12:40.947084 kubelet[2130]: E0702 09:12:40.947085 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jul 2 09:12:40.950202 systemd[1]: Started cri-containerd-47370c8624bad3f1bcab397f28b854872e7f199d515a50d4f404866804f3f64e.scope - libcontainer container 47370c8624bad3f1bcab397f28b854872e7f199d515a50d4f404866804f3f64e. Jul 2 09:12:40.954948 systemd[1]: Started cri-containerd-5904a8e21e7502539582c4ac8b04e5fc38569d03e430eea40d4b191024f6b178.scope - libcontainer container 5904a8e21e7502539582c4ac8b04e5fc38569d03e430eea40d4b191024f6b178. Jul 2 09:12:40.956924 systemd[1]: Started cri-containerd-879bcb6d66d3668dc4939ca8da1fb8aa18de24a43d05b5bb082179310839f352.scope - libcontainer container 879bcb6d66d3668dc4939ca8da1fb8aa18de24a43d05b5bb082179310839f352. Jul 2 09:12:40.980508 containerd[1440]: time="2024-07-02T09:12:40.980451056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d15b46c0915fca66d723fae26fc5b8f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"47370c8624bad3f1bcab397f28b854872e7f199d515a50d4f404866804f3f64e\"" Jul 2 09:12:40.981267 kubelet[2130]: E0702 09:12:40.981243 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:40.983781 containerd[1440]: time="2024-07-02T09:12:40.983650136Z" level=info msg="CreateContainer within sandbox \"47370c8624bad3f1bcab397f28b854872e7f199d515a50d4f404866804f3f64e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 09:12:40.992795 containerd[1440]: time="2024-07-02T09:12:40.992761136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"5904a8e21e7502539582c4ac8b04e5fc38569d03e430eea40d4b191024f6b178\"" Jul 2 09:12:40.993542 kubelet[2130]: E0702 09:12:40.993519 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:40.995254 containerd[1440]: time="2024-07-02T09:12:40.995201496Z" level=info msg="CreateContainer within sandbox \"5904a8e21e7502539582c4ac8b04e5fc38569d03e430eea40d4b191024f6b178\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 09:12:40.997308 containerd[1440]: time="2024-07-02T09:12:40.997092016Z" level=info msg="CreateContainer within sandbox \"47370c8624bad3f1bcab397f28b854872e7f199d515a50d4f404866804f3f64e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6a2bfa69c92087365affc903b11298976552c39d9772839770e2e831ff422aba\"" Jul 2 09:12:40.998020 containerd[1440]: time="2024-07-02T09:12:40.997963776Z" level=info msg="StartContainer for \"6a2bfa69c92087365affc903b11298976552c39d9772839770e2e831ff422aba\"" Jul 2 
09:12:41.001021 containerd[1440]: time="2024-07-02T09:12:41.000849256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"879bcb6d66d3668dc4939ca8da1fb8aa18de24a43d05b5bb082179310839f352\"" Jul 2 09:12:41.001780 kubelet[2130]: E0702 09:12:41.001449 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:41.003456 containerd[1440]: time="2024-07-02T09:12:41.003367496Z" level=info msg="CreateContainer within sandbox \"879bcb6d66d3668dc4939ca8da1fb8aa18de24a43d05b5bb082179310839f352\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 09:12:41.010248 containerd[1440]: time="2024-07-02T09:12:41.010212176Z" level=info msg="CreateContainer within sandbox \"5904a8e21e7502539582c4ac8b04e5fc38569d03e430eea40d4b191024f6b178\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2e97981aff1808269277c4430c3fa264aa15a7bae2f96888ca9a65cb4827fe63\"" Jul 2 09:12:41.010577 containerd[1440]: time="2024-07-02T09:12:41.010543696Z" level=info msg="StartContainer for \"2e97981aff1808269277c4430c3fa264aa15a7bae2f96888ca9a65cb4827fe63\"" Jul 2 09:12:41.018442 containerd[1440]: time="2024-07-02T09:12:41.018405096Z" level=info msg="CreateContainer within sandbox \"879bcb6d66d3668dc4939ca8da1fb8aa18de24a43d05b5bb082179310839f352\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0e4872669f4fd3c83f7afed13d2638878b8845da59094628d0264cb199a6ea81\"" Jul 2 09:12:41.021360 containerd[1440]: time="2024-07-02T09:12:41.021291776Z" level=info msg="StartContainer for \"0e4872669f4fd3c83f7afed13d2638878b8845da59094628d0264cb199a6ea81\"" Jul 2 09:12:41.025226 systemd[1]: Started cri-containerd-6a2bfa69c92087365affc903b11298976552c39d9772839770e2e831ff422aba.scope - libcontainer container 6a2bfa69c92087365affc903b11298976552c39d9772839770e2e831ff422aba. Jul 2 09:12:41.033211 kubelet[2130]: W0702 09:12:41.033079 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jul 2 09:12:41.033372 kubelet[2130]: E0702 09:12:41.033303 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Jul 2 09:12:41.035185 kubelet[2130]: E0702 09:12:41.035155 2130 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="1.6s" Jul 2 09:12:41.038216 systemd[1]: Started cri-containerd-2e97981aff1808269277c4430c3fa264aa15a7bae2f96888ca9a65cb4827fe63.scope - libcontainer container 2e97981aff1808269277c4430c3fa264aa15a7bae2f96888ca9a65cb4827fe63. Jul 2 09:12:41.054211 systemd[1]: Started cri-containerd-0e4872669f4fd3c83f7afed13d2638878b8845da59094628d0264cb199a6ea81.scope - libcontainer container 0e4872669f4fd3c83f7afed13d2638878b8845da59094628d0264cb199a6ea81. 
Jul 2 09:12:41.068224 containerd[1440]: time="2024-07-02T09:12:41.068171616Z" level=info msg="StartContainer for \"6a2bfa69c92087365affc903b11298976552c39d9772839770e2e831ff422aba\" returns successfully" Jul 2 09:12:41.097371 containerd[1440]: time="2024-07-02T09:12:41.097282816Z" level=info msg="StartContainer for \"2e97981aff1808269277c4430c3fa264aa15a7bae2f96888ca9a65cb4827fe63\" returns successfully" Jul 2 09:12:41.115735 containerd[1440]: time="2024-07-02T09:12:41.110916536Z" level=info msg="StartContainer for \"0e4872669f4fd3c83f7afed13d2638878b8845da59094628d0264cb199a6ea81\" returns successfully" Jul 2 09:12:41.143619 kubelet[2130]: I0702 09:12:41.143596 2130 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 09:12:41.144071 kubelet[2130]: E0702 09:12:41.144023 2130 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Jul 2 09:12:41.654965 kubelet[2130]: E0702 09:12:41.654882 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:41.658102 kubelet[2130]: E0702 09:12:41.658024 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:41.658292 kubelet[2130]: E0702 09:12:41.658276 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:42.659716 kubelet[2130]: E0702 09:12:42.659651 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:42.745235 kubelet[2130]: I0702 09:12:42.745214 2130 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 09:12:43.361995 kubelet[2130]: E0702 09:12:43.361946 2130 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 2 09:12:43.430301 kubelet[2130]: I0702 09:12:43.430252 2130 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 09:12:43.624778 kubelet[2130]: I0702 09:12:43.624460 2130 apiserver.go:52] "Watching apiserver" Jul 2 09:12:43.633215 kubelet[2130]: I0702 09:12:43.633186 2130 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 09:12:43.664880 kubelet[2130]: E0702 09:12:43.664851 2130 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 2 09:12:43.666073 kubelet[2130]: E0702 09:12:43.665982 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:45.232550 kubelet[2130]: E0702 09:12:45.231944 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:45.664217 kubelet[2130]: E0702 09:12:45.664114 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:46.021676 systemd[1]: Reloading requested from client PID 2413 ('systemctl') (unit session-7.scope)... Jul 2 09:12:46.021698 systemd[1]: Reloading... Jul 2 09:12:46.084065 zram_generator::config[2453]: No configuration found. Jul 2 09:12:46.171071 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 09:12:46.242610 systemd[1]: Reloading finished in 220 ms. Jul 2 09:12:46.281469 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:12:46.282110 kubelet[2130]: I0702 09:12:46.281493 2130 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 09:12:46.300167 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 09:12:46.300382 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:12:46.300487 systemd[1]: kubelet.service: Consumed 1.096s CPU time, 116.8M memory peak, 0B memory swap peak. Jul 2 09:12:46.314548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:12:46.413378 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:12:46.419857 (kubelet)[2492]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 09:12:46.471151 kubelet[2492]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 09:12:46.471151 kubelet[2492]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 09:12:46.471151 kubelet[2492]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 09:12:46.471151 kubelet[2492]: I0702 09:12:46.469796 2492 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 09:12:46.474753 kubelet[2492]: I0702 09:12:46.474701 2492 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 09:12:46.474753 kubelet[2492]: I0702 09:12:46.474727 2492 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 09:12:46.475063 kubelet[2492]: I0702 09:12:46.474900 2492 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 09:12:46.476422 kubelet[2492]: I0702 09:12:46.476372 2492 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 09:12:46.478526 kubelet[2492]: I0702 09:12:46.477398 2492 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 09:12:46.480947 kubelet[2492]: W0702 09:12:46.480932 2492 machine.go:65] Cannot read vendor id correctly, set empty. Jul 2 09:12:46.481657 kubelet[2492]: I0702 09:12:46.481612 2492 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 09:12:46.481786 kubelet[2492]: I0702 09:12:46.481776 2492 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 09:12:46.482132 kubelet[2492]: I0702 09:12:46.481913 2492 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 09:12:46.482132 kubelet[2492]: I0702 09:12:46.481936 2492 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 09:12:46.482132 kubelet[2492]: I0702 09:12:46.481944 2492 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 09:12:46.482132 kubelet[2492]: I0702 09:12:46.481973 2492 state_mem.go:36] "Initialized new in-memory state store" Jul 2 09:12:46.482132 kubelet[2492]: I0702 09:12:46.482101 2492 kubelet.go:393] "Attempting to sync node with API server" Jul 2 09:12:46.482132 kubelet[2492]: I0702 09:12:46.482115 2492 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 09:12:46.486076 kubelet[2492]: I0702 09:12:46.482569 2492 kubelet.go:309] "Adding apiserver pod source" Jul 2 09:12:46.486076 kubelet[2492]: I0702 09:12:46.482590 2492 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 09:12:46.493554 kubelet[2492]: I0702 09:12:46.493494 2492 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 09:12:46.493919 kubelet[2492]: I0702 09:12:46.493899 2492 server.go:1232] "Started kubelet" Jul 2 09:12:46.494939 kubelet[2492]: I0702 09:12:46.494916 2492 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 09:12:46.495600 kubelet[2492]: E0702 09:12:46.495580 2492 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 09:12:46.495663 kubelet[2492]: E0702 09:12:46.495607 2492 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 09:12:46.506268 kubelet[2492]: I0702 09:12:46.504220 2492 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 09:12:46.513463 kubelet[2492]: I0702 09:12:46.513441 2492 server.go:462] "Adding debug handlers to kubelet server" Jul 2 09:12:46.516390 kubelet[2492]: I0702 09:12:46.514749 2492 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 09:12:46.516579 kubelet[2492]: I0702 09:12:46.515114 2492 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 09:12:46.516579 kubelet[2492]: I0702 09:12:46.516521 2492 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 09:12:46.516579 kubelet[2492]: I0702 09:12:46.515223 2492 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 09:12:46.516666 kubelet[2492]: I0702 09:12:46.515237 2492 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 09:12:46.516704 kubelet[2492]: I0702 09:12:46.516685 2492 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 09:12:46.519707 kubelet[2492]: I0702 09:12:46.519689 2492 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 09:12:46.519872 kubelet[2492]: I0702 09:12:46.519797 2492 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 09:12:46.519872 kubelet[2492]: I0702 09:12:46.519822 2492 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 09:12:46.520208 kubelet[2492]: E0702 09:12:46.519961 2492 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 09:12:46.568699 kubelet[2492]: I0702 09:12:46.568615 2492 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 09:12:46.568699 kubelet[2492]: I0702 09:12:46.568637 2492 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 09:12:46.568699 kubelet[2492]: I0702 09:12:46.568654 2492 state_mem.go:36] "Initialized new in-memory state store" Jul 2 09:12:46.568832 kubelet[2492]: I0702 09:12:46.568780 2492 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 09:12:46.568832 kubelet[2492]: I0702 09:12:46.568798 2492 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 09:12:46.568832 kubelet[2492]: I0702 09:12:46.568805 2492 policy_none.go:49] "None policy: Start" Jul 2 09:12:46.569865 kubelet[2492]: I0702 09:12:46.569769 2492 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 09:12:46.569865 kubelet[2492]: I0702 09:12:46.569795 2492 state_mem.go:35] "Initializing new in-memory state store" Jul 2 09:12:46.569960 kubelet[2492]: I0702 09:12:46.569937 2492 state_mem.go:75] "Updated machine memory state" Jul 2 09:12:46.575418 kubelet[2492]: I0702 09:12:46.574760 2492 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 09:12:46.575418 kubelet[2492]: I0702 09:12:46.574974 2492 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 09:12:46.619527 kubelet[2492]: I0702 09:12:46.619504 2492 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 09:12:46.620158 kubelet[2492]: I0702 09:12:46.620124 2492 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" 
podName="kube-controller-manager-localhost" Jul 2 09:12:46.620223 kubelet[2492]: I0702 09:12:46.620214 2492 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 09:12:46.620264 kubelet[2492]: I0702 09:12:46.620247 2492 topology_manager.go:215] "Topology Admit Handler" podUID="d15b46c0915fca66d723fae26fc5b8f0" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 09:12:46.631877 kubelet[2492]: E0702 09:12:46.629152 2492 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 2 09:12:46.631877 kubelet[2492]: I0702 09:12:46.630976 2492 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Jul 2 09:12:46.631877 kubelet[2492]: I0702 09:12:46.631092 2492 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 09:12:46.819485 kubelet[2492]: I0702 09:12:46.818004 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:12:46.819485 kubelet[2492]: I0702 09:12:46.818068 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 09:12:46.819485 kubelet[2492]: I0702 09:12:46.818098 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d15b46c0915fca66d723fae26fc5b8f0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d15b46c0915fca66d723fae26fc5b8f0\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:12:46.819485 kubelet[2492]: I0702 09:12:46.818118 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:12:46.819485 kubelet[2492]: I0702 09:12:46.818140 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:12:46.819669 kubelet[2492]: I0702 09:12:46.818159 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:12:46.819669 kubelet[2492]: I0702 09:12:46.818178 2492 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:12:46.819669 kubelet[2492]: I0702 09:12:46.818199 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d15b46c0915fca66d723fae26fc5b8f0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d15b46c0915fca66d723fae26fc5b8f0\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:12:46.819669 kubelet[2492]: I0702 09:12:46.818243 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d15b46c0915fca66d723fae26fc5b8f0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d15b46c0915fca66d723fae26fc5b8f0\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:12:46.927902 kubelet[2492]: E0702 09:12:46.926740 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:46.927902 kubelet[2492]: E0702 09:12:46.927175 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:46.929778 kubelet[2492]: E0702 09:12:46.929746 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:47.483702 kubelet[2492]: I0702 09:12:47.483655 2492 apiserver.go:52] "Watching apiserver" Jul 2 09:12:47.517219 kubelet[2492]: I0702 09:12:47.517185 2492 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 09:12:47.532514 kubelet[2492]: E0702 09:12:47.532394 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:47.541598 kubelet[2492]: E0702 09:12:47.541567 2492 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 2 09:12:47.541947 kubelet[2492]: E0702 09:12:47.541787 2492 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 09:12:47.542404 kubelet[2492]: E0702 09:12:47.542336 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:47.542550 kubelet[2492]: E0702 09:12:47.542452 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:47.576379 kubelet[2492]: I0702 09:12:47.576341 2492 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.576292924 podCreationTimestamp="2024-07-02 09:12:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2024-07-02 09:12:47.572373797 +0000 UTC m=+1.148755302" watchObservedRunningTime="2024-07-02 09:12:47.576292924 +0000 UTC m=+1.152674429" Jul 2 09:12:47.594602 kubelet[2492]: I0702 09:12:47.594346 2492 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5943073719999998 podCreationTimestamp="2024-07-02 09:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:12:47.585969722 +0000 UTC m=+1.162351227" watchObservedRunningTime="2024-07-02 09:12:47.594307372 +0000 UTC m=+1.170688877" Jul 2 09:12:47.612216 kubelet[2492]: I0702 09:12:47.612176 2492 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.6121390610000002 podCreationTimestamp="2024-07-02 09:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:12:47.59449029 +0000 UTC m=+1.170871795" watchObservedRunningTime="2024-07-02 09:12:47.612139061 +0000 UTC m=+1.188520566" Jul 2 09:12:48.534094 kubelet[2492]: E0702 09:12:48.533804 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:48.534094 kubelet[2492]: E0702 09:12:48.533935 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:49.535003 kubelet[2492]: E0702 09:12:49.534945 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:51.387499 sudo[1618]: pam_unix(sudo:session): session closed for user root Jul 2 09:12:51.454730 sshd[1614]: pam_unix(sshd:session): session closed for user core Jul 2 09:12:51.457637 systemd[1]: sshd@6-10.0.0.88:22-10.0.0.1:53172.service: Deactivated successfully. Jul 2 09:12:51.459349 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 09:12:51.459547 systemd[1]: session-7.scope: Consumed 7.594s CPU time, 136.6M memory peak, 0B memory swap peak. Jul 2 09:12:51.460808 systemd-logind[1419]: Session 7 logged out. Waiting for processes to exit. Jul 2 09:12:51.463769 systemd-logind[1419]: Removed session 7. 
Jul 2 09:12:51.749977 kubelet[2492]: E0702 09:12:51.749843 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:52.540775 kubelet[2492]: E0702 09:12:52.539707 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:54.688214 kubelet[2492]: E0702 09:12:54.687749 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:55.544182 kubelet[2492]: E0702 09:12:55.544146 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:58.373670 kubelet[2492]: E0702 09:12:58.373642 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:12:59.217217 kubelet[2492]: I0702 09:12:59.217181 2492 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 09:12:59.217561 containerd[1440]: time="2024-07-02T09:12:59.217521908Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 09:12:59.217825 kubelet[2492]: I0702 09:12:59.217699 2492 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 09:12:59.717853 update_engine[1421]: I0702 09:12:59.717804 1421 update_attempter.cc:509] Updating boot flags... Jul 2 09:12:59.739375 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2590) Jul 2 09:12:59.773282 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2588) Jul 2 09:12:59.954679 kubelet[2492]: I0702 09:12:59.954642 2492 topology_manager.go:215] "Topology Admit Handler" podUID="69af4df4-28ed-45d0-a71e-a52de3544261" podNamespace="kube-system" podName="kube-proxy-kt2jq" Jul 2 09:12:59.963244 systemd[1]: Created slice kubepods-besteffort-pod69af4df4_28ed_45d0_a71e_a52de3544261.slice - libcontainer container kubepods-besteffort-pod69af4df4_28ed_45d0_a71e_a52de3544261.slice. 
Jul 2 09:13:00.101901 kubelet[2492]: I0702 09:13:00.101806 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/69af4df4-28ed-45d0-a71e-a52de3544261-kube-proxy\") pod \"kube-proxy-kt2jq\" (UID: \"69af4df4-28ed-45d0-a71e-a52de3544261\") " pod="kube-system/kube-proxy-kt2jq" Jul 2 09:13:00.101989 kubelet[2492]: I0702 09:13:00.101932 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69af4df4-28ed-45d0-a71e-a52de3544261-lib-modules\") pod \"kube-proxy-kt2jq\" (UID: \"69af4df4-28ed-45d0-a71e-a52de3544261\") " pod="kube-system/kube-proxy-kt2jq" Jul 2 09:13:00.102018 kubelet[2492]: I0702 09:13:00.102009 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69af4df4-28ed-45d0-a71e-a52de3544261-xtables-lock\") pod \"kube-proxy-kt2jq\" (UID: \"69af4df4-28ed-45d0-a71e-a52de3544261\") " pod="kube-system/kube-proxy-kt2jq" Jul 2 09:13:00.102151 kubelet[2492]: I0702 09:13:00.102076 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvgl5\" (UniqueName: \"kubernetes.io/projected/69af4df4-28ed-45d0-a71e-a52de3544261-kube-api-access-gvgl5\") pod \"kube-proxy-kt2jq\" (UID: \"69af4df4-28ed-45d0-a71e-a52de3544261\") " pod="kube-system/kube-proxy-kt2jq" Jul 2 09:13:00.226060 kubelet[2492]: I0702 09:13:00.225721 2492 topology_manager.go:215] "Topology Admit Handler" podUID="59cbc2bb-813c-4cd2-a453-5f61f9a9f9b6" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-485rw" Jul 2 09:13:00.236062 systemd[1]: Created slice kubepods-besteffort-pod59cbc2bb_813c_4cd2_a453_5f61f9a9f9b6.slice - libcontainer container kubepods-besteffort-pod59cbc2bb_813c_4cd2_a453_5f61f9a9f9b6.slice. Jul 2 09:13:00.274842 kubelet[2492]: E0702 09:13:00.274799 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:00.275387 containerd[1440]: time="2024-07-02T09:13:00.275317463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kt2jq,Uid:69af4df4-28ed-45d0-a71e-a52de3544261,Namespace:kube-system,Attempt:0,}" Jul 2 09:13:00.296069 containerd[1440]: time="2024-07-02T09:13:00.295142550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:13:00.296069 containerd[1440]: time="2024-07-02T09:13:00.295190470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:13:00.296069 containerd[1440]: time="2024-07-02T09:13:00.295203910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:13:00.296069 containerd[1440]: time="2024-07-02T09:13:00.295224070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:13:00.312195 systemd[1]: Started cri-containerd-03875688c7186cb937ee639e241bebe00ee9a2e2cd71bae3ce0d51758a2bf43f.scope - libcontainer container 03875688c7186cb937ee639e241bebe00ee9a2e2cd71bae3ce0d51758a2bf43f. 
Jul 2 09:13:00.330239 containerd[1440]: time="2024-07-02T09:13:00.330200183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kt2jq,Uid:69af4df4-28ed-45d0-a71e-a52de3544261,Namespace:kube-system,Attempt:0,} returns sandbox id \"03875688c7186cb937ee639e241bebe00ee9a2e2cd71bae3ce0d51758a2bf43f\"" Jul 2 09:13:00.331057 kubelet[2492]: E0702 09:13:00.330999 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:00.335909 containerd[1440]: time="2024-07-02T09:13:00.335725642Z" level=info msg="CreateContainer within sandbox \"03875688c7186cb937ee639e241bebe00ee9a2e2cd71bae3ce0d51758a2bf43f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 09:13:00.348277 containerd[1440]: time="2024-07-02T09:13:00.348241677Z" level=info msg="CreateContainer within sandbox \"03875688c7186cb937ee639e241bebe00ee9a2e2cd71bae3ce0d51758a2bf43f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8a0d66a68e40a228c2c0912a03722eac1238661593781672c9786312059447b1\"" Jul 2 09:13:00.349469 containerd[1440]: time="2024-07-02T09:13:00.348809275Z" level=info msg="StartContainer for \"8a0d66a68e40a228c2c0912a03722eac1238661593781672c9786312059447b1\"" Jul 2 09:13:00.374232 systemd[1]: Started cri-containerd-8a0d66a68e40a228c2c0912a03722eac1238661593781672c9786312059447b1.scope - libcontainer container 8a0d66a68e40a228c2c0912a03722eac1238661593781672c9786312059447b1. Jul 2 09:13:00.401823 containerd[1440]: time="2024-07-02T09:13:00.401752122Z" level=info msg="StartContainer for \"8a0d66a68e40a228c2c0912a03722eac1238661593781672c9786312059447b1\" returns successfully" Jul 2 09:13:00.404818 kubelet[2492]: I0702 09:13:00.404777 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/59cbc2bb-813c-4cd2-a453-5f61f9a9f9b6-var-lib-calico\") pod \"tigera-operator-76c4974c85-485rw\" (UID: \"59cbc2bb-813c-4cd2-a453-5f61f9a9f9b6\") " pod="tigera-operator/tigera-operator-76c4974c85-485rw" Jul 2 09:13:00.404900 kubelet[2492]: I0702 09:13:00.404843 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sf4x\" (UniqueName: \"kubernetes.io/projected/59cbc2bb-813c-4cd2-a453-5f61f9a9f9b6-kube-api-access-6sf4x\") pod \"tigera-operator-76c4974c85-485rw\" (UID: \"59cbc2bb-813c-4cd2-a453-5f61f9a9f9b6\") " pod="tigera-operator/tigera-operator-76c4974c85-485rw" Jul 2 09:13:00.540018 containerd[1440]: time="2024-07-02T09:13:00.539959258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-485rw,Uid:59cbc2bb-813c-4cd2-a453-5f61f9a9f9b6,Namespace:tigera-operator,Attempt:0,}" Jul 2 09:13:00.553456 kubelet[2492]: E0702 09:13:00.553417 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:00.562477 kubelet[2492]: I0702 09:13:00.562422 2492 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kt2jq" podStartSLOduration=1.562369736 podCreationTimestamp="2024-07-02 09:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:13:00.56130686 +0000 UTC m=+14.137688365" 
watchObservedRunningTime="2024-07-02 09:13:00.562369736 +0000 UTC m=+14.138751241" Jul 2 09:13:00.564276 containerd[1440]: time="2024-07-02T09:13:00.563660252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:13:00.564276 containerd[1440]: time="2024-07-02T09:13:00.564119850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:13:00.564276 containerd[1440]: time="2024-07-02T09:13:00.564143730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:13:00.564276 containerd[1440]: time="2024-07-02T09:13:00.564156650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:13:00.581196 systemd[1]: Started cri-containerd-09e81be31b9c845d03174bbe4a6962dbde5e57d5a41779ca7d6841caba835c01.scope - libcontainer container 09e81be31b9c845d03174bbe4a6962dbde5e57d5a41779ca7d6841caba835c01. Jul 2 09:13:00.606209 containerd[1440]: time="2024-07-02T09:13:00.606173177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-485rw,Uid:59cbc2bb-813c-4cd2-a453-5f61f9a9f9b6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"09e81be31b9c845d03174bbe4a6962dbde5e57d5a41779ca7d6841caba835c01\"" Jul 2 09:13:00.609245 containerd[1440]: time="2024-07-02T09:13:00.609129966Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jul 2 09:13:01.623545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3734000242.mount: Deactivated successfully. Jul 2 09:13:02.086138 containerd[1440]: time="2024-07-02T09:13:02.085845369Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:13:02.086447 containerd[1440]: time="2024-07-02T09:13:02.086239208Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473606" Jul 2 09:13:02.088512 containerd[1440]: time="2024-07-02T09:13:02.088466040Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:13:02.090928 containerd[1440]: time="2024-07-02T09:13:02.090885313Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:13:02.091963 containerd[1440]: time="2024-07-02T09:13:02.091910309Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 1.482574984s" Jul 2 09:13:02.091963 containerd[1440]: time="2024-07-02T09:13:02.091949669Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jul 2 09:13:02.095064 containerd[1440]: time="2024-07-02T09:13:02.095012899Z" level=info msg="CreateContainer within sandbox \"09e81be31b9c845d03174bbe4a6962dbde5e57d5a41779ca7d6841caba835c01\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 2 09:13:02.104915 containerd[1440]: time="2024-07-02T09:13:02.104868108Z" level=info msg="CreateContainer within sandbox \"09e81be31b9c845d03174bbe4a6962dbde5e57d5a41779ca7d6841caba835c01\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"37fe54d33e693a648d205e06bef1fb5b899cb3dc3a029c075d49134500a2b1b2\"" Jul 2 09:13:02.105795 containerd[1440]: time="2024-07-02T09:13:02.105606546Z" level=info msg="StartContainer for \"37fe54d33e693a648d205e06bef1fb5b899cb3dc3a029c075d49134500a2b1b2\"" Jul 2 09:13:02.135264 systemd[1]: Started cri-containerd-37fe54d33e693a648d205e06bef1fb5b899cb3dc3a029c075d49134500a2b1b2.scope - libcontainer container 37fe54d33e693a648d205e06bef1fb5b899cb3dc3a029c075d49134500a2b1b2. Jul 2 09:13:02.176304 containerd[1440]: time="2024-07-02T09:13:02.176134560Z" level=info msg="StartContainer for \"37fe54d33e693a648d205e06bef1fb5b899cb3dc3a029c075d49134500a2b1b2\" returns successfully" Jul 2 09:13:05.779179 kubelet[2492]: I0702 09:13:05.779130 2492 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-485rw" podStartSLOduration=4.293687871 podCreationTimestamp="2024-07-02 09:13:00 +0000 UTC" firstStartedPulling="2024-07-02 09:13:00.607136813 +0000 UTC m=+14.183518318" lastFinishedPulling="2024-07-02 09:13:02.092525187 +0000 UTC m=+15.668906652" observedRunningTime="2024-07-02 09:13:02.573983525 +0000 UTC m=+16.150365030" watchObservedRunningTime="2024-07-02 09:13:05.779076205 +0000 UTC m=+19.355457710" Jul 2 09:13:05.779760 kubelet[2492]: I0702 09:13:05.779338 2492 topology_manager.go:215] "Topology Admit Handler" podUID="b947b2cd-e586-4b30-a3dc-e38d1bd209db" podNamespace="calico-system" podName="calico-typha-5bc58bdc74-8zpv6" Jul 2 09:13:05.795988 systemd[1]: Created slice kubepods-besteffort-podb947b2cd_e586_4b30_a3dc_e38d1bd209db.slice - libcontainer container kubepods-besteffort-podb947b2cd_e586_4b30_a3dc_e38d1bd209db.slice. Jul 2 09:13:05.822482 kubelet[2492]: I0702 09:13:05.822424 2492 topology_manager.go:215] "Topology Admit Handler" podUID="e9d46a84-7c18-4852-a2fc-721796054886" podNamespace="calico-system" podName="calico-node-bt6hv" Jul 2 09:13:05.831418 systemd[1]: Created slice kubepods-besteffort-pode9d46a84_7c18_4852_a2fc_721796054886.slice - libcontainer container kubepods-besteffort-pode9d46a84_7c18_4852_a2fc_721796054886.slice. 
Jul 2 09:13:05.934526 kubelet[2492]: I0702 09:13:05.932624 2492 topology_manager.go:215] "Topology Admit Handler" podUID="0e5ec1b3-c9b9-48d9-b154-86cd38677ba2" podNamespace="calico-system" podName="csi-node-driver-j6v9l" Jul 2 09:13:05.934526 kubelet[2492]: E0702 09:13:05.932884 2492 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j6v9l" podUID="0e5ec1b3-c9b9-48d9-b154-86cd38677ba2" Jul 2 09:13:05.942063 kubelet[2492]: I0702 09:13:05.941355 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9d46a84-7c18-4852-a2fc-721796054886-tigera-ca-bundle\") pod \"calico-node-bt6hv\" (UID: \"e9d46a84-7c18-4852-a2fc-721796054886\") " pod="calico-system/calico-node-bt6hv" Jul 2 09:13:05.942063 kubelet[2492]: I0702 09:13:05.941435 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e9d46a84-7c18-4852-a2fc-721796054886-var-lib-calico\") pod \"calico-node-bt6hv\" (UID: \"e9d46a84-7c18-4852-a2fc-721796054886\") " pod="calico-system/calico-node-bt6hv" Jul 2 09:13:05.942063 kubelet[2492]: I0702 09:13:05.941460 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e9d46a84-7c18-4852-a2fc-721796054886-cni-net-dir\") pod \"calico-node-bt6hv\" (UID: \"e9d46a84-7c18-4852-a2fc-721796054886\") " pod="calico-system/calico-node-bt6hv" Jul 2 09:13:05.943292 kubelet[2492]: I0702 09:13:05.941509 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e9d46a84-7c18-4852-a2fc-721796054886-flexvol-driver-host\") pod \"calico-node-bt6hv\" (UID: \"e9d46a84-7c18-4852-a2fc-721796054886\") " pod="calico-system/calico-node-bt6hv" Jul 2 09:13:05.943292 kubelet[2492]: I0702 09:13:05.942363 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e9d46a84-7c18-4852-a2fc-721796054886-policysync\") pod \"calico-node-bt6hv\" (UID: \"e9d46a84-7c18-4852-a2fc-721796054886\") " pod="calico-system/calico-node-bt6hv" Jul 2 09:13:05.943292 kubelet[2492]: I0702 09:13:05.942399 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e9d46a84-7c18-4852-a2fc-721796054886-cni-log-dir\") pod \"calico-node-bt6hv\" (UID: \"e9d46a84-7c18-4852-a2fc-721796054886\") " pod="calico-system/calico-node-bt6hv" Jul 2 09:13:05.943292 kubelet[2492]: I0702 09:13:05.942429 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0e5ec1b3-c9b9-48d9-b154-86cd38677ba2-socket-dir\") pod \"csi-node-driver-j6v9l\" (UID: \"0e5ec1b3-c9b9-48d9-b154-86cd38677ba2\") " pod="calico-system/csi-node-driver-j6v9l" Jul 2 09:13:05.943292 kubelet[2492]: I0702 09:13:05.942457 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/e9d46a84-7c18-4852-a2fc-721796054886-lib-modules\") pod \"calico-node-bt6hv\" (UID: \"e9d46a84-7c18-4852-a2fc-721796054886\") " pod="calico-system/calico-node-bt6hv" Jul 2 09:13:05.943406 kubelet[2492]: I0702 09:13:05.942478 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e9d46a84-7c18-4852-a2fc-721796054886-cni-bin-dir\") pod \"calico-node-bt6hv\" (UID: \"e9d46a84-7c18-4852-a2fc-721796054886\") " pod="calico-system/calico-node-bt6hv" Jul 2 09:13:05.943406 kubelet[2492]: I0702 09:13:05.942504 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tbf2\" (UniqueName: \"kubernetes.io/projected/b947b2cd-e586-4b30-a3dc-e38d1bd209db-kube-api-access-4tbf2\") pod \"calico-typha-5bc58bdc74-8zpv6\" (UID: \"b947b2cd-e586-4b30-a3dc-e38d1bd209db\") " pod="calico-system/calico-typha-5bc58bdc74-8zpv6" Jul 2 09:13:05.943406 kubelet[2492]: I0702 09:13:05.942525 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs2br\" (UniqueName: \"kubernetes.io/projected/e9d46a84-7c18-4852-a2fc-721796054886-kube-api-access-fs2br\") pod \"calico-node-bt6hv\" (UID: \"e9d46a84-7c18-4852-a2fc-721796054886\") " pod="calico-system/calico-node-bt6hv" Jul 2 09:13:05.943406 kubelet[2492]: I0702 09:13:05.942544 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0e5ec1b3-c9b9-48d9-b154-86cd38677ba2-varrun\") pod \"csi-node-driver-j6v9l\" (UID: \"0e5ec1b3-c9b9-48d9-b154-86cd38677ba2\") " pod="calico-system/csi-node-driver-j6v9l" Jul 2 09:13:05.943406 kubelet[2492]: I0702 09:13:05.942565 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e9d46a84-7c18-4852-a2fc-721796054886-var-run-calico\") pod \"calico-node-bt6hv\" (UID: \"e9d46a84-7c18-4852-a2fc-721796054886\") " pod="calico-system/calico-node-bt6hv" Jul 2 09:13:05.943608 kubelet[2492]: I0702 09:13:05.942585 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0e5ec1b3-c9b9-48d9-b154-86cd38677ba2-registration-dir\") pod \"csi-node-driver-j6v9l\" (UID: \"0e5ec1b3-c9b9-48d9-b154-86cd38677ba2\") " pod="calico-system/csi-node-driver-j6v9l" Jul 2 09:13:05.943608 kubelet[2492]: I0702 09:13:05.942607 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b947b2cd-e586-4b30-a3dc-e38d1bd209db-tigera-ca-bundle\") pod \"calico-typha-5bc58bdc74-8zpv6\" (UID: \"b947b2cd-e586-4b30-a3dc-e38d1bd209db\") " pod="calico-system/calico-typha-5bc58bdc74-8zpv6" Jul 2 09:13:05.943608 kubelet[2492]: I0702 09:13:05.942638 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b947b2cd-e586-4b30-a3dc-e38d1bd209db-typha-certs\") pod \"calico-typha-5bc58bdc74-8zpv6\" (UID: \"b947b2cd-e586-4b30-a3dc-e38d1bd209db\") " pod="calico-system/calico-typha-5bc58bdc74-8zpv6" Jul 2 09:13:05.943608 kubelet[2492]: I0702 09:13:05.942660 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" 
(UniqueName: \"kubernetes.io/secret/e9d46a84-7c18-4852-a2fc-721796054886-node-certs\") pod \"calico-node-bt6hv\" (UID: \"e9d46a84-7c18-4852-a2fc-721796054886\") " pod="calico-system/calico-node-bt6hv" Jul 2 09:13:05.943608 kubelet[2492]: I0702 09:13:05.942679 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e5ec1b3-c9b9-48d9-b154-86cd38677ba2-kubelet-dir\") pod \"csi-node-driver-j6v9l\" (UID: \"0e5ec1b3-c9b9-48d9-b154-86cd38677ba2\") " pod="calico-system/csi-node-driver-j6v9l" Jul 2 09:13:05.943712 kubelet[2492]: I0702 09:13:05.942774 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9d46a84-7c18-4852-a2fc-721796054886-xtables-lock\") pod \"calico-node-bt6hv\" (UID: \"e9d46a84-7c18-4852-a2fc-721796054886\") " pod="calico-system/calico-node-bt6hv" Jul 2 09:13:06.044510 kubelet[2492]: I0702 09:13:06.044392 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb7bp\" (UniqueName: \"kubernetes.io/projected/0e5ec1b3-c9b9-48d9-b154-86cd38677ba2-kube-api-access-gb7bp\") pod \"csi-node-driver-j6v9l\" (UID: \"0e5ec1b3-c9b9-48d9-b154-86cd38677ba2\") " pod="calico-system/csi-node-driver-j6v9l" Jul 2 09:13:06.063457 kubelet[2492]: E0702 09:13:06.063281 2492 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:13:06.063457 kubelet[2492]: W0702 09:13:06.063317 2492 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:13:06.063457 kubelet[2492]: E0702 09:13:06.063345 2492 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:13:06.070103 kubelet[2492]: E0702 09:13:06.069808 2492 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:13:06.070103 kubelet[2492]: W0702 09:13:06.069832 2492 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:13:06.070103 kubelet[2492]: E0702 09:13:06.069858 2492 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:13:06.070774 kubelet[2492]: E0702 09:13:06.070162 2492 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:13:06.070774 kubelet[2492]: W0702 09:13:06.070173 2492 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:13:06.070774 kubelet[2492]: E0702 09:13:06.070189 2492 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:13:06.071687 kubelet[2492]: E0702 09:13:06.071659 2492 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:13:06.071818 kubelet[2492]: W0702 09:13:06.071679 2492 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:13:06.071818 kubelet[2492]: E0702 09:13:06.071783 2492 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:13:06.099315 kubelet[2492]: E0702 09:13:06.099245 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:06.100580 containerd[1440]: time="2024-07-02T09:13:06.100449013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bc58bdc74-8zpv6,Uid:b947b2cd-e586-4b30-a3dc-e38d1bd209db,Namespace:calico-system,Attempt:0,}" Jul 2 09:13:06.134648 kubelet[2492]: E0702 09:13:06.134610 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:06.135571 containerd[1440]: time="2024-07-02T09:13:06.135305207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bt6hv,Uid:e9d46a84-7c18-4852-a2fc-721796054886,Namespace:calico-system,Attempt:0,}" Jul 2 09:13:06.145494 kubelet[2492]: E0702 09:13:06.145385 2492 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:13:06.145494 kubelet[2492]: W0702 09:13:06.145407 2492 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:13:06.145494 kubelet[2492]: E0702 09:13:06.145430 2492 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:13:06.147544 kubelet[2492]: E0702 09:13:06.145699 2492 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:13:06.147544 kubelet[2492]: W0702 09:13:06.145712 2492 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:13:06.147544 kubelet[2492]: E0702 09:13:06.145733 2492 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 09:13:06.147544 kubelet[2492]: E0702 09:13:06.146393 2492 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:13:06.147544 kubelet[2492]: W0702 09:13:06.146409 2492 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:13:06.147544 kubelet[2492]: E0702 09:13:06.146426 2492 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:13:06.147544 kubelet[2492]: E0702 09:13:06.146695 2492 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:13:06.147544 kubelet[2492]: W0702 09:13:06.146706 2492 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:13:06.147544 kubelet[2492]: E0702 09:13:06.146719 2492 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:13:06.147544 kubelet[2492]: E0702 09:13:06.146940 2492 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:13:06.147840 kubelet[2492]: W0702 09:13:06.146949 2492 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:13:06.147840 kubelet[2492]: E0702 09:13:06.146961 2492 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:13:06.164909 kubelet[2492]: E0702 09:13:06.164881 2492 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 09:13:06.164909 kubelet[2492]: W0702 09:13:06.164899 2492 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 09:13:06.165100 kubelet[2492]: E0702 09:13:06.164920 2492 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 09:13:06.191876 containerd[1440]: time="2024-07-02T09:13:06.190421510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:13:06.191876 containerd[1440]: time="2024-07-02T09:13:06.190496830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:13:06.191876 containerd[1440]: time="2024-07-02T09:13:06.190519630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:13:06.191876 containerd[1440]: time="2024-07-02T09:13:06.190533750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:13:06.200594 containerd[1440]: time="2024-07-02T09:13:06.200467886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:13:06.200594 containerd[1440]: time="2024-07-02T09:13:06.200524765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:13:06.200594 containerd[1440]: time="2024-07-02T09:13:06.200547645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:13:06.200594 containerd[1440]: time="2024-07-02T09:13:06.200558365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:13:06.231257 systemd[1]: Started cri-containerd-85e2fe4f987f53ce144b521b2fcc1c160c03609cf934692f720ac2aecafc269c.scope - libcontainer container 85e2fe4f987f53ce144b521b2fcc1c160c03609cf934692f720ac2aecafc269c. Jul 2 09:13:06.232826 systemd[1]: Started cri-containerd-fb2f13ae952629b045e506cbd731c0f338e17e5195a7f30b77a7df9cb0eb7348.scope - libcontainer container fb2f13ae952629b045e506cbd731c0f338e17e5195a7f30b77a7df9cb0eb7348. Jul 2 09:13:06.261525 containerd[1440]: time="2024-07-02T09:13:06.261480575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bt6hv,Uid:e9d46a84-7c18-4852-a2fc-721796054886,Namespace:calico-system,Attempt:0,} returns sandbox id \"fb2f13ae952629b045e506cbd731c0f338e17e5195a7f30b77a7df9cb0eb7348\"" Jul 2 09:13:06.267305 kubelet[2492]: E0702 09:13:06.267278 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:06.269689 containerd[1440]: time="2024-07-02T09:13:06.269450115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 09:13:06.279139 containerd[1440]: time="2024-07-02T09:13:06.278905372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bc58bdc74-8zpv6,Uid:b947b2cd-e586-4b30-a3dc-e38d1bd209db,Namespace:calico-system,Attempt:0,} returns sandbox id \"85e2fe4f987f53ce144b521b2fcc1c160c03609cf934692f720ac2aecafc269c\"" Jul 2 09:13:06.279971 kubelet[2492]: E0702 09:13:06.279500 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:07.350649 containerd[1440]: time="2024-07-02T09:13:07.350584774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:13:07.352216 containerd[1440]: time="2024-07-02T09:13:07.351807931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jul 2 09:13:07.353554 containerd[1440]: time="2024-07-02T09:13:07.353273967Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:13:07.357188 containerd[1440]: time="2024-07-02T09:13:07.357150998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:13:07.359392 containerd[1440]: time="2024-07-02T09:13:07.358543475Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.08905232s" Jul 2 09:13:07.359392 containerd[1440]: time="2024-07-02T09:13:07.358583115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jul 2 09:13:07.359539 containerd[1440]: time="2024-07-02T09:13:07.359419273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 09:13:07.361262 containerd[1440]: time="2024-07-02T09:13:07.361230189Z" level=info msg="CreateContainer within sandbox \"fb2f13ae952629b045e506cbd731c0f338e17e5195a7f30b77a7df9cb0eb7348\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 09:13:07.373599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount652658697.mount: Deactivated successfully. Jul 2 09:13:07.375551 containerd[1440]: time="2024-07-02T09:13:07.375507756Z" level=info msg="CreateContainer within sandbox \"fb2f13ae952629b045e506cbd731c0f338e17e5195a7f30b77a7df9cb0eb7348\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cfd2972d813f034ff3b26b2817b10a74a3a2be2a8fcea231e00552cb135c1dc4\"" Jul 2 09:13:07.376246 containerd[1440]: time="2024-07-02T09:13:07.376222354Z" level=info msg="StartContainer for \"cfd2972d813f034ff3b26b2817b10a74a3a2be2a8fcea231e00552cb135c1dc4\"" Jul 2 09:13:07.404259 systemd[1]: Started cri-containerd-cfd2972d813f034ff3b26b2817b10a74a3a2be2a8fcea231e00552cb135c1dc4.scope - libcontainer container cfd2972d813f034ff3b26b2817b10a74a3a2be2a8fcea231e00552cb135c1dc4. Jul 2 09:13:07.432786 containerd[1440]: time="2024-07-02T09:13:07.432735423Z" level=info msg="StartContainer for \"cfd2972d813f034ff3b26b2817b10a74a3a2be2a8fcea231e00552cb135c1dc4\" returns successfully" Jul 2 09:13:07.476731 systemd[1]: cri-containerd-cfd2972d813f034ff3b26b2817b10a74a3a2be2a8fcea231e00552cb135c1dc4.scope: Deactivated successfully. 
Jul 2 09:13:07.521116 kubelet[2492]: E0702 09:13:07.520762 2492 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j6v9l" podUID="0e5ec1b3-c9b9-48d9-b154-86cd38677ba2" Jul 2 09:13:07.524884 containerd[1440]: time="2024-07-02T09:13:07.524669450Z" level=info msg="shim disconnected" id=cfd2972d813f034ff3b26b2817b10a74a3a2be2a8fcea231e00552cb135c1dc4 namespace=k8s.io Jul 2 09:13:07.524884 containerd[1440]: time="2024-07-02T09:13:07.524727530Z" level=warning msg="cleaning up after shim disconnected" id=cfd2972d813f034ff3b26b2817b10a74a3a2be2a8fcea231e00552cb135c1dc4 namespace=k8s.io Jul 2 09:13:07.524884 containerd[1440]: time="2024-07-02T09:13:07.524736090Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:13:07.570853 kubelet[2492]: E0702 09:13:07.570793 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:08.060534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfd2972d813f034ff3b26b2817b10a74a3a2be2a8fcea231e00552cb135c1dc4-rootfs.mount: Deactivated successfully. Jul 2 09:13:08.587225 containerd[1440]: time="2024-07-02T09:13:08.587164230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:13:08.588144 containerd[1440]: time="2024-07-02T09:13:08.587900108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jul 2 09:13:08.588857 containerd[1440]: time="2024-07-02T09:13:08.588815666Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:13:08.590899 containerd[1440]: time="2024-07-02T09:13:08.590863182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:13:08.591691 containerd[1440]: time="2024-07-02T09:13:08.591650260Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 1.232200947s" Jul 2 09:13:08.591757 containerd[1440]: time="2024-07-02T09:13:08.591700580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jul 2 09:13:08.592472 containerd[1440]: time="2024-07-02T09:13:08.592227419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 09:13:08.599254 containerd[1440]: time="2024-07-02T09:13:08.599208964Z" level=info msg="CreateContainer within sandbox \"85e2fe4f987f53ce144b521b2fcc1c160c03609cf934692f720ac2aecafc269c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 09:13:08.616782 containerd[1440]: time="2024-07-02T09:13:08.616716406Z" level=info msg="CreateContainer within sandbox 
\"85e2fe4f987f53ce144b521b2fcc1c160c03609cf934692f720ac2aecafc269c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9256dcf225c79d4f111204c14be57fc7b05bae7211328b164326868f7cb65e49\"" Jul 2 09:13:08.617261 containerd[1440]: time="2024-07-02T09:13:08.617204125Z" level=info msg="StartContainer for \"9256dcf225c79d4f111204c14be57fc7b05bae7211328b164326868f7cb65e49\"" Jul 2 09:13:08.649247 systemd[1]: Started cri-containerd-9256dcf225c79d4f111204c14be57fc7b05bae7211328b164326868f7cb65e49.scope - libcontainer container 9256dcf225c79d4f111204c14be57fc7b05bae7211328b164326868f7cb65e49. Jul 2 09:13:08.685143 containerd[1440]: time="2024-07-02T09:13:08.685080577Z" level=info msg="StartContainer for \"9256dcf225c79d4f111204c14be57fc7b05bae7211328b164326868f7cb65e49\" returns successfully" Jul 2 09:13:09.520254 kubelet[2492]: E0702 09:13:09.520206 2492 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j6v9l" podUID="0e5ec1b3-c9b9-48d9-b154-86cd38677ba2" Jul 2 09:13:09.576758 kubelet[2492]: E0702 09:13:09.576705 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:09.598612 kubelet[2492]: I0702 09:13:09.598204 2492 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5bc58bdc74-8zpv6" podStartSLOduration=2.286395461 podCreationTimestamp="2024-07-02 09:13:05 +0000 UTC" firstStartedPulling="2024-07-02 09:13:06.280232008 +0000 UTC m=+19.856613513" lastFinishedPulling="2024-07-02 09:13:08.591995419 +0000 UTC m=+22.168376924" observedRunningTime="2024-07-02 09:13:09.595871037 +0000 UTC m=+23.172252582" watchObservedRunningTime="2024-07-02 09:13:09.598158872 +0000 UTC m=+23.174540377" Jul 2 09:13:10.578029 kubelet[2492]: I0702 09:13:10.577997 2492 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 09:13:10.578677 kubelet[2492]: E0702 09:13:10.578660 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:11.520920 kubelet[2492]: E0702 09:13:11.520882 2492 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j6v9l" podUID="0e5ec1b3-c9b9-48d9-b154-86cd38677ba2" Jul 2 09:13:13.521087 kubelet[2492]: E0702 09:13:13.520949 2492 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j6v9l" podUID="0e5ec1b3-c9b9-48d9-b154-86cd38677ba2" Jul 2 09:13:13.772618 containerd[1440]: time="2024-07-02T09:13:13.772510812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:13:13.774648 containerd[1440]: time="2024-07-02T09:13:13.774508329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes 
read=86799715" Jul 2 09:13:13.775509 containerd[1440]: time="2024-07-02T09:13:13.775480727Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:13:13.779790 containerd[1440]: time="2024-07-02T09:13:13.778656842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:13:13.779790 containerd[1440]: time="2024-07-02T09:13:13.779412961Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 5.187152502s" Jul 2 09:13:13.779790 containerd[1440]: time="2024-07-02T09:13:13.779438721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jul 2 09:13:13.782310 containerd[1440]: time="2024-07-02T09:13:13.782263637Z" level=info msg="CreateContainer within sandbox \"fb2f13ae952629b045e506cbd731c0f338e17e5195a7f30b77a7df9cb0eb7348\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 09:13:13.802694 containerd[1440]: time="2024-07-02T09:13:13.802551205Z" level=info msg="CreateContainer within sandbox \"fb2f13ae952629b045e506cbd731c0f338e17e5195a7f30b77a7df9cb0eb7348\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"84cf049ac8ca6d21c91bd2ab815d37c58c4532d1ceea7ab4b3b170c088b11c79\"" Jul 2 09:13:13.804668 containerd[1440]: time="2024-07-02T09:13:13.804538882Z" level=info msg="StartContainer for \"84cf049ac8ca6d21c91bd2ab815d37c58c4532d1ceea7ab4b3b170c088b11c79\"" Jul 2 09:13:13.843474 systemd[1]: Started cri-containerd-84cf049ac8ca6d21c91bd2ab815d37c58c4532d1ceea7ab4b3b170c088b11c79.scope - libcontainer container 84cf049ac8ca6d21c91bd2ab815d37c58c4532d1ceea7ab4b3b170c088b11c79. Jul 2 09:13:13.966173 containerd[1440]: time="2024-07-02T09:13:13.966023987Z" level=info msg="StartContainer for \"84cf049ac8ca6d21c91bd2ab815d37c58c4532d1ceea7ab4b3b170c088b11c79\" returns successfully" Jul 2 09:13:14.345307 containerd[1440]: time="2024-07-02T09:13:14.345200704Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 09:13:14.347163 systemd[1]: cri-containerd-84cf049ac8ca6d21c91bd2ab815d37c58c4532d1ceea7ab4b3b170c088b11c79.scope: Deactivated successfully. Jul 2 09:13:14.372878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84cf049ac8ca6d21c91bd2ab815d37c58c4532d1ceea7ab4b3b170c088b11c79-rootfs.mount: Deactivated successfully. 
Jul 2 09:13:14.377602 containerd[1440]: time="2024-07-02T09:13:14.377530896Z" level=info msg="shim disconnected" id=84cf049ac8ca6d21c91bd2ab815d37c58c4532d1ceea7ab4b3b170c088b11c79 namespace=k8s.io Jul 2 09:13:14.377602 containerd[1440]: time="2024-07-02T09:13:14.377599216Z" level=warning msg="cleaning up after shim disconnected" id=84cf049ac8ca6d21c91bd2ab815d37c58c4532d1ceea7ab4b3b170c088b11c79 namespace=k8s.io Jul 2 09:13:14.377602 containerd[1440]: time="2024-07-02T09:13:14.377607736Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:13:14.420852 kubelet[2492]: I0702 09:13:14.420812 2492 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 09:13:14.439991 kubelet[2492]: I0702 09:13:14.439846 2492 topology_manager.go:215] "Topology Admit Handler" podUID="ff42ceab-fd69-4ad7-97c0-e2c3789feaaa" podNamespace="kube-system" podName="coredns-5dd5756b68-wfv5c" Jul 2 09:13:14.442354 kubelet[2492]: I0702 09:13:14.442311 2492 topology_manager.go:215] "Topology Admit Handler" podUID="8f7f87f1-7ae0-46a9-9d20-04f3918c9a21" podNamespace="kube-system" podName="coredns-5dd5756b68-h9f8q" Jul 2 09:13:14.447678 kubelet[2492]: I0702 09:13:14.445590 2492 topology_manager.go:215] "Topology Admit Handler" podUID="83949e65-f15c-410c-b54a-eacd7cfb0118" podNamespace="calico-system" podName="calico-kube-controllers-78f46b9b5c-nkqlg" Jul 2 09:13:14.457510 systemd[1]: Created slice kubepods-burstable-podff42ceab_fd69_4ad7_97c0_e2c3789feaaa.slice - libcontainer container kubepods-burstable-podff42ceab_fd69_4ad7_97c0_e2c3789feaaa.slice. Jul 2 09:13:14.465821 systemd[1]: Created slice kubepods-burstable-pod8f7f87f1_7ae0_46a9_9d20_04f3918c9a21.slice - libcontainer container kubepods-burstable-pod8f7f87f1_7ae0_46a9_9d20_04f3918c9a21.slice. Jul 2 09:13:14.475727 systemd[1]: Created slice kubepods-besteffort-pod83949e65_f15c_410c_b54a_eacd7cfb0118.slice - libcontainer container kubepods-besteffort-pod83949e65_f15c_410c_b54a_eacd7cfb0118.slice. 
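The kubelet dns.go:153 "Nameserver limits exceeded" errors that recur through this log (and again just below) are a clamp, not a failure: the glibc resolver honours at most three nameserver entries, so kubelet applies the first three from the host resolv.conf and logs that the rest were omitted. A sketch of that truncation, assuming a standard /etc/resolv.conf:

from pathlib import Path

MAX_NAMESERVERS = 3  # glibc resolver limit, enforced by kubelet

def applied_nameserver_line(path: str = "/etc/resolv.conf") -> str:
    servers = [line.split()[1]
               for line in Path(path).read_text().splitlines()
               if line.startswith("nameserver") and len(line.split()) > 1]
    if len(servers) > MAX_NAMESERVERS:
        print(f"omitting {len(servers) - MAX_NAMESERVERS} nameserver(s)")
    # On this host the applied line is "1.1.1.1 1.0.0.1 8.8.8.8".
    return " ".join(servers[:MAX_NAMESERVERS])

print("applied nameserver line:", applied_nameserver_line())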
Jul 2 09:13:14.592207 kubelet[2492]: E0702 09:13:14.592170 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:14.593755 containerd[1440]: time="2024-07-02T09:13:14.593718497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 09:13:14.611011 kubelet[2492]: I0702 09:13:14.610899 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff42ceab-fd69-4ad7-97c0-e2c3789feaaa-config-volume\") pod \"coredns-5dd5756b68-wfv5c\" (UID: \"ff42ceab-fd69-4ad7-97c0-e2c3789feaaa\") " pod="kube-system/coredns-5dd5756b68-wfv5c" Jul 2 09:13:14.611011 kubelet[2492]: I0702 09:13:14.610994 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m4hs\" (UniqueName: \"kubernetes.io/projected/83949e65-f15c-410c-b54a-eacd7cfb0118-kube-api-access-9m4hs\") pod \"calico-kube-controllers-78f46b9b5c-nkqlg\" (UID: \"83949e65-f15c-410c-b54a-eacd7cfb0118\") " pod="calico-system/calico-kube-controllers-78f46b9b5c-nkqlg" Jul 2 09:13:14.611147 kubelet[2492]: I0702 09:13:14.611067 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f7f87f1-7ae0-46a9-9d20-04f3918c9a21-config-volume\") pod \"coredns-5dd5756b68-h9f8q\" (UID: \"8f7f87f1-7ae0-46a9-9d20-04f3918c9a21\") " pod="kube-system/coredns-5dd5756b68-h9f8q" Jul 2 09:13:14.611147 kubelet[2492]: I0702 09:13:14.611115 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83949e65-f15c-410c-b54a-eacd7cfb0118-tigera-ca-bundle\") pod \"calico-kube-controllers-78f46b9b5c-nkqlg\" (UID: \"83949e65-f15c-410c-b54a-eacd7cfb0118\") " pod="calico-system/calico-kube-controllers-78f46b9b5c-nkqlg" Jul 2 09:13:14.611147 kubelet[2492]: I0702 09:13:14.611143 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhptt\" (UniqueName: \"kubernetes.io/projected/8f7f87f1-7ae0-46a9-9d20-04f3918c9a21-kube-api-access-vhptt\") pod \"coredns-5dd5756b68-h9f8q\" (UID: \"8f7f87f1-7ae0-46a9-9d20-04f3918c9a21\") " pod="kube-system/coredns-5dd5756b68-h9f8q" Jul 2 09:13:14.611220 kubelet[2492]: I0702 09:13:14.611178 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sx5g\" (UniqueName: \"kubernetes.io/projected/ff42ceab-fd69-4ad7-97c0-e2c3789feaaa-kube-api-access-6sx5g\") pod \"coredns-5dd5756b68-wfv5c\" (UID: \"ff42ceab-fd69-4ad7-97c0-e2c3789feaaa\") " pod="kube-system/coredns-5dd5756b68-wfv5c" Jul 2 09:13:14.762250 kubelet[2492]: E0702 09:13:14.762154 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:14.762910 containerd[1440]: time="2024-07-02T09:13:14.762873007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wfv5c,Uid:ff42ceab-fd69-4ad7-97c0-e2c3789feaaa,Namespace:kube-system,Attempt:0,}" Jul 2 09:13:14.770145 kubelet[2492]: E0702 09:13:14.769833 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:14.770408 containerd[1440]: time="2024-07-02T09:13:14.770366836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-h9f8q,Uid:8f7f87f1-7ae0-46a9-9d20-04f3918c9a21,Namespace:kube-system,Attempt:0,}" Jul 2 09:13:14.779859 containerd[1440]: time="2024-07-02T09:13:14.779590182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f46b9b5c-nkqlg,Uid:83949e65-f15c-410c-b54a-eacd7cfb0118,Namespace:calico-system,Attempt:0,}" Jul 2 09:13:15.062975 containerd[1440]: time="2024-07-02T09:13:15.062916770Z" level=error msg="Failed to destroy network for sandbox \"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:13:15.064352 containerd[1440]: time="2024-07-02T09:13:15.064310688Z" level=error msg="encountered an error cleaning up failed sandbox \"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:13:15.064885 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f-shm.mount: Deactivated successfully. Jul 2 09:13:15.065747 containerd[1440]: time="2024-07-02T09:13:15.065704606Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wfv5c,Uid:ff42ceab-fd69-4ad7-97c0-e2c3789feaaa,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:13:15.065999 containerd[1440]: time="2024-07-02T09:13:15.065816206Z" level=error msg="Failed to destroy network for sandbox \"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:13:15.066428 containerd[1440]: time="2024-07-02T09:13:15.066394005Z" level=error msg="encountered an error cleaning up failed sandbox \"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:13:15.066471 containerd[1440]: time="2024-07-02T09:13:15.066453885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-h9f8q,Uid:8f7f87f1-7ae0-46a9-9d20-04f3918c9a21,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:13:15.066959 kubelet[2492]: E0702 09:13:15.066594 2492 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:13:15.066959 kubelet[2492]: E0702 09:13:15.066664 2492 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-h9f8q" Jul 2 09:13:15.066959 kubelet[2492]: E0702 09:13:15.066684 2492 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-h9f8q" Jul 2 09:13:15.067206 kubelet[2492]: E0702 09:13:15.066737 2492 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-h9f8q_kube-system(8f7f87f1-7ae0-46a9-9d20-04f3918c9a21)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-h9f8q_kube-system(8f7f87f1-7ae0-46a9-9d20-04f3918c9a21)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-h9f8q" podUID="8f7f87f1-7ae0-46a9-9d20-04f3918c9a21" Jul 2 09:13:15.067206 kubelet[2492]: E0702 09:13:15.067107 2492 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:13:15.067206 kubelet[2492]: E0702 09:13:15.067184 2492 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-wfv5c" Jul 2 09:13:15.067324 kubelet[2492]: E0702 09:13:15.067204 2492 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-5dd5756b68-wfv5c" Jul 2 09:13:15.067324 kubelet[2492]: E0702 09:13:15.067280 2492 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-wfv5c_kube-system(ff42ceab-fd69-4ad7-97c0-e2c3789feaaa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-wfv5c_kube-system(ff42ceab-fd69-4ad7-97c0-e2c3789feaaa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-wfv5c" podUID="ff42ceab-fd69-4ad7-97c0-e2c3789feaaa" Jul 2 09:13:15.068190 containerd[1440]: time="2024-07-02T09:13:15.068154363Z" level=error msg="Failed to destroy network for sandbox \"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:13:15.068579 containerd[1440]: time="2024-07-02T09:13:15.068545362Z" level=error msg="encountered an error cleaning up failed sandbox \"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:13:15.068696 containerd[1440]: time="2024-07-02T09:13:15.068671082Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f46b9b5c-nkqlg,Uid:83949e65-f15c-410c-b54a-eacd7cfb0118,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:13:15.069025 kubelet[2492]: E0702 09:13:15.069007 2492 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:13:15.069207 kubelet[2492]: E0702 09:13:15.069165 2492 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78f46b9b5c-nkqlg" Jul 2 09:13:15.069207 kubelet[2492]: E0702 09:13:15.069190 2492 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78f46b9b5c-nkqlg" Jul 2 09:13:15.069355 kubelet[2492]: E0702 09:13:15.069309 2492 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78f46b9b5c-nkqlg_calico-system(83949e65-f15c-410c-b54a-eacd7cfb0118)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78f46b9b5c-nkqlg_calico-system(83949e65-f15c-410c-b54a-eacd7cfb0118)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78f46b9b5c-nkqlg" podUID="83949e65-f15c-410c-b54a-eacd7cfb0118" Jul 2 09:13:15.526948 systemd[1]: Created slice kubepods-besteffort-pod0e5ec1b3_c9b9_48d9_b154_86cd38677ba2.slice - libcontainer container kubepods-besteffort-pod0e5ec1b3_c9b9_48d9_b154_86cd38677ba2.slice. Jul 2 09:13:15.537480 containerd[1440]: time="2024-07-02T09:13:15.537439233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j6v9l,Uid:0e5ec1b3-c9b9-48d9-b154-86cd38677ba2,Namespace:calico-system,Attempt:0,}" Jul 2 09:13:15.593966 kubelet[2492]: I0702 09:13:15.593935 2492 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" Jul 2 09:13:15.595342 containerd[1440]: time="2024-07-02T09:13:15.594892433Z" level=info msg="StopPodSandbox for \"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\"" Jul 2 09:13:15.595342 containerd[1440]: time="2024-07-02T09:13:15.595106113Z" level=info msg="Ensure that sandbox 72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee in task-service has been cleanup successfully" Jul 2 09:13:15.596786 kubelet[2492]: I0702 09:13:15.596746 2492 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" Jul 2 09:13:15.597295 containerd[1440]: time="2024-07-02T09:13:15.597258670Z" level=info msg="StopPodSandbox for \"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\"" Jul 2 09:13:15.597735 containerd[1440]: time="2024-07-02T09:13:15.597688749Z" level=info msg="Ensure that sandbox 1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33 in task-service has been cleanup successfully" Jul 2 09:13:15.602504 kubelet[2492]: I0702 09:13:15.602467 2492 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" Jul 2 09:13:15.605004 containerd[1440]: time="2024-07-02T09:13:15.604952979Z" level=info msg="StopPodSandbox for \"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\"" Jul 2 09:13:15.606357 containerd[1440]: time="2024-07-02T09:13:15.606219018Z" level=info msg="Ensure that sandbox ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f in task-service has been cleanup successfully" Jul 2 09:13:15.634496 containerd[1440]: time="2024-07-02T09:13:15.634440699Z" level=error msg="StopPodSandbox for \"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\" failed" error="failed to destroy network for sandbox 
\"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:13:15.634765 kubelet[2492]: E0702 09:13:15.634734 2492 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" Jul 2 09:13:15.634852 kubelet[2492]: E0702 09:13:15.634809 2492 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee"} Jul 2 09:13:15.634879 kubelet[2492]: E0702 09:13:15.634862 2492 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"83949e65-f15c-410c-b54a-eacd7cfb0118\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 09:13:15.634936 kubelet[2492]: E0702 09:13:15.634889 2492 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"83949e65-f15c-410c-b54a-eacd7cfb0118\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78f46b9b5c-nkqlg" podUID="83949e65-f15c-410c-b54a-eacd7cfb0118" Jul 2 09:13:15.647417 containerd[1440]: time="2024-07-02T09:13:15.647354321Z" level=error msg="StopPodSandbox for \"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\" failed" error="failed to destroy network for sandbox \"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:13:15.647699 kubelet[2492]: E0702 09:13:15.647669 2492 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" Jul 2 09:13:15.647753 kubelet[2492]: E0702 09:13:15.647723 2492 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33"} Jul 2 09:13:15.647778 kubelet[2492]: E0702 09:13:15.647758 2492 
kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8f7f87f1-7ae0-46a9-9d20-04f3918c9a21\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 2 09:13:15.648086 kubelet[2492]: E0702 09:13:15.647785 2492 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8f7f87f1-7ae0-46a9-9d20-04f3918c9a21\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-h9f8q" podUID="8f7f87f1-7ae0-46a9-9d20-04f3918c9a21"
Jul 2 09:13:15.651157 containerd[1440]: time="2024-07-02T09:13:15.651111636Z" level=error msg="StopPodSandbox for \"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\" failed" error="failed to destroy network for sandbox \"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 09:13:15.652383 kubelet[2492]: E0702 09:13:15.652348 2492 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f"
Jul 2 09:13:15.652468 kubelet[2492]: E0702 09:13:15.652398 2492 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f"}
Jul 2 09:13:15.652468 kubelet[2492]: E0702 09:13:15.652431 2492 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ff42ceab-fd69-4ad7-97c0-e2c3789feaaa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 2 09:13:15.652468 kubelet[2492]: E0702 09:13:15.652457 2492 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ff42ceab-fd69-4ad7-97c0-e2c3789feaaa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-wfv5c" podUID="ff42ceab-fd69-4ad7-97c0-e2c3789feaaa"
Jul 2 09:13:15.671137 containerd[1440]: time="2024-07-02T09:13:15.671086168Z" level=error msg="Failed to destroy network for sandbox \"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 09:13:15.671621 containerd[1440]: time="2024-07-02T09:13:15.671445327Z" level=error msg="encountered an error cleaning up failed sandbox \"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 09:13:15.671621 containerd[1440]: time="2024-07-02T09:13:15.671495807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j6v9l,Uid:0e5ec1b3-c9b9-48d9-b154-86cd38677ba2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 09:13:15.671917 kubelet[2492]: E0702 09:13:15.671879 2492 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 09:13:15.671993 kubelet[2492]: E0702 09:13:15.671944 2492 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j6v9l"
Jul 2 09:13:15.671993 kubelet[2492]: E0702 09:13:15.671966 2492 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j6v9l"
Jul 2 09:13:15.672091 kubelet[2492]: E0702 09:13:15.672024 2492 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j6v9l_calico-system(0e5ec1b3-c9b9-48d9-b154-86cd38677ba2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j6v9l_calico-system(0e5ec1b3-c9b9-48d9-b154-86cd38677ba2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j6v9l" podUID="0e5ec1b3-c9b9-48d9-b154-86cd38677ba2"
Jul 2 09:13:15.797841 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33-shm.mount: Deactivated successfully.
Jul 2 09:13:15.797929 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee-shm.mount: Deactivated successfully.
Jul 2 09:13:16.606292 kubelet[2492]: I0702 09:13:16.606255 2492 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e"
Jul 2 09:13:16.607095 containerd[1440]: time="2024-07-02T09:13:16.606952005Z" level=info msg="StopPodSandbox for \"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\""
Jul 2 09:13:16.607361 containerd[1440]: time="2024-07-02T09:13:16.607191004Z" level=info msg="Ensure that sandbox b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e in task-service has been cleanup successfully"
Jul 2 09:13:16.645778 containerd[1440]: time="2024-07-02T09:13:16.645725074Z" level=error msg="StopPodSandbox for \"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\" failed" error="failed to destroy network for sandbox \"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 09:13:16.645999 kubelet[2492]: E0702 09:13:16.645967 2492 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e"
Jul 2 09:13:16.646056 kubelet[2492]: E0702 09:13:16.646019 2492 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e"}
Jul 2 09:13:16.646086 kubelet[2492]: E0702 09:13:16.646072 2492 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0e5ec1b3-c9b9-48d9-b154-86cd38677ba2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 2 09:13:16.646147 kubelet[2492]: E0702 09:13:16.646117 2492 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0e5ec1b3-c9b9-48d9-b154-86cd38677ba2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j6v9l" podUID="0e5ec1b3-c9b9-48d9-b154-86cd38677ba2"
Jul 2 09:13:17.238629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1918378288.mount: Deactivated successfully.
Jul 2 09:13:17.488589 containerd[1440]: time="2024-07-02T09:13:17.487832101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:13:17.488589 containerd[1440]: time="2024-07-02T09:13:17.488404900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350"
Jul 2 09:13:17.489494 containerd[1440]: time="2024-07-02T09:13:17.489407099Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:13:17.491660 containerd[1440]: time="2024-07-02T09:13:17.491615336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:13:17.492476 containerd[1440]: time="2024-07-02T09:13:17.492444535Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 2.898678598s"
Jul 2 09:13:17.492560 containerd[1440]: time="2024-07-02T09:13:17.492480855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\""
Jul 2 09:13:17.499250 containerd[1440]: time="2024-07-02T09:13:17.499205367Z" level=info msg="CreateContainer within sandbox \"fb2f13ae952629b045e506cbd731c0f338e17e5195a7f30b77a7df9cb0eb7348\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jul 2 09:13:17.535535 containerd[1440]: time="2024-07-02T09:13:17.535412083Z" level=info msg="CreateContainer within sandbox \"fb2f13ae952629b045e506cbd731c0f338e17e5195a7f30b77a7df9cb0eb7348\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c0e9b6cf6ba46b8c974bfeb4abbf3e17f7f868b6d60ff7e7740d485a48f87b9e\""
Jul 2 09:13:17.536440 containerd[1440]: time="2024-07-02T09:13:17.536370042Z" level=info msg="StartContainer for \"c0e9b6cf6ba46b8c974bfeb4abbf3e17f7f868b6d60ff7e7740d485a48f87b9e\""
Jul 2 09:13:17.587213 systemd[1]: Started cri-containerd-c0e9b6cf6ba46b8c974bfeb4abbf3e17f7f868b6d60ff7e7740d485a48f87b9e.scope - libcontainer container c0e9b6cf6ba46b8c974bfeb4abbf3e17f7f868b6d60ff7e7740d485a48f87b9e.
Jul 2 09:13:17.613109 containerd[1440]: time="2024-07-02T09:13:17.613016309Z" level=info msg="StartContainer for \"c0e9b6cf6ba46b8c974bfeb4abbf3e17f7f868b6d60ff7e7740d485a48f87b9e\" returns successfully"
Jul 2 09:13:17.775710 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jul 2 09:13:17.775890 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jul 2 09:13:18.619018 kubelet[2492]: E0702 09:13:18.618912 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:13:18.632536 kubelet[2492]: I0702 09:13:18.632316 2492 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-bt6hv" podStartSLOduration=2.408283019 podCreationTimestamp="2024-07-02 09:13:05 +0000 UTC" firstStartedPulling="2024-07-02 09:13:06.268712397 +0000 UTC m=+19.845093902" lastFinishedPulling="2024-07-02 09:13:17.492704135 +0000 UTC m=+31.069085640" observedRunningTime="2024-07-02 09:13:18.632189197 +0000 UTC m=+32.208570702" watchObservedRunningTime="2024-07-02 09:13:18.632274757 +0000 UTC m=+32.208656262"
Jul 2 09:13:19.620627 kubelet[2492]: E0702 09:13:19.619432 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:13:21.238343 systemd[1]: Started sshd@7-10.0.0.88:22-10.0.0.1:34168.service - OpenSSH per-connection server daemon (10.0.0.1:34168).
Jul 2 09:13:21.278326 sshd[3686]: Accepted publickey for core from 10.0.0.1 port 34168 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:13:21.278987 sshd[3686]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:13:21.285986 systemd-logind[1419]: New session 8 of user core.
Jul 2 09:13:21.289226 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 2 09:13:21.424138 sshd[3686]: pam_unix(sshd:session): session closed for user core
Jul 2 09:13:21.426795 systemd[1]: sshd@7-10.0.0.88:22-10.0.0.1:34168.service: Deactivated successfully.
Jul 2 09:13:21.429356 systemd-logind[1419]: Session 8 logged out. Waiting for processes to exit.
Jul 2 09:13:21.429474 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 09:13:21.431697 systemd-logind[1419]: Removed session 8.
Jul 2 09:13:24.939411 kubelet[2492]: I0702 09:13:24.939206 2492 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 2 09:13:24.940224 kubelet[2492]: E0702 09:13:24.939890 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:13:25.474726 systemd-networkd[1375]: vxlan.calico: Link UP
Jul 2 09:13:25.474848 systemd-networkd[1375]: vxlan.calico: Gained carrier
Jul 2 09:13:25.632878 kubelet[2492]: E0702 09:13:25.632841 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:13:26.456319 systemd[1]: Started sshd@8-10.0.0.88:22-10.0.0.1:34182.service - OpenSSH per-connection server daemon (10.0.0.1:34182).
Jul 2 09:13:26.499874 sshd[3924]: Accepted publickey for core from 10.0.0.1 port 34182 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:13:26.501340 sshd[3924]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:13:26.505332 systemd-logind[1419]: New session 9 of user core.
Jul 2 09:13:26.516201 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 2 09:13:26.635805 sshd[3924]: pam_unix(sshd:session): session closed for user core
Jul 2 09:13:26.639204 systemd[1]: sshd@8-10.0.0.88:22-10.0.0.1:34182.service: Deactivated successfully.
Jul 2 09:13:26.642686 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 09:13:26.643774 systemd-logind[1419]: Session 9 logged out. Waiting for processes to exit.
Jul 2 09:13:26.644543 systemd-logind[1419]: Removed session 9.
Jul 2 09:13:27.233283 systemd-networkd[1375]: vxlan.calico: Gained IPv6LL
Jul 2 09:13:27.521387 containerd[1440]: time="2024-07-02T09:13:27.521281344Z" level=info msg="StopPodSandbox for \"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\""
Jul 2 09:13:27.521387 containerd[1440]: time="2024-07-02T09:13:27.521310504Z" level=info msg="StopPodSandbox for \"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\""
Jul 2 09:13:27.729153 containerd[1440]: 2024-07-02 09:13:27.600 [INFO][3973] k8s.go 608: Cleaning up netns ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee"
Jul 2 09:13:27.729153 containerd[1440]: 2024-07-02 09:13:27.600 [INFO][3973] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" iface="eth0" netns="/var/run/netns/cni-a63e8dad-3d72-0fe3-658c-fa2b43165041"
Jul 2 09:13:27.729153 containerd[1440]: 2024-07-02 09:13:27.601 [INFO][3973] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" iface="eth0" netns="/var/run/netns/cni-a63e8dad-3d72-0fe3-658c-fa2b43165041"
Jul 2 09:13:27.729153 containerd[1440]: 2024-07-02 09:13:27.602 [INFO][3973] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" iface="eth0" netns="/var/run/netns/cni-a63e8dad-3d72-0fe3-658c-fa2b43165041"
Jul 2 09:13:27.729153 containerd[1440]: 2024-07-02 09:13:27.602 [INFO][3973] k8s.go 615: Releasing IP address(es) ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee"
Jul 2 09:13:27.729153 containerd[1440]: 2024-07-02 09:13:27.602 [INFO][3973] utils.go 188: Calico CNI releasing IP address ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee"
Jul 2 09:13:27.729153 containerd[1440]: 2024-07-02 09:13:27.710 [INFO][3987] ipam_plugin.go 411: Releasing address using handleID ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" HandleID="k8s-pod-network.72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" Workload="localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0"
Jul 2 09:13:27.729153 containerd[1440]: 2024-07-02 09:13:27.711 [INFO][3987] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 09:13:27.729153 containerd[1440]: 2024-07-02 09:13:27.711 [INFO][3987] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 09:13:27.729153 containerd[1440]: 2024-07-02 09:13:27.724 [WARNING][3987] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" HandleID="k8s-pod-network.72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" Workload="localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0"
Jul 2 09:13:27.729153 containerd[1440]: 2024-07-02 09:13:27.724 [INFO][3987] ipam_plugin.go 439: Releasing address using workloadID ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" HandleID="k8s-pod-network.72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" Workload="localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0"
Jul 2 09:13:27.729153 containerd[1440]: 2024-07-02 09:13:27.725 [INFO][3987] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 09:13:27.729153 containerd[1440]: 2024-07-02 09:13:27.727 [INFO][3973] k8s.go 621: Teardown processing complete. ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee"
Jul 2 09:13:27.731213 systemd[1]: run-netns-cni\x2da63e8dad\x2d3d72\x2d0fe3\x2d658c\x2dfa2b43165041.mount: Deactivated successfully.
Jul 2 09:13:27.732366 containerd[1440]: time="2024-07-02T09:13:27.732325089Z" level=info msg="TearDown network for sandbox \"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\" successfully"
Jul 2 09:13:27.732366 containerd[1440]: time="2024-07-02T09:13:27.732359329Z" level=info msg="StopPodSandbox for \"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\" returns successfully"
Jul 2 09:13:27.733456 containerd[1440]: time="2024-07-02T09:13:27.733063209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f46b9b5c-nkqlg,Uid:83949e65-f15c-410c-b54a-eacd7cfb0118,Namespace:calico-system,Attempt:1,}"
Jul 2 09:13:27.738521 containerd[1440]: 2024-07-02 09:13:27.603 [INFO][3972] k8s.go 608: Cleaning up netns ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e"
Jul 2 09:13:27.738521 containerd[1440]: 2024-07-02 09:13:27.603 [INFO][3972] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" iface="eth0" netns="/var/run/netns/cni-33c206ee-aada-6c89-bf48-2dc3aa659bdd"
Jul 2 09:13:27.738521 containerd[1440]: 2024-07-02 09:13:27.603 [INFO][3972] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" iface="eth0" netns="/var/run/netns/cni-33c206ee-aada-6c89-bf48-2dc3aa659bdd"
Jul 2 09:13:27.738521 containerd[1440]: 2024-07-02 09:13:27.603 [INFO][3972] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" iface="eth0" netns="/var/run/netns/cni-33c206ee-aada-6c89-bf48-2dc3aa659bdd"
Jul 2 09:13:27.738521 containerd[1440]: 2024-07-02 09:13:27.603 [INFO][3972] k8s.go 615: Releasing IP address(es) ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e"
Jul 2 09:13:27.738521 containerd[1440]: 2024-07-02 09:13:27.604 [INFO][3972] utils.go 188: Calico CNI releasing IP address ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e"
Jul 2 09:13:27.738521 containerd[1440]: 2024-07-02 09:13:27.710 [INFO][3988] ipam_plugin.go 411: Releasing address using handleID ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" HandleID="k8s-pod-network.b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" Workload="localhost-k8s-csi--node--driver--j6v9l-eth0"
Jul 2 09:13:27.738521 containerd[1440]: 2024-07-02 09:13:27.711 [INFO][3988] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 09:13:27.738521 containerd[1440]: 2024-07-02 09:13:27.725 [INFO][3988] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 09:13:27.738521 containerd[1440]: 2024-07-02 09:13:27.734 [WARNING][3988] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" HandleID="k8s-pod-network.b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" Workload="localhost-k8s-csi--node--driver--j6v9l-eth0"
Jul 2 09:13:27.738521 containerd[1440]: 2024-07-02 09:13:27.734 [INFO][3988] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" HandleID="k8s-pod-network.b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" Workload="localhost-k8s-csi--node--driver--j6v9l-eth0"
Jul 2 09:13:27.738521 containerd[1440]: 2024-07-02 09:13:27.735 [INFO][3988] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 09:13:27.738521 containerd[1440]: 2024-07-02 09:13:27.736 [INFO][3972] k8s.go 621: Teardown processing complete. ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e"
Jul 2 09:13:27.739583 containerd[1440]: time="2024-07-02T09:13:27.739235445Z" level=info msg="TearDown network for sandbox \"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\" successfully"
Jul 2 09:13:27.739583 containerd[1440]: time="2024-07-02T09:13:27.739262245Z" level=info msg="StopPodSandbox for \"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\" returns successfully"
Jul 2 09:13:27.740531 containerd[1440]: time="2024-07-02T09:13:27.739704364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j6v9l,Uid:0e5ec1b3-c9b9-48d9-b154-86cd38677ba2,Namespace:calico-system,Attempt:1,}"
Jul 2 09:13:27.741959 systemd[1]: run-netns-cni\x2d33c206ee\x2daada\x2d6c89\x2dbf48\x2d2dc3aa659bdd.mount: Deactivated successfully.
Jul 2 09:13:27.866126 systemd-networkd[1375]: cali8148568124f: Link UP
Jul 2 09:13:27.866493 systemd-networkd[1375]: cali8148568124f: Gained carrier
Jul 2 09:13:27.888252 containerd[1440]: 2024-07-02 09:13:27.786 [INFO][4004] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--j6v9l-eth0 csi-node-driver- calico-system 0e5ec1b3-c9b9-48d9-b154-86cd38677ba2 793 0 2024-07-02 09:13:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-j6v9l eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali8148568124f [] []}} ContainerID="4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a" Namespace="calico-system" Pod="csi-node-driver-j6v9l" WorkloadEndpoint="localhost-k8s-csi--node--driver--j6v9l-"
Jul 2 09:13:27.888252 containerd[1440]: 2024-07-02 09:13:27.786 [INFO][4004] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a" Namespace="calico-system" Pod="csi-node-driver-j6v9l" WorkloadEndpoint="localhost-k8s-csi--node--driver--j6v9l-eth0"
Jul 2 09:13:27.888252 containerd[1440]: 2024-07-02 09:13:27.811 [INFO][4031] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a" HandleID="k8s-pod-network.4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a" Workload="localhost-k8s-csi--node--driver--j6v9l-eth0"
Jul 2 09:13:27.888252 containerd[1440]: 2024-07-02 09:13:27.824 [INFO][4031] ipam_plugin.go 264: Auto assigning IP ContainerID="4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a" HandleID="k8s-pod-network.4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a" Workload="localhost-k8s-csi--node--driver--j6v9l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000372d30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-j6v9l", "timestamp":"2024-07-02 09:13:27.811576799 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 2 09:13:27.888252 containerd[1440]: 2024-07-02 09:13:27.824 [INFO][4031] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 09:13:27.888252 containerd[1440]: 2024-07-02 09:13:27.824 [INFO][4031] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 09:13:27.888252 containerd[1440]: 2024-07-02 09:13:27.824 [INFO][4031] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 2 09:13:27.888252 containerd[1440]: 2024-07-02 09:13:27.825 [INFO][4031] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a" host="localhost"
Jul 2 09:13:27.888252 containerd[1440]: 2024-07-02 09:13:27.838 [INFO][4031] ipam.go 372: Looking up existing affinities for host host="localhost"
Jul 2 09:13:27.888252 containerd[1440]: 2024-07-02 09:13:27.847 [INFO][4031] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 2 09:13:27.888252 containerd[1440]: 2024-07-02 09:13:27.849 [INFO][4031] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 2 09:13:27.888252 containerd[1440]: 2024-07-02 09:13:27.852 [INFO][4031] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 2 09:13:27.888252 containerd[1440]: 2024-07-02 09:13:27.852 [INFO][4031] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a" host="localhost"
Jul 2 09:13:27.888252 containerd[1440]: 2024-07-02 09:13:27.853 [INFO][4031] ipam.go 1685: Creating new handle: k8s-pod-network.4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a
Jul 2 09:13:27.888252 containerd[1440]: 2024-07-02 09:13:27.856 [INFO][4031] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a" host="localhost"
Jul 2 09:13:27.888252 containerd[1440]: 2024-07-02 09:13:27.860 [INFO][4031] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a" host="localhost"
Jul 2 09:13:27.888252 containerd[1440]: 2024-07-02 09:13:27.860 [INFO][4031] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a" host="localhost"
Jul 2 09:13:27.888252 containerd[1440]: 2024-07-02 09:13:27.860 [INFO][4031] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 09:13:27.888252 containerd[1440]: 2024-07-02 09:13:27.860 [INFO][4031] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a" HandleID="k8s-pod-network.4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a" Workload="localhost-k8s-csi--node--driver--j6v9l-eth0"
Jul 2 09:13:27.888857 containerd[1440]: 2024-07-02 09:13:27.863 [INFO][4004] k8s.go 386: Populated endpoint ContainerID="4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a" Namespace="calico-system" Pod="csi-node-driver-j6v9l" WorkloadEndpoint="localhost-k8s-csi--node--driver--j6v9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--j6v9l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e5ec1b3-c9b9-48d9-b154-86cd38677ba2", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 13, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-j6v9l", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8148568124f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 09:13:27.888857 containerd[1440]: 2024-07-02 09:13:27.863 [INFO][4004] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a" Namespace="calico-system" Pod="csi-node-driver-j6v9l" WorkloadEndpoint="localhost-k8s-csi--node--driver--j6v9l-eth0"
Jul 2 09:13:27.888857 containerd[1440]: 2024-07-02 09:13:27.863 [INFO][4004] dataplane_linux.go 68: Setting the host side veth name to cali8148568124f ContainerID="4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a" Namespace="calico-system" Pod="csi-node-driver-j6v9l" WorkloadEndpoint="localhost-k8s-csi--node--driver--j6v9l-eth0"
Jul 2 09:13:27.888857 containerd[1440]: 2024-07-02 09:13:27.866 [INFO][4004] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a" Namespace="calico-system" Pod="csi-node-driver-j6v9l" WorkloadEndpoint="localhost-k8s-csi--node--driver--j6v9l-eth0"
Jul 2 09:13:27.888857 containerd[1440]: 2024-07-02 09:13:27.867 [INFO][4004] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a" Namespace="calico-system" Pod="csi-node-driver-j6v9l" WorkloadEndpoint="localhost-k8s-csi--node--driver--j6v9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--j6v9l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e5ec1b3-c9b9-48d9-b154-86cd38677ba2", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 13, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a", Pod:"csi-node-driver-j6v9l", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8148568124f", MAC:"b6:f9:32:dc:2e:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 09:13:27.888857 containerd[1440]: 2024-07-02 09:13:27.881 [INFO][4004] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a" Namespace="calico-system" Pod="csi-node-driver-j6v9l" WorkloadEndpoint="localhost-k8s-csi--node--driver--j6v9l-eth0"
Jul 2 09:13:27.913917 systemd-networkd[1375]: cali47a8c3dba01: Link UP
Jul 2 09:13:27.914129 systemd-networkd[1375]: cali47a8c3dba01: Gained carrier
Jul 2 09:13:27.931110 containerd[1440]: time="2024-07-02T09:13:27.931025242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 09:13:27.931292 containerd[1440]: time="2024-07-02T09:13:27.931100362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:13:27.931348 containerd[1440]: 2024-07-02 09:13:27.786 [INFO][4018] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0 calico-kube-controllers-78f46b9b5c- calico-system 83949e65-f15c-410c-b54a-eacd7cfb0118 792 0 2024-07-02 09:13:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78f46b9b5c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-78f46b9b5c-nkqlg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali47a8c3dba01 [] []}} ContainerID="906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87" Namespace="calico-system" Pod="calico-kube-controllers-78f46b9b5c-nkqlg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-"
Jul 2 09:13:27.931348 containerd[1440]: 2024-07-02 09:13:27.786 [INFO][4018] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87" Namespace="calico-system" Pod="calico-kube-controllers-78f46b9b5c-nkqlg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0"
Jul 2 09:13:27.931348 containerd[1440]: 2024-07-02 09:13:27.814 [INFO][4030] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87" HandleID="k8s-pod-network.906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87" Workload="localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0"
Jul 2 09:13:27.931348 containerd[1440]: 2024-07-02 09:13:27.832 [INFO][4030] ipam_plugin.go 264: Auto assigning IP ContainerID="906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87" HandleID="k8s-pod-network.906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87" Workload="localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000320330), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-78f46b9b5c-nkqlg", "timestamp":"2024-07-02 09:13:27.814338397 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 2 09:13:27.931348 containerd[1440]: 2024-07-02 09:13:27.832 [INFO][4030] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 09:13:27.931348 containerd[1440]: 2024-07-02 09:13:27.860 [INFO][4030] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 09:13:27.931348 containerd[1440]: 2024-07-02 09:13:27.860 [INFO][4030] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 2 09:13:27.931348 containerd[1440]: 2024-07-02 09:13:27.862 [INFO][4030] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87" host="localhost"
Jul 2 09:13:27.931348 containerd[1440]: 2024-07-02 09:13:27.868 [INFO][4030] ipam.go 372: Looking up existing affinities for host host="localhost"
Jul 2 09:13:27.931348 containerd[1440]: 2024-07-02 09:13:27.872 [INFO][4030] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 2 09:13:27.931348 containerd[1440]: 2024-07-02 09:13:27.880 [INFO][4030] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 2 09:13:27.931348 containerd[1440]: 2024-07-02 09:13:27.888 [INFO][4030] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 2 09:13:27.931348 containerd[1440]: 2024-07-02 09:13:27.888 [INFO][4030] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87" host="localhost"
Jul 2 09:13:27.931348 containerd[1440]: 2024-07-02 09:13:27.892 [INFO][4030] ipam.go 1685: Creating new handle: k8s-pod-network.906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87
Jul 2 09:13:27.931348 containerd[1440]: 2024-07-02 09:13:27.898 [INFO][4030] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87" host="localhost"
Jul 2 09:13:27.931348 containerd[1440]: 2024-07-02 09:13:27.905 [INFO][4030] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87" host="localhost"
Jul 2 09:13:27.931348 containerd[1440]: 2024-07-02 09:13:27.905 [INFO][4030] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87" host="localhost"
Jul 2 09:13:27.931348 containerd[1440]: 2024-07-02 09:13:27.905 [INFO][4030] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 09:13:27.931348 containerd[1440]: 2024-07-02 09:13:27.905 [INFO][4030] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87" HandleID="k8s-pod-network.906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87" Workload="localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0"
Jul 2 09:13:27.931831 containerd[1440]: 2024-07-02 09:13:27.910 [INFO][4018] k8s.go 386: Populated endpoint ContainerID="906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87" Namespace="calico-system" Pod="calico-kube-controllers-78f46b9b5c-nkqlg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0", GenerateName:"calico-kube-controllers-78f46b9b5c-", Namespace:"calico-system", SelfLink:"", UID:"83949e65-f15c-410c-b54a-eacd7cfb0118", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 13, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f46b9b5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-78f46b9b5c-nkqlg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali47a8c3dba01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 09:13:27.931831 containerd[1440]: 2024-07-02 09:13:27.910 [INFO][4018] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87" Namespace="calico-system" Pod="calico-kube-controllers-78f46b9b5c-nkqlg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0"
Jul 2 09:13:27.931831 containerd[1440]: 2024-07-02 09:13:27.911 [INFO][4018] dataplane_linux.go 68: Setting the host side veth name to cali47a8c3dba01 ContainerID="906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87" Namespace="calico-system" Pod="calico-kube-controllers-78f46b9b5c-nkqlg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0"
Jul 2 09:13:27.931831 containerd[1440]: 2024-07-02 09:13:27.914 [INFO][4018] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87" Namespace="calico-system" Pod="calico-kube-controllers-78f46b9b5c-nkqlg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0"
Jul 2 09:13:27.931831 containerd[1440]: 2024-07-02 09:13:27.915 [INFO][4018] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87" Namespace="calico-system" Pod="calico-kube-controllers-78f46b9b5c-nkqlg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0", GenerateName:"calico-kube-controllers-78f46b9b5c-", Namespace:"calico-system", SelfLink:"", UID:"83949e65-f15c-410c-b54a-eacd7cfb0118", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 13, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f46b9b5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87", Pod:"calico-kube-controllers-78f46b9b5c-nkqlg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali47a8c3dba01", MAC:"b6:b4:a7:90:b7:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 09:13:27.931831 containerd[1440]: 2024-07-02 09:13:27.926 [INFO][4018] k8s.go 500: Wrote updated endpoint to datastore ContainerID="906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87" Namespace="calico-system" Pod="calico-kube-controllers-78f46b9b5c-nkqlg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0"
Jul 2 09:13:27.932158 containerd[1440]: time="2024-07-02T09:13:27.931708842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 09:13:27.932158 containerd[1440]: time="2024-07-02T09:13:27.931807882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:13:27.959964 containerd[1440]: time="2024-07-02T09:13:27.959659664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 09:13:27.959964 containerd[1440]: time="2024-07-02T09:13:27.959784424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:13:27.959964 containerd[1440]: time="2024-07-02T09:13:27.959836624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 09:13:27.959964 containerd[1440]: time="2024-07-02T09:13:27.959853864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:13:27.961582 systemd[1]: Started cri-containerd-4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a.scope - libcontainer container 4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a.
Jul 2 09:13:27.974929 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 09:13:27.982200 systemd[1]: Started cri-containerd-906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87.scope - libcontainer container 906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87.
Jul 2 09:13:27.990932 containerd[1440]: time="2024-07-02T09:13:27.990900444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j6v9l,Uid:0e5ec1b3-c9b9-48d9-b154-86cd38677ba2,Namespace:calico-system,Attempt:1,} returns sandbox id \"4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a\""
Jul 2 09:13:27.992841 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 09:13:27.994171 containerd[1440]: time="2024-07-02T09:13:27.994144802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\""
Jul 2 09:13:28.010908 containerd[1440]: time="2024-07-02T09:13:28.010876192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f46b9b5c-nkqlg,Uid:83949e65-f15c-410c-b54a-eacd7cfb0118,Namespace:calico-system,Attempt:1,} returns sandbox id \"906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87\""
Jul 2 09:13:28.520648 containerd[1440]: time="2024-07-02T09:13:28.520612607Z" level=info msg="StopPodSandbox for \"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\""
Jul 2 09:13:28.593334 containerd[1440]: 2024-07-02 09:13:28.562 [INFO][4168] k8s.go 608: Cleaning up netns ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33"
Jul 2 09:13:28.593334 containerd[1440]: 2024-07-02 09:13:28.562 [INFO][4168] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" iface="eth0" netns="/var/run/netns/cni-6c18a6fe-52e6-a39a-fa7c-81913482c904"
Jul 2 09:13:28.593334 containerd[1440]: 2024-07-02 09:13:28.563 [INFO][4168] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" iface="eth0" netns="/var/run/netns/cni-6c18a6fe-52e6-a39a-fa7c-81913482c904"
Jul 2 09:13:28.593334 containerd[1440]: 2024-07-02 09:13:28.563 [INFO][4168] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" iface="eth0" netns="/var/run/netns/cni-6c18a6fe-52e6-a39a-fa7c-81913482c904"
Jul 2 09:13:28.593334 containerd[1440]: 2024-07-02 09:13:28.563 [INFO][4168] k8s.go 615: Releasing IP address(es) ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33"
Jul 2 09:13:28.593334 containerd[1440]: 2024-07-02 09:13:28.563 [INFO][4168] utils.go 188: Calico CNI releasing IP address ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33"
Jul 2 09:13:28.593334 containerd[1440]: 2024-07-02 09:13:28.581 [INFO][4176] ipam_plugin.go 411: Releasing address using handleID ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" HandleID="k8s-pod-network.1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" Workload="localhost-k8s-coredns--5dd5756b68--h9f8q-eth0"
Jul 2 09:13:28.593334 containerd[1440]: 2024-07-02 09:13:28.581 [INFO][4176] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 09:13:28.593334 containerd[1440]: 2024-07-02 09:13:28.581 [INFO][4176] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 09:13:28.593334 containerd[1440]: 2024-07-02 09:13:28.588 [WARNING][4176] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" HandleID="k8s-pod-network.1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" Workload="localhost-k8s-coredns--5dd5756b68--h9f8q-eth0"
Jul 2 09:13:28.593334 containerd[1440]: 2024-07-02 09:13:28.588 [INFO][4176] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" HandleID="k8s-pod-network.1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" Workload="localhost-k8s-coredns--5dd5756b68--h9f8q-eth0"
Jul 2 09:13:28.593334 containerd[1440]: 2024-07-02 09:13:28.590 [INFO][4176] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 09:13:28.593334 containerd[1440]: 2024-07-02 09:13:28.591 [INFO][4168] k8s.go 621: Teardown processing complete. ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33"
Jul 2 09:13:28.593878 containerd[1440]: time="2024-07-02T09:13:28.593463323Z" level=info msg="TearDown network for sandbox \"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\" successfully"
Jul 2 09:13:28.593878 containerd[1440]: time="2024-07-02T09:13:28.593490323Z" level=info msg="StopPodSandbox for \"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\" returns successfully"
Jul 2 09:13:28.593927 kubelet[2492]: E0702 09:13:28.593762 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:13:28.595134 containerd[1440]: time="2024-07-02T09:13:28.594226483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-h9f8q,Uid:8f7f87f1-7ae0-46a9-9d20-04f3918c9a21,Namespace:kube-system,Attempt:1,}"
Jul 2 09:13:28.696649 systemd-networkd[1375]: calicd5c15519a6: Link UP
Jul 2 09:13:28.696797 systemd-networkd[1375]: calicd5c15519a6: Gained carrier
Jul 2 09:13:28.711957 containerd[1440]: 2024-07-02 09:13:28.632 [INFO][4185] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--h9f8q-eth0 coredns-5dd5756b68- kube-system 8f7f87f1-7ae0-46a9-9d20-04f3918c9a21 807 0 2024-07-02 09:13:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-h9f8q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicd5c15519a6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc" Namespace="kube-system" Pod="coredns-5dd5756b68-h9f8q" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--h9f8q-"
Jul 2 09:13:28.711957 containerd[1440]: 2024-07-02 09:13:28.633 [INFO][4185] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc" Namespace="kube-system" Pod="coredns-5dd5756b68-h9f8q" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--h9f8q-eth0"
Jul 2 09:13:28.711957 containerd[1440]: 2024-07-02 09:13:28.657 [INFO][4198] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc" HandleID="k8s-pod-network.76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc" Workload="localhost-k8s-coredns--5dd5756b68--h9f8q-eth0"
Jul 2 09:13:28.711957 containerd[1440]: 2024-07-02 09:13:28.667 [INFO][4198] ipam_plugin.go 264: Auto assigning IP ContainerID="76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc" HandleID="k8s-pod-network.76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc" Workload="localhost-k8s-coredns--5dd5756b68--h9f8q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d290), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-h9f8q", "timestamp":"2024-07-02 09:13:28.657866045 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 2 09:13:28.711957 containerd[1440]: 2024-07-02 09:13:28.668 [INFO][4198] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 09:13:28.711957 containerd[1440]: 2024-07-02 09:13:28.668 [INFO][4198] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 09:13:28.711957 containerd[1440]: 2024-07-02 09:13:28.668 [INFO][4198] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 2 09:13:28.711957 containerd[1440]: 2024-07-02 09:13:28.669 [INFO][4198] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc" host="localhost"
Jul 2 09:13:28.711957 containerd[1440]: 2024-07-02 09:13:28.673 [INFO][4198] ipam.go 372: Looking up existing affinities for host host="localhost"
Jul 2 09:13:28.711957 containerd[1440]: 2024-07-02 09:13:28.676 [INFO][4198] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 2 09:13:28.711957 containerd[1440]: 2024-07-02 09:13:28.678 [INFO][4198] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 2 09:13:28.711957 containerd[1440]: 2024-07-02 09:13:28.680 [INFO][4198] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 2 09:13:28.711957 containerd[1440]: 2024-07-02 09:13:28.680 [INFO][4198] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc" host="localhost"
Jul 2 09:13:28.711957 containerd[1440]: 2024-07-02 09:13:28.681 [INFO][4198] ipam.go 1685: Creating new handle: k8s-pod-network.76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc
Jul 2 09:13:28.711957 containerd[1440]: 2024-07-02 09:13:28.684 [INFO][4198] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc" host="localhost"
Jul 2 09:13:28.711957 containerd[1440]: 2024-07-02 09:13:28.688 [INFO][4198] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc" host="localhost"
Jul 2 09:13:28.711957 containerd[1440]: 2024-07-02 09:13:28.688 [INFO][4198] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc" host="localhost"
Jul 2 09:13:28.711957 containerd[1440]: 2024-07-02 09:13:28.689 [INFO][4198] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 09:13:28.711957 containerd[1440]: 2024-07-02 09:13:28.689 [INFO][4198] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc" HandleID="k8s-pod-network.76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc" Workload="localhost-k8s-coredns--5dd5756b68--h9f8q-eth0"
Jul 2 09:13:28.712554 containerd[1440]: 2024-07-02 09:13:28.692 [INFO][4185] k8s.go 386: Populated endpoint ContainerID="76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc" Namespace="kube-system" Pod="coredns-5dd5756b68-h9f8q" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--h9f8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--h9f8q-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"8f7f87f1-7ae0-46a9-9d20-04f3918c9a21", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 13, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-h9f8q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd5c15519a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 09:13:28.712554 containerd[1440]: 2024-07-02 09:13:28.693 [INFO][4185] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc" Namespace="kube-system" Pod="coredns-5dd5756b68-h9f8q" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--h9f8q-eth0"
Jul 2 09:13:28.712554 containerd[1440]: 2024-07-02 09:13:28.693 [INFO][4185] dataplane_linux.go 68: Setting the host side veth name to calicd5c15519a6 ContainerID="76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc" Namespace="kube-system" Pod="coredns-5dd5756b68-h9f8q" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--h9f8q-eth0"
Jul 2 09:13:28.712554 containerd[1440]: 2024-07-02 09:13:28.695 [INFO][4185] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc" Namespace="kube-system" Pod="coredns-5dd5756b68-h9f8q" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--h9f8q-eth0"
Jul 2 09:13:28.712554 containerd[1440]: 2024-07-02 09:13:28.699 [INFO][4185] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc" Namespace="kube-system" Pod="coredns-5dd5756b68-h9f8q" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--h9f8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--h9f8q-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"8f7f87f1-7ae0-46a9-9d20-04f3918c9a21", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 13, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc", Pod:"coredns-5dd5756b68-h9f8q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd5c15519a6", MAC:"32:32:e5:7c:cc:f2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 09:13:28.712554 containerd[1440]: 2024-07-02 09:13:28.707 [INFO][4185] k8s.go 500: Wrote updated endpoint to datastore ContainerID="76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc" Namespace="kube-system" Pod="coredns-5dd5756b68-h9f8q" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--h9f8q-eth0"
Jul 2 09:13:28.733803 systemd[1]: run-netns-cni\x2d6c18a6fe\x2d52e6\x2da39a\x2dfa7c\x2d81913482c904.mount: Deactivated successfully.
Jul 2 09:13:28.747590 containerd[1440]: time="2024-07-02T09:13:28.747427111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 09:13:28.747590 containerd[1440]: time="2024-07-02T09:13:28.747476071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:13:28.747590 containerd[1440]: time="2024-07-02T09:13:28.747505631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 09:13:28.747590 containerd[1440]: time="2024-07-02T09:13:28.747515791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:13:28.765180 systemd[1]: Started cri-containerd-76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc.scope - libcontainer container 76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc.
Jul 2 09:13:28.777635 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 09:13:28.798153 containerd[1440]: time="2024-07-02T09:13:28.798116441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-h9f8q,Uid:8f7f87f1-7ae0-46a9-9d20-04f3918c9a21,Namespace:kube-system,Attempt:1,} returns sandbox id \"76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc\"" Jul 2 09:13:28.799171 kubelet[2492]: E0702 09:13:28.799148 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:28.800847 containerd[1440]: time="2024-07-02T09:13:28.800728279Z" level=info msg="CreateContainer within sandbox \"76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 09:13:28.815776 containerd[1440]: time="2024-07-02T09:13:28.815686270Z" level=info msg="CreateContainer within sandbox \"76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"189d34ab6bd9fcfdaaf7d236c37bd681ed3578c32bcbb044408d53acc8c6f43f\"" Jul 2 09:13:28.816126 containerd[1440]: time="2024-07-02T09:13:28.816091470Z" level=info msg="StartContainer for \"189d34ab6bd9fcfdaaf7d236c37bd681ed3578c32bcbb044408d53acc8c6f43f\"" Jul 2 09:13:28.847434 systemd[1]: Started cri-containerd-189d34ab6bd9fcfdaaf7d236c37bd681ed3578c32bcbb044408d53acc8c6f43f.scope - libcontainer container 189d34ab6bd9fcfdaaf7d236c37bd681ed3578c32bcbb044408d53acc8c6f43f. Jul 2 09:13:28.860763 containerd[1440]: time="2024-07-02T09:13:28.860216724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:13:28.862459 containerd[1440]: time="2024-07-02T09:13:28.862424282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jul 2 09:13:28.863547 containerd[1440]: time="2024-07-02T09:13:28.863511842Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:13:28.866437 containerd[1440]: time="2024-07-02T09:13:28.866407760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:13:28.867381 containerd[1440]: time="2024-07-02T09:13:28.867354399Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 873.172437ms" Jul 2 09:13:28.867490 containerd[1440]: time="2024-07-02T09:13:28.867474439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jul 2 09:13:28.869536 containerd[1440]: time="2024-07-02T09:13:28.869502278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 09:13:28.870592 containerd[1440]: time="2024-07-02T09:13:28.870474718Z" 
level=info msg="CreateContainer within sandbox \"4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 09:13:28.878188 containerd[1440]: time="2024-07-02T09:13:28.878072593Z" level=info msg="StartContainer for \"189d34ab6bd9fcfdaaf7d236c37bd681ed3578c32bcbb044408d53acc8c6f43f\" returns successfully" Jul 2 09:13:28.888946 containerd[1440]: time="2024-07-02T09:13:28.888847947Z" level=info msg="CreateContainer within sandbox \"4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"def006bb2f536e72cbbb7a41acd127d55ecb795d542b524b98835067b95ba52a\"" Jul 2 09:13:28.890162 containerd[1440]: time="2024-07-02T09:13:28.890136346Z" level=info msg="StartContainer for \"def006bb2f536e72cbbb7a41acd127d55ecb795d542b524b98835067b95ba52a\"" Jul 2 09:13:28.916215 systemd[1]: Started cri-containerd-def006bb2f536e72cbbb7a41acd127d55ecb795d542b524b98835067b95ba52a.scope - libcontainer container def006bb2f536e72cbbb7a41acd127d55ecb795d542b524b98835067b95ba52a. Jul 2 09:13:28.967281 containerd[1440]: time="2024-07-02T09:13:28.967236300Z" level=info msg="StartContainer for \"def006bb2f536e72cbbb7a41acd127d55ecb795d542b524b98835067b95ba52a\" returns successfully" Jul 2 09:13:29.648706 kubelet[2492]: E0702 09:13:29.648661 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:29.663331 kubelet[2492]: I0702 09:13:29.663278 2492 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-h9f8q" podStartSLOduration=29.663240428 podCreationTimestamp="2024-07-02 09:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:13:29.662658908 +0000 UTC m=+43.239040413" watchObservedRunningTime="2024-07-02 09:13:29.663240428 +0000 UTC m=+43.239621893" Jul 2 09:13:29.793569 systemd-networkd[1375]: cali8148568124f: Gained IPv6LL Jul 2 09:13:29.793829 systemd-networkd[1375]: cali47a8c3dba01: Gained IPv6LL Jul 2 09:13:29.922241 systemd-networkd[1375]: calicd5c15519a6: Gained IPv6LL Jul 2 09:13:30.001845 containerd[1440]: time="2024-07-02T09:13:30.001782318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:13:30.002616 containerd[1440]: time="2024-07-02T09:13:30.002571878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jul 2 09:13:30.003268 containerd[1440]: time="2024-07-02T09:13:30.003235837Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:13:30.005997 containerd[1440]: time="2024-07-02T09:13:30.005956716Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 1.136417278s" Jul 2 09:13:30.006043 containerd[1440]: time="2024-07-02T09:13:30.005996916Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jul 2 09:13:30.008783 containerd[1440]: time="2024-07-02T09:13:30.007146995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 09:13:30.013280 containerd[1440]: time="2024-07-02T09:13:30.013246192Z" level=info msg="CreateContainer within sandbox \"906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 09:13:30.028913 containerd[1440]: time="2024-07-02T09:13:30.028864384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:13:30.033025 containerd[1440]: time="2024-07-02T09:13:30.032973342Z" level=info msg="CreateContainer within sandbox \"906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5aa36c475bf97bfd70cb1c23cb9ee3362466e9d32ec95e6ca5338fcdd9890af3\"" Jul 2 09:13:30.033956 containerd[1440]: time="2024-07-02T09:13:30.033921621Z" level=info msg="StartContainer for \"5aa36c475bf97bfd70cb1c23cb9ee3362466e9d32ec95e6ca5338fcdd9890af3\"" Jul 2 09:13:30.059183 systemd[1]: Started cri-containerd-5aa36c475bf97bfd70cb1c23cb9ee3362466e9d32ec95e6ca5338fcdd9890af3.scope - libcontainer container 5aa36c475bf97bfd70cb1c23cb9ee3362466e9d32ec95e6ca5338fcdd9890af3. Jul 2 09:13:30.089745 containerd[1440]: time="2024-07-02T09:13:30.089696472Z" level=info msg="StartContainer for \"5aa36c475bf97bfd70cb1c23cb9ee3362466e9d32ec95e6ca5338fcdd9890af3\" returns successfully" Jul 2 09:13:30.674142 kubelet[2492]: E0702 09:13:30.674106 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:30.734842 kubelet[2492]: I0702 09:13:30.733814 2492 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78f46b9b5c-nkqlg" podStartSLOduration=23.739113968 podCreationTimestamp="2024-07-02 09:13:05 +0000 UTC" firstStartedPulling="2024-07-02 09:13:28.011726631 +0000 UTC m=+41.588108136" lastFinishedPulling="2024-07-02 09:13:30.006389516 +0000 UTC m=+43.582770981" observedRunningTime="2024-07-02 09:13:30.684788359 +0000 UTC m=+44.261169864" watchObservedRunningTime="2024-07-02 09:13:30.733776813 +0000 UTC m=+44.310158278" Jul 2 09:13:30.955431 containerd[1440]: time="2024-07-02T09:13:30.955314657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:13:30.955973 containerd[1440]: time="2024-07-02T09:13:30.955935337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jul 2 09:13:30.956864 containerd[1440]: time="2024-07-02T09:13:30.956807936Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:13:30.958899 containerd[1440]: time="2024-07-02T09:13:30.958872215Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:13:30.959790 containerd[1440]: time="2024-07-02T09:13:30.959550495Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 952.35294ms" Jul 2 09:13:30.959790 containerd[1440]: time="2024-07-02T09:13:30.959597895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jul 2 09:13:30.961235 containerd[1440]: time="2024-07-02T09:13:30.961203614Z" level=info msg="CreateContainer within sandbox \"4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 09:13:30.971944 containerd[1440]: time="2024-07-02T09:13:30.971915608Z" level=info msg="CreateContainer within sandbox \"4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c32be62714cc6df6063cf7ce0e13893f18591e2f988a63d558dbacf8c56a267b\"" Jul 2 09:13:30.972490 containerd[1440]: time="2024-07-02T09:13:30.972456848Z" level=info msg="StartContainer for \"c32be62714cc6df6063cf7ce0e13893f18591e2f988a63d558dbacf8c56a267b\"" Jul 2 09:13:31.003290 systemd[1]: Started cri-containerd-c32be62714cc6df6063cf7ce0e13893f18591e2f988a63d558dbacf8c56a267b.scope - libcontainer container c32be62714cc6df6063cf7ce0e13893f18591e2f988a63d558dbacf8c56a267b. Jul 2 09:13:31.026745 containerd[1440]: time="2024-07-02T09:13:31.026646420Z" level=info msg="StartContainer for \"c32be62714cc6df6063cf7ce0e13893f18591e2f988a63d558dbacf8c56a267b\" returns successfully" Jul 2 09:13:31.521424 containerd[1440]: time="2024-07-02T09:13:31.521356576Z" level=info msg="StopPodSandbox for \"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\"" Jul 2 09:13:31.596532 containerd[1440]: 2024-07-02 09:13:31.562 [INFO][4473] k8s.go 608: Cleaning up netns ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" Jul 2 09:13:31.596532 containerd[1440]: 2024-07-02 09:13:31.562 [INFO][4473] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" iface="eth0" netns="/var/run/netns/cni-a8517364-6f20-e485-c5b8-6a9c7ca80a5f" Jul 2 09:13:31.596532 containerd[1440]: 2024-07-02 09:13:31.563 [INFO][4473] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" iface="eth0" netns="/var/run/netns/cni-a8517364-6f20-e485-c5b8-6a9c7ca80a5f" Jul 2 09:13:31.596532 containerd[1440]: 2024-07-02 09:13:31.563 [INFO][4473] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" iface="eth0" netns="/var/run/netns/cni-a8517364-6f20-e485-c5b8-6a9c7ca80a5f" Jul 2 09:13:31.596532 containerd[1440]: 2024-07-02 09:13:31.563 [INFO][4473] k8s.go 615: Releasing IP address(es) ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" Jul 2 09:13:31.596532 containerd[1440]: 2024-07-02 09:13:31.563 [INFO][4473] utils.go 188: Calico CNI releasing IP address ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" Jul 2 09:13:31.596532 containerd[1440]: 2024-07-02 09:13:31.584 [INFO][4481] ipam_plugin.go 411: Releasing address using handleID ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" HandleID="k8s-pod-network.ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" Workload="localhost-k8s-coredns--5dd5756b68--wfv5c-eth0" Jul 2 09:13:31.596532 containerd[1440]: 2024-07-02 09:13:31.584 [INFO][4481] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:13:31.596532 containerd[1440]: 2024-07-02 09:13:31.584 [INFO][4481] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:13:31.596532 containerd[1440]: 2024-07-02 09:13:31.591 [WARNING][4481] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" HandleID="k8s-pod-network.ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" Workload="localhost-k8s-coredns--5dd5756b68--wfv5c-eth0" Jul 2 09:13:31.596532 containerd[1440]: 2024-07-02 09:13:31.591 [INFO][4481] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" HandleID="k8s-pod-network.ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" Workload="localhost-k8s-coredns--5dd5756b68--wfv5c-eth0" Jul 2 09:13:31.596532 containerd[1440]: 2024-07-02 09:13:31.593 [INFO][4481] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:13:31.596532 containerd[1440]: 2024-07-02 09:13:31.595 [INFO][4473] k8s.go 621: Teardown processing complete. 
ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" Jul 2 09:13:31.597290 containerd[1440]: time="2024-07-02T09:13:31.596826019Z" level=info msg="TearDown network for sandbox \"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\" successfully" Jul 2 09:13:31.597290 containerd[1440]: time="2024-07-02T09:13:31.596863699Z" level=info msg="StopPodSandbox for \"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\" returns successfully" Jul 2 09:13:31.597342 kubelet[2492]: E0702 09:13:31.597099 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:31.597912 containerd[1440]: time="2024-07-02T09:13:31.597886779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wfv5c,Uid:ff42ceab-fd69-4ad7-97c0-e2c3789feaaa,Namespace:kube-system,Attempt:1,}" Jul 2 09:13:31.619350 kubelet[2492]: I0702 09:13:31.619321 2492 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 09:13:31.619350 kubelet[2492]: I0702 09:13:31.619362 2492 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 09:13:31.649634 systemd[1]: Started sshd@9-10.0.0.88:22-10.0.0.1:42172.service - OpenSSH per-connection server daemon (10.0.0.1:42172). Jul 2 09:13:31.680744 kubelet[2492]: E0702 09:13:31.679477 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:31.693982 kubelet[2492]: I0702 09:13:31.693712 2492 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-j6v9l" podStartSLOduration=23.72569176 podCreationTimestamp="2024-07-02 09:13:05 +0000 UTC" firstStartedPulling="2024-07-02 09:13:27.991863004 +0000 UTC m=+41.568244509" lastFinishedPulling="2024-07-02 09:13:30.959804655 +0000 UTC m=+44.536186160" observedRunningTime="2024-07-02 09:13:31.693428372 +0000 UTC m=+45.269809877" watchObservedRunningTime="2024-07-02 09:13:31.693633411 +0000 UTC m=+45.270014916" Jul 2 09:13:31.727666 sshd[4505]: Accepted publickey for core from 10.0.0.1 port 42172 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:13:31.730128 sshd[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:13:31.734842 systemd-networkd[1375]: cali90903a6d195: Link UP Jul 2 09:13:31.735208 systemd-networkd[1375]: cali90903a6d195: Gained carrier Jul 2 09:13:31.735686 systemd[1]: run-netns-cni\x2da8517364\x2d6f20\x2de485\x2dc5b8\x2d6a9c7ca80a5f.mount: Deactivated successfully. Jul 2 09:13:31.742835 systemd-logind[1419]: New session 10 of user core. Jul 2 09:13:31.748221 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 2 09:13:31.753888 containerd[1440]: 2024-07-02 09:13:31.664 [INFO][4490] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--wfv5c-eth0 coredns-5dd5756b68- kube-system ff42ceab-fd69-4ad7-97c0-e2c3789feaaa 858 0 2024-07-02 09:13:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-wfv5c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali90903a6d195 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431" Namespace="kube-system" Pod="coredns-5dd5756b68-wfv5c" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wfv5c-" Jul 2 09:13:31.753888 containerd[1440]: 2024-07-02 09:13:31.665 [INFO][4490] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431" Namespace="kube-system" Pod="coredns-5dd5756b68-wfv5c" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wfv5c-eth0" Jul 2 09:13:31.753888 containerd[1440]: 2024-07-02 09:13:31.695 [INFO][4506] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431" HandleID="k8s-pod-network.8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431" Workload="localhost-k8s-coredns--5dd5756b68--wfv5c-eth0" Jul 2 09:13:31.753888 containerd[1440]: 2024-07-02 09:13:31.708 [INFO][4506] ipam_plugin.go 264: Auto assigning IP ContainerID="8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431" HandleID="k8s-pod-network.8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431" Workload="localhost-k8s-coredns--5dd5756b68--wfv5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400058fdb0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-wfv5c", "timestamp":"2024-07-02 09:13:31.69579137 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 09:13:31.753888 containerd[1440]: 2024-07-02 09:13:31.708 [INFO][4506] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:13:31.753888 containerd[1440]: 2024-07-02 09:13:31.708 [INFO][4506] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 09:13:31.753888 containerd[1440]: 2024-07-02 09:13:31.708 [INFO][4506] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 09:13:31.753888 containerd[1440]: 2024-07-02 09:13:31.710 [INFO][4506] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431" host="localhost" Jul 2 09:13:31.753888 containerd[1440]: 2024-07-02 09:13:31.713 [INFO][4506] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 09:13:31.753888 containerd[1440]: 2024-07-02 09:13:31.717 [INFO][4506] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 09:13:31.753888 containerd[1440]: 2024-07-02 09:13:31.719 [INFO][4506] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 09:13:31.753888 containerd[1440]: 2024-07-02 09:13:31.720 [INFO][4506] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 09:13:31.753888 containerd[1440]: 2024-07-02 09:13:31.720 [INFO][4506] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431" host="localhost" Jul 2 09:13:31.753888 containerd[1440]: 2024-07-02 09:13:31.722 [INFO][4506] ipam.go 1685: Creating new handle: k8s-pod-network.8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431 Jul 2 09:13:31.753888 containerd[1440]: 2024-07-02 09:13:31.724 [INFO][4506] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431" host="localhost" Jul 2 09:13:31.753888 containerd[1440]: 2024-07-02 09:13:31.729 [INFO][4506] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431" host="localhost" Jul 2 09:13:31.753888 containerd[1440]: 2024-07-02 09:13:31.729 [INFO][4506] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431" host="localhost" Jul 2 09:13:31.753888 containerd[1440]: 2024-07-02 09:13:31.729 [INFO][4506] ipam_plugin.go 373: Released host-wide IPAM lock. 
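The IPAM walk above confirms affinity for block 192.168.88.128/26 and then claims 192.168.88.132 out of it. A quick check of that containment and of the block's capacity using the standard library; all numbers come from the log:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Block and address taken from the IPAM entries above.
        block := netip.MustParsePrefix("192.168.88.128/26")
        addr := netip.MustParseAddr("192.168.88.132")

        fmt.Println(block.Contains(addr)) // true: .132 sits inside .128-.191

        // A /26 leaves 32-26 = 6 host bits, so the affine block holds
        // 2^6 = 64 addresses for this node's pods.
        fmt.Println(1 << (32 - block.Bits())) // 64
    }

The neighbouring pods seen earlier (.130 for calico-kube-controllers, .131 for coredns-h9f8q, now .132 for coredns-wfv5c) all come out of this same affine /26, which is why each assignment only has to load the one block while holding the host-wide IPAM lock.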
Jul 2 09:13:31.753888 containerd[1440]: 2024-07-02 09:13:31.729 [INFO][4506] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431" HandleID="k8s-pod-network.8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431" Workload="localhost-k8s-coredns--5dd5756b68--wfv5c-eth0" Jul 2 09:13:31.754556 containerd[1440]: 2024-07-02 09:13:31.732 [INFO][4490] k8s.go 386: Populated endpoint ContainerID="8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431" Namespace="kube-system" Pod="coredns-5dd5756b68-wfv5c" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wfv5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--wfv5c-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ff42ceab-fd69-4ad7-97c0-e2c3789feaaa", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 13, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-wfv5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90903a6d195", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:13:31.754556 containerd[1440]: 2024-07-02 09:13:31.733 [INFO][4490] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431" Namespace="kube-system" Pod="coredns-5dd5756b68-wfv5c" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wfv5c-eth0" Jul 2 09:13:31.754556 containerd[1440]: 2024-07-02 09:13:31.733 [INFO][4490] dataplane_linux.go 68: Setting the host side veth name to cali90903a6d195 ContainerID="8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431" Namespace="kube-system" Pod="coredns-5dd5756b68-wfv5c" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wfv5c-eth0" Jul 2 09:13:31.754556 containerd[1440]: 2024-07-02 09:13:31.735 [INFO][4490] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431" Namespace="kube-system" Pod="coredns-5dd5756b68-wfv5c" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wfv5c-eth0" Jul 2 09:13:31.754556 containerd[1440]: 2024-07-02 09:13:31.737 [INFO][4490] k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431" Namespace="kube-system" Pod="coredns-5dd5756b68-wfv5c" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wfv5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--wfv5c-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ff42ceab-fd69-4ad7-97c0-e2c3789feaaa", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 13, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431", Pod:"coredns-5dd5756b68-wfv5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90903a6d195", MAC:"c6:9b:02:65:de:9f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:13:31.754556 containerd[1440]: 2024-07-02 09:13:31.744 [INFO][4490] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431" Namespace="kube-system" Pod="coredns-5dd5756b68-wfv5c" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wfv5c-eth0" Jul 2 09:13:31.772571 containerd[1440]: time="2024-07-02T09:13:31.772387373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:13:31.774918 containerd[1440]: time="2024-07-02T09:13:31.772543893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:13:31.774918 containerd[1440]: time="2024-07-02T09:13:31.772636533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:13:31.774918 containerd[1440]: time="2024-07-02T09:13:31.772654973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:13:31.799377 systemd[1]: Started cri-containerd-8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431.scope - libcontainer container 8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431. 
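Each "Started cri-containerd-....scope" line above refers to a transient systemd scope named after the 64-hex-digit containerd sandbox or container ID. A small sketch deriving that unit name; the ID validation pattern is an assumption added for illustration, not containerd code:

    package main

    import (
        "fmt"
        "regexp"
    )

    // containerd IDs in this log are 64 lowercase hex digits; treat anything
    // else as suspect. (The exact validation here is ours, for illustration.)
    var idRe = regexp.MustCompile(`^[0-9a-f]{64}$`)

    // scopeName derives the transient systemd unit that the
    // "Started cri-containerd-....scope" lines refer to.
    func scopeName(id string) (string, error) {
        if !idRe.MatchString(id) {
            return "", fmt.Errorf("not a containerd ID: %q", id)
        }
        return "cri-containerd-" + id + ".scope", nil
    }

    func main() {
        // The sandbox ID returned for coredns-5dd5756b68-wfv5c above.
        name, err := scopeName("8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431")
        if err != nil {
            panic(err)
        }
        fmt.Println(name)
    }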
Jul 2 09:13:31.811672 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 09:13:31.831072 containerd[1440]: time="2024-07-02T09:13:31.830228744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wfv5c,Uid:ff42ceab-fd69-4ad7-97c0-e2c3789feaaa,Namespace:kube-system,Attempt:1,} returns sandbox id \"8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431\"" Jul 2 09:13:31.832492 kubelet[2492]: E0702 09:13:31.832473 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:31.834595 containerd[1440]: time="2024-07-02T09:13:31.834549622Z" level=info msg="CreateContainer within sandbox \"8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 09:13:31.855900 containerd[1440]: time="2024-07-02T09:13:31.855859612Z" level=info msg="CreateContainer within sandbox \"8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e197946cb9f04e3ccee695eabc9918fa1654928b9ae43d5fe06534d2dc40e77b\"" Jul 2 09:13:31.857097 containerd[1440]: time="2024-07-02T09:13:31.857068771Z" level=info msg="StartContainer for \"e197946cb9f04e3ccee695eabc9918fa1654928b9ae43d5fe06534d2dc40e77b\"" Jul 2 09:13:31.885684 systemd[1]: Started cri-containerd-e197946cb9f04e3ccee695eabc9918fa1654928b9ae43d5fe06534d2dc40e77b.scope - libcontainer container e197946cb9f04e3ccee695eabc9918fa1654928b9ae43d5fe06534d2dc40e77b. Jul 2 09:13:31.894150 sshd[4505]: pam_unix(sshd:session): session closed for user core Jul 2 09:13:31.898309 systemd[1]: sshd@9-10.0.0.88:22-10.0.0.1:42172.service: Deactivated successfully. Jul 2 09:13:31.902503 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 09:13:31.905086 systemd-logind[1419]: Session 10 logged out. Waiting for processes to exit. Jul 2 09:13:31.908085 systemd-logind[1419]: Removed session 10. 
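The recurring kubelet dns.go:153 warning above means the host resolv.conf lists more nameservers than resolvers will actually use: the conventional limit (glibc's MAXNS) is three, so kubelet keeps only the first three, here 1.1.1.1, 1.0.0.1 and 8.8.8.8. A sketch of that trimming, assuming a resolv.conf-style input with a fourth, hypothetical upstream; it mirrors the behaviour the warning describes, not kubelet's actual code:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // the classic resolver limit (glibc MAXNS)

    // applyNameserverLimit keeps only the first three nameserver entries,
    // which is the trimming the kubelet warning above reports.
    func applyNameserverLimit(resolvConf string) []string {
        var ns []string
        sc := bufio.NewScanner(strings.NewReader(resolvConf))
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                ns = append(ns, fields[1])
            }
        }
        if len(ns) > maxNameservers {
            fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
                strings.Join(ns[:maxNameservers], " "))
            ns = ns[:maxNameservers]
        }
        return ns
    }

    func main() {
        // Hypothetical resolv.conf with four upstreams; the log only tells
        // us the three that survived.
        conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
        fmt.Println(applyNameserverLimit(conf))
    }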
Jul 2 09:13:31.913302 containerd[1440]: time="2024-07-02T09:13:31.913211863Z" level=info msg="StartContainer for \"e197946cb9f04e3ccee695eabc9918fa1654928b9ae43d5fe06534d2dc40e77b\" returns successfully" Jul 2 09:13:32.683119 kubelet[2492]: E0702 09:13:32.683080 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:32.692311 kubelet[2492]: I0702 09:13:32.692279 2492 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-wfv5c" podStartSLOduration=32.692241981 podCreationTimestamp="2024-07-02 09:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:13:32.691730701 +0000 UTC m=+46.268112206" watchObservedRunningTime="2024-07-02 09:13:32.692241981 +0000 UTC m=+46.268623566" Jul 2 09:13:33.505197 systemd-networkd[1375]: cali90903a6d195: Gained IPv6LL Jul 2 09:13:33.683259 kubelet[2492]: E0702 09:13:33.683195 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:34.685445 kubelet[2492]: E0702 09:13:34.685361 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:35.686681 kubelet[2492]: E0702 09:13:35.686642 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:36.904951 systemd[1]: Started sshd@10-10.0.0.88:22-10.0.0.1:42176.service - OpenSSH per-connection server daemon (10.0.0.1:42176). Jul 2 09:13:36.943101 sshd[4640]: Accepted publickey for core from 10.0.0.1 port 42176 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:13:36.944481 sshd[4640]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:13:36.950939 systemd-logind[1419]: New session 11 of user core. Jul 2 09:13:36.957262 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 09:13:37.069441 sshd[4640]: pam_unix(sshd:session): session closed for user core Jul 2 09:13:37.073087 systemd[1]: sshd@10-10.0.0.88:22-10.0.0.1:42176.service: Deactivated successfully. Jul 2 09:13:37.075106 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 09:13:37.075759 systemd-logind[1419]: Session 11 logged out. Waiting for processes to exit. Jul 2 09:13:37.076545 systemd-logind[1419]: Removed session 11. Jul 2 09:13:42.079631 systemd[1]: Started sshd@11-10.0.0.88:22-10.0.0.1:49786.service - OpenSSH per-connection server daemon (10.0.0.1:49786). Jul 2 09:13:42.127481 sshd[4659]: Accepted publickey for core from 10.0.0.1 port 49786 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:13:42.128501 sshd[4659]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:13:42.134957 systemd-logind[1419]: New session 12 of user core. Jul 2 09:13:42.145685 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 09:13:42.266868 sshd[4659]: pam_unix(sshd:session): session closed for user core Jul 2 09:13:42.270774 systemd[1]: sshd@11-10.0.0.88:22-10.0.0.1:49786.service: Deactivated successfully. 
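podStartSLOduration in the tracker lines is the watch-observed running time minus the pod's creation timestamp, with any image-pull window excluded, and the logged numbers bear that out: coredns-5dd5756b68-wfv5c above never pulled (zero-value pull timestamps) and lands at 32.692241981, while csi-node-driver-j6v9l at 09:13:31 had its ~2.97s pull subtracted to give 23.72569176. A sketch redoing that arithmetic on the logged timestamps:

    package main

    import (
        "fmt"
        "strings"
        "time"
    )

    // parseKubeletTime handles timestamps as printed in the tracker lines:
    // Go's default time.Time format, with the monotonic-clock suffix
    // ("m=+...") stripped first, since time.Parse does not accept it.
    func parseKubeletTime(s string) time.Time {
        if i := strings.Index(s, " m="); i >= 0 {
            s = s[:i]
        }
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // coredns-5dd5756b68-wfv5c: no pull, so the SLO duration is simply
        // watch-observed running time minus creation.
        created := parseKubeletTime("2024-07-02 09:13:00 +0000 UTC")
        running := parseKubeletTime("2024-07-02 09:13:32.692241981 +0000 UTC m=+46.268623566")
        fmt.Println(running.Sub(created)) // 32.692241981s

        // csi-node-driver-j6v9l: the image-pull window is excluded, which is
        // why its podStartSLOduration is 23.72569176.
        created = parseKubeletTime("2024-07-02 09:13:05 +0000 UTC")
        running = parseKubeletTime("2024-07-02 09:13:31.693633411 +0000 UTC")
        pullStart := parseKubeletTime("2024-07-02 09:13:27.991863004 +0000 UTC")
        pullEnd := parseKubeletTime("2024-07-02 09:13:30.959804655 +0000 UTC")
        fmt.Println(running.Sub(created) - pullEnd.Sub(pullStart)) // 23.72569176s
    }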
Jul 2 09:13:42.274507 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 09:13:42.275412 systemd-logind[1419]: Session 12 logged out. Waiting for processes to exit. Jul 2 09:13:42.276908 systemd-logind[1419]: Removed session 12. Jul 2 09:13:45.944447 kubelet[2492]: E0702 09:13:45.944371 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:13:46.522480 containerd[1440]: time="2024-07-02T09:13:46.522364964Z" level=info msg="StopPodSandbox for \"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\"" Jul 2 09:13:46.594576 containerd[1440]: 2024-07-02 09:13:46.558 [WARNING][4720] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0", GenerateName:"calico-kube-controllers-78f46b9b5c-", Namespace:"calico-system", SelfLink:"", UID:"83949e65-f15c-410c-b54a-eacd7cfb0118", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 13, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f46b9b5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87", Pod:"calico-kube-controllers-78f46b9b5c-nkqlg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali47a8c3dba01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:13:46.594576 containerd[1440]: 2024-07-02 09:13:46.558 [INFO][4720] k8s.go 608: Cleaning up netns ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" Jul 2 09:13:46.594576 containerd[1440]: 2024-07-02 09:13:46.559 [INFO][4720] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" iface="eth0" netns="" Jul 2 09:13:46.594576 containerd[1440]: 2024-07-02 09:13:46.560 [INFO][4720] k8s.go 615: Releasing IP address(es) ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" Jul 2 09:13:46.594576 containerd[1440]: 2024-07-02 09:13:46.560 [INFO][4720] utils.go 188: Calico CNI releasing IP address ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" Jul 2 09:13:46.594576 containerd[1440]: 2024-07-02 09:13:46.581 [INFO][4728] ipam_plugin.go 411: Releasing address using handleID ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" HandleID="k8s-pod-network.72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" Workload="localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0" Jul 2 09:13:46.594576 containerd[1440]: 2024-07-02 09:13:46.581 [INFO][4728] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:13:46.594576 containerd[1440]: 2024-07-02 09:13:46.581 [INFO][4728] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:13:46.594576 containerd[1440]: 2024-07-02 09:13:46.590 [WARNING][4728] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" HandleID="k8s-pod-network.72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" Workload="localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0" Jul 2 09:13:46.594576 containerd[1440]: 2024-07-02 09:13:46.590 [INFO][4728] ipam_plugin.go 439: Releasing address using workloadID ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" HandleID="k8s-pod-network.72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" Workload="localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0" Jul 2 09:13:46.594576 containerd[1440]: 2024-07-02 09:13:46.591 [INFO][4728] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:13:46.594576 containerd[1440]: 2024-07-02 09:13:46.593 [INFO][4720] k8s.go 621: Teardown processing complete. ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" Jul 2 09:13:46.594576 containerd[1440]: time="2024-07-02T09:13:46.594453310Z" level=info msg="TearDown network for sandbox \"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\" successfully" Jul 2 09:13:46.594576 containerd[1440]: time="2024-07-02T09:13:46.594478350Z" level=info msg="StopPodSandbox for \"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\" returns successfully" Jul 2 09:13:46.596339 containerd[1440]: time="2024-07-02T09:13:46.595146950Z" level=info msg="RemovePodSandbox for \"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\"" Jul 2 09:13:46.604342 containerd[1440]: time="2024-07-02T09:13:46.595181230Z" level=info msg="Forcibly stopping sandbox \"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\"" Jul 2 09:13:46.669784 containerd[1440]: 2024-07-02 09:13:46.636 [WARNING][4755] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0", GenerateName:"calico-kube-controllers-78f46b9b5c-", Namespace:"calico-system", SelfLink:"", UID:"83949e65-f15c-410c-b54a-eacd7cfb0118", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 13, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f46b9b5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"906343d12e42e38a6aca3d92dcc651e95a524d4ab95aa9dc977f2aa0002e3a87", Pod:"calico-kube-controllers-78f46b9b5c-nkqlg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali47a8c3dba01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:13:46.669784 containerd[1440]: 2024-07-02 09:13:46.637 [INFO][4755] k8s.go 608: Cleaning up netns ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" Jul 2 09:13:46.669784 containerd[1440]: 2024-07-02 09:13:46.637 [INFO][4755] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" iface="eth0" netns="" Jul 2 09:13:46.669784 containerd[1440]: 2024-07-02 09:13:46.637 [INFO][4755] k8s.go 615: Releasing IP address(es) ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" Jul 2 09:13:46.669784 containerd[1440]: 2024-07-02 09:13:46.637 [INFO][4755] utils.go 188: Calico CNI releasing IP address ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" Jul 2 09:13:46.669784 containerd[1440]: 2024-07-02 09:13:46.655 [INFO][4763] ipam_plugin.go 411: Releasing address using handleID ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" HandleID="k8s-pod-network.72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" Workload="localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0" Jul 2 09:13:46.669784 containerd[1440]: 2024-07-02 09:13:46.655 [INFO][4763] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:13:46.669784 containerd[1440]: 2024-07-02 09:13:46.655 [INFO][4763] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:13:46.669784 containerd[1440]: 2024-07-02 09:13:46.663 [WARNING][4763] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" HandleID="k8s-pod-network.72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" Workload="localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0" Jul 2 09:13:46.669784 containerd[1440]: 2024-07-02 09:13:46.663 [INFO][4763] ipam_plugin.go 439: Releasing address using workloadID ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" HandleID="k8s-pod-network.72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" Workload="localhost-k8s-calico--kube--controllers--78f46b9b5c--nkqlg-eth0" Jul 2 09:13:46.669784 containerd[1440]: 2024-07-02 09:13:46.664 [INFO][4763] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:13:46.669784 containerd[1440]: 2024-07-02 09:13:46.668 [INFO][4755] k8s.go 621: Teardown processing complete. ContainerID="72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee" Jul 2 09:13:46.670540 containerd[1440]: time="2024-07-02T09:13:46.670259256Z" level=info msg="TearDown network for sandbox \"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\" successfully" Jul 2 09:13:46.685077 containerd[1440]: time="2024-07-02T09:13:46.684997613Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 09:13:46.685392 containerd[1440]: time="2024-07-02T09:13:46.685254733Z" level=info msg="RemovePodSandbox \"72bdabb21c53afb41259b97bd2820d90f5f31f14d4d40d5f178d1343848411ee\" returns successfully" Jul 2 09:13:46.685753 containerd[1440]: time="2024-07-02T09:13:46.685731373Z" level=info msg="StopPodSandbox for \"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\"" Jul 2 09:13:46.750279 containerd[1440]: 2024-07-02 09:13:46.717 [WARNING][4785] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--h9f8q-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"8f7f87f1-7ae0-46a9-9d20-04f3918c9a21", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 13, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc", Pod:"coredns-5dd5756b68-h9f8q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd5c15519a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:13:46.750279 containerd[1440]: 2024-07-02 09:13:46.718 [INFO][4785] k8s.go 608: Cleaning up netns ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" Jul 2 09:13:46.750279 containerd[1440]: 2024-07-02 09:13:46.718 [INFO][4785] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" iface="eth0" netns="" Jul 2 09:13:46.750279 containerd[1440]: 2024-07-02 09:13:46.718 [INFO][4785] k8s.go 615: Releasing IP address(es) ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" Jul 2 09:13:46.750279 containerd[1440]: 2024-07-02 09:13:46.718 [INFO][4785] utils.go 188: Calico CNI releasing IP address ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" Jul 2 09:13:46.750279 containerd[1440]: 2024-07-02 09:13:46.734 [INFO][4793] ipam_plugin.go 411: Releasing address using handleID ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" HandleID="k8s-pod-network.1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" Workload="localhost-k8s-coredns--5dd5756b68--h9f8q-eth0" Jul 2 09:13:46.750279 containerd[1440]: 2024-07-02 09:13:46.735 [INFO][4793] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:13:46.750279 containerd[1440]: 2024-07-02 09:13:46.735 [INFO][4793] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:13:46.750279 containerd[1440]: 2024-07-02 09:13:46.743 [WARNING][4793] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" HandleID="k8s-pod-network.1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" Workload="localhost-k8s-coredns--5dd5756b68--h9f8q-eth0" Jul 2 09:13:46.750279 containerd[1440]: 2024-07-02 09:13:46.743 [INFO][4793] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" HandleID="k8s-pod-network.1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" Workload="localhost-k8s-coredns--5dd5756b68--h9f8q-eth0" Jul 2 09:13:46.750279 containerd[1440]: 2024-07-02 09:13:46.745 [INFO][4793] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:13:46.750279 containerd[1440]: 2024-07-02 09:13:46.747 [INFO][4785] k8s.go 621: Teardown processing complete. ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" Jul 2 09:13:46.750640 containerd[1440]: time="2024-07-02T09:13:46.750305721Z" level=info msg="TearDown network for sandbox \"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\" successfully" Jul 2 09:13:46.750640 containerd[1440]: time="2024-07-02T09:13:46.750329641Z" level=info msg="StopPodSandbox for \"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\" returns successfully" Jul 2 09:13:46.751125 containerd[1440]: time="2024-07-02T09:13:46.750972241Z" level=info msg="RemovePodSandbox for \"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\"" Jul 2 09:13:46.751125 containerd[1440]: time="2024-07-02T09:13:46.751004641Z" level=info msg="Forcibly stopping sandbox \"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\"" Jul 2 09:13:46.813742 containerd[1440]: 2024-07-02 09:13:46.781 [WARNING][4817] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--h9f8q-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"8f7f87f1-7ae0-46a9-9d20-04f3918c9a21", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 13, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"76eba00662dd097fc8b268de43dbcd21c2185b980133e1d472b4d1c7800ac1dc", Pod:"coredns-5dd5756b68-h9f8q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd5c15519a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:13:46.813742 containerd[1440]: 2024-07-02 09:13:46.782 [INFO][4817] k8s.go 608: Cleaning up netns ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" Jul 2 09:13:46.813742 containerd[1440]: 2024-07-02 09:13:46.782 [INFO][4817] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" iface="eth0" netns="" Jul 2 09:13:46.813742 containerd[1440]: 2024-07-02 09:13:46.782 [INFO][4817] k8s.go 615: Releasing IP address(es) ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" Jul 2 09:13:46.813742 containerd[1440]: 2024-07-02 09:13:46.782 [INFO][4817] utils.go 188: Calico CNI releasing IP address ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" Jul 2 09:13:46.813742 containerd[1440]: 2024-07-02 09:13:46.801 [INFO][4824] ipam_plugin.go 411: Releasing address using handleID ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" HandleID="k8s-pod-network.1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" Workload="localhost-k8s-coredns--5dd5756b68--h9f8q-eth0" Jul 2 09:13:46.813742 containerd[1440]: 2024-07-02 09:13:46.801 [INFO][4824] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:13:46.813742 containerd[1440]: 2024-07-02 09:13:46.801 [INFO][4824] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:13:46.813742 containerd[1440]: 2024-07-02 09:13:46.809 [WARNING][4824] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" HandleID="k8s-pod-network.1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" Workload="localhost-k8s-coredns--5dd5756b68--h9f8q-eth0" Jul 2 09:13:46.813742 containerd[1440]: 2024-07-02 09:13:46.809 [INFO][4824] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" HandleID="k8s-pod-network.1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" Workload="localhost-k8s-coredns--5dd5756b68--h9f8q-eth0" Jul 2 09:13:46.813742 containerd[1440]: 2024-07-02 09:13:46.810 [INFO][4824] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:13:46.813742 containerd[1440]: 2024-07-02 09:13:46.812 [INFO][4817] k8s.go 621: Teardown processing complete. ContainerID="1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33" Jul 2 09:13:46.814130 containerd[1440]: time="2024-07-02T09:13:46.813748589Z" level=info msg="TearDown network for sandbox \"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\" successfully" Jul 2 09:13:46.816608 containerd[1440]: time="2024-07-02T09:13:46.816571309Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 09:13:46.816658 containerd[1440]: time="2024-07-02T09:13:46.816633669Z" level=info msg="RemovePodSandbox \"1a10ebf8948aae69c506fb72a3f26eb7f78ed02f6d0269159c5c1ea5481fcf33\" returns successfully" Jul 2 09:13:46.817351 containerd[1440]: time="2024-07-02T09:13:46.817000669Z" level=info msg="StopPodSandbox for \"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\"" Jul 2 09:13:46.879550 containerd[1440]: 2024-07-02 09:13:46.848 [WARNING][4847] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--wfv5c-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ff42ceab-fd69-4ad7-97c0-e2c3789feaaa", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 13, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431", Pod:"coredns-5dd5756b68-wfv5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90903a6d195", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:13:46.879550 containerd[1440]: 2024-07-02 09:13:46.848 [INFO][4847] k8s.go 608: Cleaning up netns ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" Jul 2 09:13:46.879550 containerd[1440]: 2024-07-02 09:13:46.849 [INFO][4847] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" iface="eth0" netns="" Jul 2 09:13:46.879550 containerd[1440]: 2024-07-02 09:13:46.849 [INFO][4847] k8s.go 615: Releasing IP address(es) ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" Jul 2 09:13:46.879550 containerd[1440]: 2024-07-02 09:13:46.849 [INFO][4847] utils.go 188: Calico CNI releasing IP address ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" Jul 2 09:13:46.879550 containerd[1440]: 2024-07-02 09:13:46.867 [INFO][4855] ipam_plugin.go 411: Releasing address using handleID ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" HandleID="k8s-pod-network.ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" Workload="localhost-k8s-coredns--5dd5756b68--wfv5c-eth0" Jul 2 09:13:46.879550 containerd[1440]: 2024-07-02 09:13:46.867 [INFO][4855] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:13:46.879550 containerd[1440]: 2024-07-02 09:13:46.867 [INFO][4855] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:13:46.879550 containerd[1440]: 2024-07-02 09:13:46.875 [WARNING][4855] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" HandleID="k8s-pod-network.ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" Workload="localhost-k8s-coredns--5dd5756b68--wfv5c-eth0" Jul 2 09:13:46.879550 containerd[1440]: 2024-07-02 09:13:46.875 [INFO][4855] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" HandleID="k8s-pod-network.ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" Workload="localhost-k8s-coredns--5dd5756b68--wfv5c-eth0" Jul 2 09:13:46.879550 containerd[1440]: 2024-07-02 09:13:46.876 [INFO][4855] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:13:46.879550 containerd[1440]: 2024-07-02 09:13:46.877 [INFO][4847] k8s.go 621: Teardown processing complete. ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" Jul 2 09:13:46.880632 containerd[1440]: time="2024-07-02T09:13:46.879579337Z" level=info msg="TearDown network for sandbox \"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\" successfully" Jul 2 09:13:46.880632 containerd[1440]: time="2024-07-02T09:13:46.879604737Z" level=info msg="StopPodSandbox for \"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\" returns successfully" Jul 2 09:13:46.880632 containerd[1440]: time="2024-07-02T09:13:46.880013217Z" level=info msg="RemovePodSandbox for \"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\"" Jul 2 09:13:46.880632 containerd[1440]: time="2024-07-02T09:13:46.880072217Z" level=info msg="Forcibly stopping sandbox \"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\"" Jul 2 09:13:46.941234 containerd[1440]: 2024-07-02 09:13:46.912 [WARNING][4877] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--wfv5c-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ff42ceab-fd69-4ad7-97c0-e2c3789feaaa", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 13, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a500019167736e4b7056fefd6ad493d825cd861f0b6fc16bb20a249521fe431", Pod:"coredns-5dd5756b68-wfv5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90903a6d195", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:13:46.941234 containerd[1440]: 2024-07-02 09:13:46.912 [INFO][4877] k8s.go 608: Cleaning up netns ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" Jul 2 09:13:46.941234 containerd[1440]: 2024-07-02 09:13:46.912 [INFO][4877] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" iface="eth0" netns="" Jul 2 09:13:46.941234 containerd[1440]: 2024-07-02 09:13:46.912 [INFO][4877] k8s.go 615: Releasing IP address(es) ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" Jul 2 09:13:46.941234 containerd[1440]: 2024-07-02 09:13:46.912 [INFO][4877] utils.go 188: Calico CNI releasing IP address ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" Jul 2 09:13:46.941234 containerd[1440]: 2024-07-02 09:13:46.929 [INFO][4884] ipam_plugin.go 411: Releasing address using handleID ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" HandleID="k8s-pod-network.ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" Workload="localhost-k8s-coredns--5dd5756b68--wfv5c-eth0" Jul 2 09:13:46.941234 containerd[1440]: 2024-07-02 09:13:46.929 [INFO][4884] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:13:46.941234 containerd[1440]: 2024-07-02 09:13:46.929 [INFO][4884] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:13:46.941234 containerd[1440]: 2024-07-02 09:13:46.937 [WARNING][4884] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" HandleID="k8s-pod-network.ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" Workload="localhost-k8s-coredns--5dd5756b68--wfv5c-eth0" Jul 2 09:13:46.941234 containerd[1440]: 2024-07-02 09:13:46.937 [INFO][4884] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" HandleID="k8s-pod-network.ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" Workload="localhost-k8s-coredns--5dd5756b68--wfv5c-eth0" Jul 2 09:13:46.941234 containerd[1440]: 2024-07-02 09:13:46.938 [INFO][4884] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:13:46.941234 containerd[1440]: 2024-07-02 09:13:46.939 [INFO][4877] k8s.go 621: Teardown processing complete. ContainerID="ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f" Jul 2 09:13:46.941683 containerd[1440]: time="2024-07-02T09:13:46.941654885Z" level=info msg="TearDown network for sandbox \"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\" successfully" Jul 2 09:13:46.944228 containerd[1440]: time="2024-07-02T09:13:46.944198925Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 09:13:46.944371 containerd[1440]: time="2024-07-02T09:13:46.944354365Z" level=info msg="RemovePodSandbox \"ff6cf5cc4d2b50791f08ef6bd9fecbf3405f737b40b9f56e609ee6a4e421e50f\" returns successfully" Jul 2 09:13:46.944936 containerd[1440]: time="2024-07-02T09:13:46.944843965Z" level=info msg="StopPodSandbox for \"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\"" Jul 2 09:13:47.005895 containerd[1440]: 2024-07-02 09:13:46.975 [WARNING][4908] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--j6v9l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e5ec1b3-c9b9-48d9-b154-86cd38677ba2", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 13, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a", Pod:"csi-node-driver-j6v9l", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8148568124f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:13:47.005895 containerd[1440]: 2024-07-02 09:13:46.975 [INFO][4908] k8s.go 608: Cleaning up netns ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" Jul 2 09:13:47.005895 containerd[1440]: 2024-07-02 09:13:46.975 [INFO][4908] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" iface="eth0" netns="" Jul 2 09:13:47.005895 containerd[1440]: 2024-07-02 09:13:46.975 [INFO][4908] k8s.go 615: Releasing IP address(es) ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" Jul 2 09:13:47.005895 containerd[1440]: 2024-07-02 09:13:46.975 [INFO][4908] utils.go 188: Calico CNI releasing IP address ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" Jul 2 09:13:47.005895 containerd[1440]: 2024-07-02 09:13:46.993 [INFO][4916] ipam_plugin.go 411: Releasing address using handleID ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" HandleID="k8s-pod-network.b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" Workload="localhost-k8s-csi--node--driver--j6v9l-eth0" Jul 2 09:13:47.005895 containerd[1440]: 2024-07-02 09:13:46.993 [INFO][4916] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:13:47.005895 containerd[1440]: 2024-07-02 09:13:46.993 [INFO][4916] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:13:47.005895 containerd[1440]: 2024-07-02 09:13:47.001 [WARNING][4916] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" HandleID="k8s-pod-network.b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" Workload="localhost-k8s-csi--node--driver--j6v9l-eth0" Jul 2 09:13:47.005895 containerd[1440]: 2024-07-02 09:13:47.001 [INFO][4916] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" HandleID="k8s-pod-network.b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" Workload="localhost-k8s-csi--node--driver--j6v9l-eth0" Jul 2 09:13:47.005895 containerd[1440]: 2024-07-02 09:13:47.003 [INFO][4916] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:13:47.005895 containerd[1440]: 2024-07-02 09:13:47.004 [INFO][4908] k8s.go 621: Teardown processing complete. ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" Jul 2 09:13:47.006395 containerd[1440]: time="2024-07-02T09:13:47.005921353Z" level=info msg="TearDown network for sandbox \"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\" successfully" Jul 2 09:13:47.006395 containerd[1440]: time="2024-07-02T09:13:47.005946193Z" level=info msg="StopPodSandbox for \"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\" returns successfully" Jul 2 09:13:47.006804 containerd[1440]: time="2024-07-02T09:13:47.006777353Z" level=info msg="RemovePodSandbox for \"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\"" Jul 2 09:13:47.006866 containerd[1440]: time="2024-07-02T09:13:47.006823233Z" level=info msg="Forcibly stopping sandbox \"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\"" Jul 2 09:13:47.068960 containerd[1440]: 2024-07-02 09:13:47.038 [WARNING][4938] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--j6v9l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e5ec1b3-c9b9-48d9-b154-86cd38677ba2", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 13, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f1c6b948d7e8141871e14533f30737be4284f1def5a078544919218da8ea06a", Pod:"csi-node-driver-j6v9l", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8148568124f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:13:47.068960 containerd[1440]: 2024-07-02 09:13:47.038 [INFO][4938] k8s.go 608: Cleaning up netns ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" Jul 2 09:13:47.068960 containerd[1440]: 2024-07-02 09:13:47.038 [INFO][4938] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" iface="eth0" netns="" Jul 2 09:13:47.068960 containerd[1440]: 2024-07-02 09:13:47.038 [INFO][4938] k8s.go 615: Releasing IP address(es) ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" Jul 2 09:13:47.068960 containerd[1440]: 2024-07-02 09:13:47.039 [INFO][4938] utils.go 188: Calico CNI releasing IP address ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" Jul 2 09:13:47.068960 containerd[1440]: 2024-07-02 09:13:47.056 [INFO][4946] ipam_plugin.go 411: Releasing address using handleID ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" HandleID="k8s-pod-network.b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" Workload="localhost-k8s-csi--node--driver--j6v9l-eth0" Jul 2 09:13:47.068960 containerd[1440]: 2024-07-02 09:13:47.056 [INFO][4946] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:13:47.068960 containerd[1440]: 2024-07-02 09:13:47.056 [INFO][4946] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:13:47.068960 containerd[1440]: 2024-07-02 09:13:47.064 [WARNING][4946] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" HandleID="k8s-pod-network.b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" Workload="localhost-k8s-csi--node--driver--j6v9l-eth0" Jul 2 09:13:47.068960 containerd[1440]: 2024-07-02 09:13:47.064 [INFO][4946] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" HandleID="k8s-pod-network.b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" Workload="localhost-k8s-csi--node--driver--j6v9l-eth0" Jul 2 09:13:47.068960 containerd[1440]: 2024-07-02 09:13:47.065 [INFO][4946] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:13:47.068960 containerd[1440]: 2024-07-02 09:13:47.067 [INFO][4938] k8s.go 621: Teardown processing complete. ContainerID="b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e" Jul 2 09:13:47.070510 containerd[1440]: time="2024-07-02T09:13:47.069393622Z" level=info msg="TearDown network for sandbox \"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\" successfully" Jul 2 09:13:47.072046 containerd[1440]: time="2024-07-02T09:13:47.072008782Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 09:13:47.072206 containerd[1440]: time="2024-07-02T09:13:47.072186542Z" level=info msg="RemovePodSandbox \"b98f0df26ca476dca98d25a4d0b2ec71400d832905b77e8eb30f4c335876247e\" returns successfully" Jul 2 09:13:47.281910 systemd[1]: Started sshd@12-10.0.0.88:22-10.0.0.1:49788.service - OpenSSH per-connection server daemon (10.0.0.1:49788). Jul 2 09:13:47.334079 sshd[4955]: Accepted publickey for core from 10.0.0.1 port 49788 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:13:47.335363 sshd[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:13:47.339018 systemd-logind[1419]: New session 13 of user core. Jul 2 09:13:47.347196 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 09:13:47.468630 sshd[4955]: pam_unix(sshd:session): session closed for user core Jul 2 09:13:47.477779 systemd[1]: sshd@12-10.0.0.88:22-10.0.0.1:49788.service: Deactivated successfully. Jul 2 09:13:47.479584 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 09:13:47.480905 systemd-logind[1419]: Session 13 logged out. Waiting for processes to exit. Jul 2 09:13:47.482501 systemd[1]: Started sshd@13-10.0.0.88:22-10.0.0.1:49800.service - OpenSSH per-connection server daemon (10.0.0.1:49800). Jul 2 09:13:47.483351 systemd-logind[1419]: Removed session 13. Jul 2 09:13:47.517279 sshd[4971]: Accepted publickey for core from 10.0.0.1 port 49800 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:13:47.518463 sshd[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:13:47.523013 systemd-logind[1419]: New session 14 of user core. Jul 2 09:13:47.534203 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 09:13:47.769516 sshd[4971]: pam_unix(sshd:session): session closed for user core Jul 2 09:13:47.781495 systemd[1]: sshd@13-10.0.0.88:22-10.0.0.1:49800.service: Deactivated successfully. Jul 2 09:13:47.783744 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 09:13:47.785788 systemd-logind[1419]: Session 14 logged out. 
Waiting for processes to exit. Jul 2 09:13:47.796466 systemd[1]: Started sshd@14-10.0.0.88:22-10.0.0.1:49808.service - OpenSSH per-connection server daemon (10.0.0.1:49808). Jul 2 09:13:47.797742 systemd-logind[1419]: Removed session 14. Jul 2 09:13:47.827474 sshd[4984]: Accepted publickey for core from 10.0.0.1 port 49808 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:13:47.828695 sshd[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:13:47.832501 systemd-logind[1419]: New session 15 of user core. Jul 2 09:13:47.841230 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 09:13:47.958709 sshd[4984]: pam_unix(sshd:session): session closed for user core Jul 2 09:13:47.962828 systemd[1]: sshd@14-10.0.0.88:22-10.0.0.1:49808.service: Deactivated successfully. Jul 2 09:13:47.964871 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 09:13:47.965722 systemd-logind[1419]: Session 15 logged out. Waiting for processes to exit. Jul 2 09:13:47.966501 systemd-logind[1419]: Removed session 15. Jul 2 09:13:52.973829 systemd[1]: Started sshd@15-10.0.0.88:22-10.0.0.1:41030.service - OpenSSH per-connection server daemon (10.0.0.1:41030). Jul 2 09:13:53.011679 sshd[5004]: Accepted publickey for core from 10.0.0.1 port 41030 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:13:53.012805 sshd[5004]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:13:53.016420 systemd-logind[1419]: New session 16 of user core. Jul 2 09:13:53.024180 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 09:13:53.126976 sshd[5004]: pam_unix(sshd:session): session closed for user core Jul 2 09:13:53.129782 systemd[1]: sshd@15-10.0.0.88:22-10.0.0.1:41030.service: Deactivated successfully. Jul 2 09:13:53.131507 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 09:13:53.132132 systemd-logind[1419]: Session 16 logged out. Waiting for processes to exit. Jul 2 09:13:53.133340 systemd-logind[1419]: Removed session 16. Jul 2 09:13:58.137983 systemd[1]: Started sshd@16-10.0.0.88:22-10.0.0.1:41044.service - OpenSSH per-connection server daemon (10.0.0.1:41044). Jul 2 09:13:58.173145 sshd[5045]: Accepted publickey for core from 10.0.0.1 port 41044 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:13:58.174275 sshd[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:13:58.177948 systemd-logind[1419]: New session 17 of user core. Jul 2 09:13:58.188221 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 09:13:58.292418 sshd[5045]: pam_unix(sshd:session): session closed for user core Jul 2 09:13:58.295555 systemd[1]: sshd@16-10.0.0.88:22-10.0.0.1:41044.service: Deactivated successfully. Jul 2 09:13:58.299214 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 09:13:58.299860 systemd-logind[1419]: Session 17 logged out. Waiting for processes to exit. Jul 2 09:13:58.300746 systemd-logind[1419]: Removed session 17. Jul 2 09:14:03.302741 systemd[1]: Started sshd@17-10.0.0.88:22-10.0.0.1:44834.service - OpenSSH per-connection server daemon (10.0.0.1:44834). Jul 2 09:14:03.337205 sshd[5061]: Accepted publickey for core from 10.0.0.1 port 44834 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:14:03.338347 sshd[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:14:03.341483 systemd-logind[1419]: New session 18 of user core. 
Jul 2 09:14:03.352197 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 09:14:03.462146 sshd[5061]: pam_unix(sshd:session): session closed for user core Jul 2 09:14:03.470665 systemd[1]: sshd@17-10.0.0.88:22-10.0.0.1:44834.service: Deactivated successfully. Jul 2 09:14:03.473282 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 09:14:03.474562 systemd-logind[1419]: Session 18 logged out. Waiting for processes to exit. Jul 2 09:14:03.484302 systemd[1]: Started sshd@18-10.0.0.88:22-10.0.0.1:44842.service - OpenSSH per-connection server daemon (10.0.0.1:44842). Jul 2 09:14:03.485208 systemd-logind[1419]: Removed session 18. Jul 2 09:14:03.514860 sshd[5076]: Accepted publickey for core from 10.0.0.1 port 44842 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:14:03.516078 sshd[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:14:03.519642 systemd-logind[1419]: New session 19 of user core. Jul 2 09:14:03.526174 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 09:14:03.722710 sshd[5076]: pam_unix(sshd:session): session closed for user core Jul 2 09:14:03.730962 systemd[1]: sshd@18-10.0.0.88:22-10.0.0.1:44842.service: Deactivated successfully. Jul 2 09:14:03.732625 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 09:14:03.733816 systemd-logind[1419]: Session 19 logged out. Waiting for processes to exit. Jul 2 09:14:03.742532 systemd[1]: Started sshd@19-10.0.0.88:22-10.0.0.1:44858.service - OpenSSH per-connection server daemon (10.0.0.1:44858). Jul 2 09:14:03.743901 systemd-logind[1419]: Removed session 19. Jul 2 09:14:03.773050 sshd[5088]: Accepted publickey for core from 10.0.0.1 port 44858 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:14:03.774198 sshd[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:14:03.778526 systemd-logind[1419]: New session 20 of user core. Jul 2 09:14:03.788213 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 09:14:04.548848 sshd[5088]: pam_unix(sshd:session): session closed for user core Jul 2 09:14:04.558219 systemd[1]: sshd@19-10.0.0.88:22-10.0.0.1:44858.service: Deactivated successfully. Jul 2 09:14:04.562469 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 09:14:04.564980 systemd-logind[1419]: Session 20 logged out. Waiting for processes to exit. Jul 2 09:14:04.575900 systemd[1]: Started sshd@20-10.0.0.88:22-10.0.0.1:44866.service - OpenSSH per-connection server daemon (10.0.0.1:44866). Jul 2 09:14:04.580021 systemd-logind[1419]: Removed session 20. Jul 2 09:14:04.613986 sshd[5110]: Accepted publickey for core from 10.0.0.1 port 44866 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:14:04.615316 sshd[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:14:04.619364 systemd-logind[1419]: New session 21 of user core. Jul 2 09:14:04.629188 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 2 09:14:04.902444 sshd[5110]: pam_unix(sshd:session): session closed for user core Jul 2 09:14:04.912737 systemd[1]: sshd@20-10.0.0.88:22-10.0.0.1:44866.service: Deactivated successfully. Jul 2 09:14:04.914594 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 09:14:04.916497 systemd-logind[1419]: Session 21 logged out. Waiting for processes to exit. 
Jul 2 09:14:04.922695 systemd[1]: Started sshd@21-10.0.0.88:22-10.0.0.1:44878.service - OpenSSH per-connection server daemon (10.0.0.1:44878). Jul 2 09:14:04.926919 systemd-logind[1419]: Removed session 21. Jul 2 09:14:04.953543 sshd[5123]: Accepted publickey for core from 10.0.0.1 port 44878 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:14:04.954929 sshd[5123]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:14:04.961542 systemd-logind[1419]: New session 22 of user core. Jul 2 09:14:04.970187 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 2 09:14:05.079303 sshd[5123]: pam_unix(sshd:session): session closed for user core Jul 2 09:14:05.083327 systemd[1]: sshd@21-10.0.0.88:22-10.0.0.1:44878.service: Deactivated successfully. Jul 2 09:14:05.085010 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 09:14:05.087118 systemd-logind[1419]: Session 22 logged out. Waiting for processes to exit. Jul 2 09:14:05.089398 systemd-logind[1419]: Removed session 22. Jul 2 09:14:05.521281 kubelet[2492]: E0702 09:14:05.521248 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:14:07.520746 kubelet[2492]: E0702 09:14:07.520665 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:14:10.090310 systemd[1]: Started sshd@22-10.0.0.88:22-10.0.0.1:44884.service - OpenSSH per-connection server daemon (10.0.0.1:44884). Jul 2 09:14:10.126441 sshd[5153]: Accepted publickey for core from 10.0.0.1 port 44884 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:14:10.127543 sshd[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:14:10.134294 systemd-logind[1419]: New session 23 of user core. Jul 2 09:14:10.145260 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 2 09:14:10.260194 sshd[5153]: pam_unix(sshd:session): session closed for user core Jul 2 09:14:10.264654 systemd[1]: sshd@22-10.0.0.88:22-10.0.0.1:44884.service: Deactivated successfully. Jul 2 09:14:10.266690 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 09:14:10.268092 systemd-logind[1419]: Session 23 logged out. Waiting for processes to exit. Jul 2 09:14:10.268967 systemd-logind[1419]: Removed session 23. Jul 2 09:14:10.556763 kubelet[2492]: I0702 09:14:10.556716 2492 topology_manager.go:215] "Topology Admit Handler" podUID="47ec97f0-306c-40fb-a65f-d908a91a2107" podNamespace="calico-apiserver" podName="calico-apiserver-686dd85c8d-txqcc" Jul 2 09:14:10.573004 systemd[1]: Created slice kubepods-besteffort-pod47ec97f0_306c_40fb_a65f_d908a91a2107.slice - libcontainer container kubepods-besteffort-pod47ec97f0_306c_40fb_a65f_d908a91a2107.slice. 
Jul 2 09:14:10.741736 kubelet[2492]: I0702 09:14:10.741704 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/47ec97f0-306c-40fb-a65f-d908a91a2107-calico-apiserver-certs\") pod \"calico-apiserver-686dd85c8d-txqcc\" (UID: \"47ec97f0-306c-40fb-a65f-d908a91a2107\") " pod="calico-apiserver/calico-apiserver-686dd85c8d-txqcc" Jul 2 09:14:10.741736 kubelet[2492]: I0702 09:14:10.741752 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vshgr\" (UniqueName: \"kubernetes.io/projected/47ec97f0-306c-40fb-a65f-d908a91a2107-kube-api-access-vshgr\") pod \"calico-apiserver-686dd85c8d-txqcc\" (UID: \"47ec97f0-306c-40fb-a65f-d908a91a2107\") " pod="calico-apiserver/calico-apiserver-686dd85c8d-txqcc" Jul 2 09:14:10.843155 kubelet[2492]: E0702 09:14:10.842264 2492 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 09:14:10.843155 kubelet[2492]: E0702 09:14:10.842340 2492 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47ec97f0-306c-40fb-a65f-d908a91a2107-calico-apiserver-certs podName:47ec97f0-306c-40fb-a65f-d908a91a2107 nodeName:}" failed. No retries permitted until 2024-07-02 09:14:11.342320563 +0000 UTC m=+84.918702068 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/47ec97f0-306c-40fb-a65f-d908a91a2107-calico-apiserver-certs") pod "calico-apiserver-686dd85c8d-txqcc" (UID: "47ec97f0-306c-40fb-a65f-d908a91a2107") : secret "calico-apiserver-certs" not found Jul 2 09:14:11.345814 kubelet[2492]: E0702 09:14:11.345773 2492 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 09:14:11.345925 kubelet[2492]: E0702 09:14:11.345843 2492 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47ec97f0-306c-40fb-a65f-d908a91a2107-calico-apiserver-certs podName:47ec97f0-306c-40fb-a65f-d908a91a2107 nodeName:}" failed. No retries permitted until 2024-07-02 09:14:12.345827366 +0000 UTC m=+85.922208831 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/47ec97f0-306c-40fb-a65f-d908a91a2107-calico-apiserver-certs") pod "calico-apiserver-686dd85c8d-txqcc" (UID: "47ec97f0-306c-40fb-a65f-d908a91a2107") : secret "calico-apiserver-certs" not found Jul 2 09:14:12.375880 containerd[1440]: time="2024-07-02T09:14:12.375837315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-686dd85c8d-txqcc,Uid:47ec97f0-306c-40fb-a65f-d908a91a2107,Namespace:calico-apiserver,Attempt:0,}" Jul 2 09:14:12.490287 systemd-networkd[1375]: cali3e652ae59aa: Link UP Jul 2 09:14:12.490472 systemd-networkd[1375]: cali3e652ae59aa: Gained carrier Jul 2 09:14:12.501745 containerd[1440]: 2024-07-02 09:14:12.419 [INFO][5179] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--686dd85c8d--txqcc-eth0 calico-apiserver-686dd85c8d- calico-apiserver 47ec97f0-306c-40fb-a65f-d908a91a2107 1149 0 2024-07-02 09:14:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:686dd85c8d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-686dd85c8d-txqcc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3e652ae59aa [] []}} ContainerID="a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed" Namespace="calico-apiserver" Pod="calico-apiserver-686dd85c8d-txqcc" WorkloadEndpoint="localhost-k8s-calico--apiserver--686dd85c8d--txqcc-" Jul 2 09:14:12.501745 containerd[1440]: 2024-07-02 09:14:12.420 [INFO][5179] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed" Namespace="calico-apiserver" Pod="calico-apiserver-686dd85c8d-txqcc" WorkloadEndpoint="localhost-k8s-calico--apiserver--686dd85c8d--txqcc-eth0" Jul 2 09:14:12.501745 containerd[1440]: 2024-07-02 09:14:12.446 [INFO][5188] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed" HandleID="k8s-pod-network.a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed" Workload="localhost-k8s-calico--apiserver--686dd85c8d--txqcc-eth0" Jul 2 09:14:12.501745 containerd[1440]: 2024-07-02 09:14:12.459 [INFO][5188] ipam_plugin.go 264: Auto assigning IP ContainerID="a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed" HandleID="k8s-pod-network.a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed" Workload="localhost-k8s-calico--apiserver--686dd85c8d--txqcc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000687420), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-686dd85c8d-txqcc", "timestamp":"2024-07-02 09:14:12.446614202 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 09:14:12.501745 containerd[1440]: 2024-07-02 09:14:12.459 [INFO][5188] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:14:12.501745 containerd[1440]: 2024-07-02 09:14:12.459 [INFO][5188] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 09:14:12.501745 containerd[1440]: 2024-07-02 09:14:12.459 [INFO][5188] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 09:14:12.501745 containerd[1440]: 2024-07-02 09:14:12.460 [INFO][5188] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed" host="localhost" Jul 2 09:14:12.501745 containerd[1440]: 2024-07-02 09:14:12.465 [INFO][5188] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 09:14:12.501745 containerd[1440]: 2024-07-02 09:14:12.470 [INFO][5188] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 09:14:12.501745 containerd[1440]: 2024-07-02 09:14:12.471 [INFO][5188] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 09:14:12.501745 containerd[1440]: 2024-07-02 09:14:12.474 [INFO][5188] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 09:14:12.501745 containerd[1440]: 2024-07-02 09:14:12.474 [INFO][5188] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed" host="localhost" Jul 2 09:14:12.501745 containerd[1440]: 2024-07-02 09:14:12.475 [INFO][5188] ipam.go 1685: Creating new handle: k8s-pod-network.a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed Jul 2 09:14:12.501745 containerd[1440]: 2024-07-02 09:14:12.479 [INFO][5188] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed" host="localhost" Jul 2 09:14:12.501745 containerd[1440]: 2024-07-02 09:14:12.484 [INFO][5188] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed" host="localhost" Jul 2 09:14:12.501745 containerd[1440]: 2024-07-02 09:14:12.484 [INFO][5188] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed" host="localhost" Jul 2 09:14:12.501745 containerd[1440]: 2024-07-02 09:14:12.484 [INFO][5188] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 09:14:12.501745 containerd[1440]: 2024-07-02 09:14:12.484 [INFO][5188] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed" HandleID="k8s-pod-network.a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed" Workload="localhost-k8s-calico--apiserver--686dd85c8d--txqcc-eth0" Jul 2 09:14:12.502729 containerd[1440]: 2024-07-02 09:14:12.487 [INFO][5179] k8s.go 386: Populated endpoint ContainerID="a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed" Namespace="calico-apiserver" Pod="calico-apiserver-686dd85c8d-txqcc" WorkloadEndpoint="localhost-k8s-calico--apiserver--686dd85c8d--txqcc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--686dd85c8d--txqcc-eth0", GenerateName:"calico-apiserver-686dd85c8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"47ec97f0-306c-40fb-a65f-d908a91a2107", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 14, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"686dd85c8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-686dd85c8d-txqcc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3e652ae59aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:14:12.502729 containerd[1440]: 2024-07-02 09:14:12.487 [INFO][5179] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed" Namespace="calico-apiserver" Pod="calico-apiserver-686dd85c8d-txqcc" WorkloadEndpoint="localhost-k8s-calico--apiserver--686dd85c8d--txqcc-eth0" Jul 2 09:14:12.502729 containerd[1440]: 2024-07-02 09:14:12.487 [INFO][5179] dataplane_linux.go 68: Setting the host side veth name to cali3e652ae59aa ContainerID="a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed" Namespace="calico-apiserver" Pod="calico-apiserver-686dd85c8d-txqcc" WorkloadEndpoint="localhost-k8s-calico--apiserver--686dd85c8d--txqcc-eth0" Jul 2 09:14:12.502729 containerd[1440]: 2024-07-02 09:14:12.491 [INFO][5179] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed" Namespace="calico-apiserver" Pod="calico-apiserver-686dd85c8d-txqcc" WorkloadEndpoint="localhost-k8s-calico--apiserver--686dd85c8d--txqcc-eth0" Jul 2 09:14:12.502729 containerd[1440]: 2024-07-02 09:14:12.491 [INFO][5179] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed" Namespace="calico-apiserver" 
Pod="calico-apiserver-686dd85c8d-txqcc" WorkloadEndpoint="localhost-k8s-calico--apiserver--686dd85c8d--txqcc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--686dd85c8d--txqcc-eth0", GenerateName:"calico-apiserver-686dd85c8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"47ec97f0-306c-40fb-a65f-d908a91a2107", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 14, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"686dd85c8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed", Pod:"calico-apiserver-686dd85c8d-txqcc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3e652ae59aa", MAC:"be:10:8d:d7:bf:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:14:12.502729 containerd[1440]: 2024-07-02 09:14:12.499 [INFO][5179] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed" Namespace="calico-apiserver" Pod="calico-apiserver-686dd85c8d-txqcc" WorkloadEndpoint="localhost-k8s-calico--apiserver--686dd85c8d--txqcc-eth0" Jul 2 09:14:12.524981 containerd[1440]: time="2024-07-02T09:14:12.524503385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:14:12.524981 containerd[1440]: time="2024-07-02T09:14:12.524589105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:14:12.524981 containerd[1440]: time="2024-07-02T09:14:12.524629985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:14:12.524981 containerd[1440]: time="2024-07-02T09:14:12.524646585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:14:12.545224 systemd[1]: Started cri-containerd-a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed.scope - libcontainer container a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed. 
Jul 2 09:14:12.556727 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 09:14:12.581974 containerd[1440]: time="2024-07-02T09:14:12.581938676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-686dd85c8d-txqcc,Uid:47ec97f0-306c-40fb-a65f-d908a91a2107,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed\"" Jul 2 09:14:12.583915 containerd[1440]: time="2024-07-02T09:14:12.583887229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 09:14:13.835081 containerd[1440]: time="2024-07-02T09:14:13.835023974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:14:13.835936 containerd[1440]: time="2024-07-02T09:14:13.835562693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Jul 2 09:14:13.836985 containerd[1440]: time="2024-07-02T09:14:13.836491450Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:14:13.838861 containerd[1440]: time="2024-07-02T09:14:13.838830562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:14:13.839831 containerd[1440]: time="2024-07-02T09:14:13.839771199Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 1.25582473s" Jul 2 09:14:13.839831 containerd[1440]: time="2024-07-02T09:14:13.839812039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jul 2 09:14:13.843008 containerd[1440]: time="2024-07-02T09:14:13.842975709Z" level=info msg="CreateContainer within sandbox \"a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 09:14:13.856147 containerd[1440]: time="2024-07-02T09:14:13.856102387Z" level=info msg="CreateContainer within sandbox \"a2e23a4290669d6803229d752525c33daf6a778f90a3a05f39f0996ecef960ed\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4397b75268305c1567b9e6d3662d511268f54434724278f71a2d164fc87c4910\"" Jul 2 09:14:13.858242 containerd[1440]: time="2024-07-02T09:14:13.858208860Z" level=info msg="StartContainer for \"4397b75268305c1567b9e6d3662d511268f54434724278f71a2d164fc87c4910\"" Jul 2 09:14:13.880544 systemd[1]: run-containerd-runc-k8s.io-4397b75268305c1567b9e6d3662d511268f54434724278f71a2d164fc87c4910-runc.2xeYh5.mount: Deactivated successfully. Jul 2 09:14:13.892184 systemd[1]: Started cri-containerd-4397b75268305c1567b9e6d3662d511268f54434724278f71a2d164fc87c4910.scope - libcontainer container 4397b75268305c1567b9e6d3662d511268f54434724278f71a2d164fc87c4910. 
Jul 2 09:14:13.923841 containerd[1440]: time="2024-07-02T09:14:13.923785649Z" level=info msg="StartContainer for \"4397b75268305c1567b9e6d3662d511268f54434724278f71a2d164fc87c4910\" returns successfully" Jul 2 09:14:14.145710 systemd-networkd[1375]: cali3e652ae59aa: Gained IPv6LL Jul 2 09:14:14.783330 kubelet[2492]: I0702 09:14:14.783293 2492 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-686dd85c8d-txqcc" podStartSLOduration=3.526735143 podCreationTimestamp="2024-07-02 09:14:10 +0000 UTC" firstStartedPulling="2024-07-02 09:14:12.583493311 +0000 UTC m=+86.159874816" lastFinishedPulling="2024-07-02 09:14:13.840005238 +0000 UTC m=+87.416386743" observedRunningTime="2024-07-02 09:14:14.780710678 +0000 UTC m=+88.357092183" watchObservedRunningTime="2024-07-02 09:14:14.78324707 +0000 UTC m=+88.359628575" Jul 2 09:14:15.275608 systemd[1]: Started sshd@23-10.0.0.88:22-10.0.0.1:33212.service - OpenSSH per-connection server daemon (10.0.0.1:33212). Jul 2 09:14:15.321392 sshd[5306]: Accepted publickey for core from 10.0.0.1 port 33212 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:14:15.322743 sshd[5306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:14:15.326287 systemd-logind[1419]: New session 24 of user core. Jul 2 09:14:15.333235 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 09:14:15.446208 sshd[5306]: pam_unix(sshd:session): session closed for user core Jul 2 09:14:15.453307 systemd[1]: sshd@23-10.0.0.88:22-10.0.0.1:33212.service: Deactivated successfully. Jul 2 09:14:15.455449 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 09:14:15.458541 systemd-logind[1419]: Session 24 logged out. Waiting for processes to exit. Jul 2 09:14:15.460087 systemd-logind[1419]: Removed session 24. Jul 2 09:14:20.460943 systemd[1]: Started sshd@24-10.0.0.88:22-10.0.0.1:37880.service - OpenSSH per-connection server daemon (10.0.0.1:37880). Jul 2 09:14:20.499307 sshd[5353]: Accepted publickey for core from 10.0.0.1 port 37880 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:14:20.500558 sshd[5353]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:14:20.504187 systemd-logind[1419]: New session 25 of user core. Jul 2 09:14:20.523173 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 2 09:14:20.630653 sshd[5353]: pam_unix(sshd:session): session closed for user core Jul 2 09:14:20.634263 systemd[1]: sshd@24-10.0.0.88:22-10.0.0.1:37880.service: Deactivated successfully. Jul 2 09:14:20.636261 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 09:14:20.637126 systemd-logind[1419]: Session 25 logged out. Waiting for processes to exit. Jul 2 09:14:20.637876 systemd-logind[1419]: Removed session 25.