Aug 5 22:06:16.919464 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 5 22:06:16.919489 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Aug 5 20:37:57 -00 2024
Aug 5 22:06:16.919500 kernel: KASLR enabled
Aug 5 22:06:16.919506 kernel: efi: EFI v2.7 by EDK II
Aug 5 22:06:16.919513 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb900018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Aug 5 22:06:16.919522 kernel: random: crng init done
Aug 5 22:06:16.919532 kernel: ACPI: Early table checksum verification disabled
Aug 5 22:06:16.919541 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Aug 5 22:06:16.919548 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 5 22:06:16.919556 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:06:16.919564 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:06:16.919571 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:06:16.919577 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:06:16.919585 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:06:16.919594 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:06:16.919603 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:06:16.919611 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:06:16.919618 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:06:16.919625 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 5 22:06:16.919632 kernel: NUMA: Failed to initialise from firmware
Aug 5 22:06:16.919640 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 5 22:06:16.919648 kernel: NUMA: NODE_DATA [mem 0xdc95b800-0xdc960fff]
Aug 5 22:06:16.919655 kernel: Zone ranges:
Aug 5 22:06:16.919662 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Aug 5 22:06:16.919669 kernel: DMA32 empty
Aug 5 22:06:16.919678 kernel: Normal empty
Aug 5 22:06:16.919686 kernel: Movable zone start for each node
Aug 5 22:06:16.919693 kernel: Early memory node ranges
Aug 5 22:06:16.919700 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Aug 5 22:06:16.919708 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Aug 5 22:06:16.919715 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Aug 5 22:06:16.919723 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Aug 5 22:06:16.919730 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Aug 5 22:06:16.919737 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Aug 5 22:06:16.919745 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Aug 5 22:06:16.919753 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 5 22:06:16.919760 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 5 22:06:16.919769 kernel: psci: probing for conduit method from ACPI.
Aug 5 22:06:16.919776 kernel: psci: PSCIv1.1 detected in firmware.
Aug 5 22:06:16.919784 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 5 22:06:16.919794 kernel: psci: Trusted OS migration not required
Aug 5 22:06:16.919802 kernel: psci: SMC Calling Convention v1.1
Aug 5 22:06:16.919811 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 5 22:06:16.919822 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Aug 5 22:06:16.919830 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Aug 5 22:06:16.919837 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 5 22:06:16.919844 kernel: Detected PIPT I-cache on CPU0
Aug 5 22:06:16.919852 kernel: CPU features: detected: GIC system register CPU interface
Aug 5 22:06:16.919861 kernel: CPU features: detected: Hardware dirty bit management
Aug 5 22:06:16.919891 kernel: CPU features: detected: Spectre-v4
Aug 5 22:06:16.919899 kernel: CPU features: detected: Spectre-BHB
Aug 5 22:06:16.919907 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 5 22:06:16.919915 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 5 22:06:16.919925 kernel: CPU features: detected: ARM erratum 1418040
Aug 5 22:06:16.919933 kernel: alternatives: applying boot alternatives
Aug 5 22:06:16.919942 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4052403b8e39e55d48e6afcca927358798017aa0d33c868bc3038260a8d9be90
Aug 5 22:06:16.919952 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 5 22:06:16.919961 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 5 22:06:16.919969 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 5 22:06:16.919977 kernel: Fallback order for Node 0: 0
Aug 5 22:06:16.919985 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Aug 5 22:06:16.919992 kernel: Policy zone: DMA
Aug 5 22:06:16.920001 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 5 22:06:16.920009 kernel: software IO TLB: area num 4.
Aug 5 22:06:16.920020 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Aug 5 22:06:16.920029 kernel: Memory: 2386864K/2572288K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 185424K reserved, 0K cma-reserved)
Aug 5 22:06:16.920038 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 5 22:06:16.920047 kernel: trace event string verifier disabled
Aug 5 22:06:16.920057 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 5 22:06:16.920068 kernel: rcu: RCU event tracing is enabled.
Aug 5 22:06:16.920076 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 5 22:06:16.920085 kernel: Trampoline variant of Tasks RCU enabled.
Aug 5 22:06:16.920093 kernel: Tracing variant of Tasks RCU enabled.
Aug 5 22:06:16.920101 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 5 22:06:16.920110 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 5 22:06:16.920120 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 5 22:06:16.920130 kernel: GICv3: 256 SPIs implemented
Aug 5 22:06:16.920138 kernel: GICv3: 0 Extended SPIs implemented
Aug 5 22:06:16.920146 kernel: Root IRQ handler: gic_handle_irq
Aug 5 22:06:16.920154 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 5 22:06:16.920162 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 5 22:06:16.920170 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 5 22:06:16.920178 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Aug 5 22:06:16.920187 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Aug 5 22:06:16.920197 kernel: GICv3: using LPI property table @0x00000000400f0000
Aug 5 22:06:16.920203 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Aug 5 22:06:16.920210 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 5 22:06:16.920218 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 22:06:16.920225 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 5 22:06:16.920232 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 5 22:06:16.920239 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 5 22:06:16.920246 kernel: arm-pv: using stolen time PV
Aug 5 22:06:16.920253 kernel: Console: colour dummy device 80x25
Aug 5 22:06:16.920260 kernel: ACPI: Core revision 20230628
Aug 5 22:06:16.920267 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 5 22:06:16.920274 kernel: pid_max: default: 32768 minimum: 301
Aug 5 22:06:16.920281 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Aug 5 22:06:16.920290 kernel: SELinux: Initializing.
Aug 5 22:06:16.920297 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 22:06:16.920304 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 22:06:16.920311 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:06:16.920318 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:06:16.920325 kernel: rcu: Hierarchical SRCU implementation.
Aug 5 22:06:16.920332 kernel: rcu: Max phase no-delay instances is 400.
Aug 5 22:06:16.920339 kernel: Platform MSI: ITS@0x8080000 domain created
Aug 5 22:06:16.920346 kernel: PCI/MSI: ITS@0x8080000 domain created
Aug 5 22:06:16.920354 kernel: Remapping and enabling EFI services.
Aug 5 22:06:16.920361 kernel: smp: Bringing up secondary CPUs ...
Aug 5 22:06:16.920368 kernel: Detected PIPT I-cache on CPU1
Aug 5 22:06:16.920382 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 5 22:06:16.920390 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Aug 5 22:06:16.920397 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 22:06:16.920404 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 5 22:06:16.920411 kernel: Detected PIPT I-cache on CPU2
Aug 5 22:06:16.920418 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 5 22:06:16.920425 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Aug 5 22:06:16.920434 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 22:06:16.920441 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 5 22:06:16.920452 kernel: Detected PIPT I-cache on CPU3
Aug 5 22:06:16.920461 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 5 22:06:16.920469 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Aug 5 22:06:16.920476 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 22:06:16.920483 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 5 22:06:16.920491 kernel: smp: Brought up 1 node, 4 CPUs
Aug 5 22:06:16.920498 kernel: SMP: Total of 4 processors activated.
Aug 5 22:06:16.920507 kernel: CPU features: detected: 32-bit EL0 Support
Aug 5 22:06:16.920514 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 5 22:06:16.920521 kernel: CPU features: detected: Common not Private translations
Aug 5 22:06:16.920528 kernel: CPU features: detected: CRC32 instructions
Aug 5 22:06:16.920536 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 5 22:06:16.920543 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 5 22:06:16.920550 kernel: CPU features: detected: LSE atomic instructions
Aug 5 22:06:16.920557 kernel: CPU features: detected: Privileged Access Never
Aug 5 22:06:16.920566 kernel: CPU features: detected: RAS Extension Support
Aug 5 22:06:16.920573 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 5 22:06:16.920580 kernel: CPU: All CPU(s) started at EL1
Aug 5 22:06:16.920588 kernel: alternatives: applying system-wide alternatives
Aug 5 22:06:16.920595 kernel: devtmpfs: initialized
Aug 5 22:06:16.920602 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 5 22:06:16.920610 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 5 22:06:16.920617 kernel: pinctrl core: initialized pinctrl subsystem
Aug 5 22:06:16.920624 kernel: SMBIOS 3.0.0 present.
Aug 5 22:06:16.920633 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Aug 5 22:06:16.920641 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 5 22:06:16.920648 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 5 22:06:16.920655 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 5 22:06:16.920663 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 5 22:06:16.920670 kernel: audit: initializing netlink subsys (disabled)
Aug 5 22:06:16.920678 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Aug 5 22:06:16.920685 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 5 22:06:16.920692 kernel: cpuidle: using governor menu
Aug 5 22:06:16.920701 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 5 22:06:16.920708 kernel: ASID allocator initialised with 32768 entries
Aug 5 22:06:16.920716 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 5 22:06:16.920723 kernel: Serial: AMBA PL011 UART driver
Aug 5 22:06:16.920730 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 5 22:06:16.920737 kernel: Modules: 0 pages in range for non-PLT usage
Aug 5 22:06:16.920744 kernel: Modules: 509120 pages in range for PLT usage
Aug 5 22:06:16.920752 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 5 22:06:16.920759 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 5 22:06:16.920768 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 5 22:06:16.920776 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 5 22:06:16.920783 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 5 22:06:16.920790 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 5 22:06:16.920797 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 5 22:06:16.920805 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 5 22:06:16.920812 kernel: ACPI: Added _OSI(Module Device)
Aug 5 22:06:16.920819 kernel: ACPI: Added _OSI(Processor Device)
Aug 5 22:06:16.920827 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Aug 5 22:06:16.920835 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 5 22:06:16.920843 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 5 22:06:16.920850 kernel: ACPI: Interpreter enabled
Aug 5 22:06:16.920858 kernel: ACPI: Using GIC for interrupt routing
Aug 5 22:06:16.920865 kernel: ACPI: MCFG table detected, 1 entries
Aug 5 22:06:16.920878 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 5 22:06:16.920885 kernel: printk: console [ttyAMA0] enabled
Aug 5 22:06:16.920893 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 5 22:06:16.921021 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 5 22:06:16.921096 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 5 22:06:16.921161 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 5 22:06:16.921225 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 5 22:06:16.921289 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 5 22:06:16.921299 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 5 22:06:16.921306 kernel: PCI host bridge to bus 0000:00
Aug 5 22:06:16.921385 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 5 22:06:16.921456 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 5 22:06:16.921515 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 5 22:06:16.921572 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 5 22:06:16.921657 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Aug 5 22:06:16.921733 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Aug 5 22:06:16.921802 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Aug 5 22:06:16.921956 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Aug 5 22:06:16.922030 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 5 22:06:16.922095 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 5 22:06:16.922161 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Aug 5 22:06:16.922229 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Aug 5 22:06:16.922300 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 5 22:06:16.922359 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 5 22:06:16.922431 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 5 22:06:16.922441 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 5 22:06:16.922449 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 5 22:06:16.922456 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 5 22:06:16.922464 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 5 22:06:16.922471 kernel: iommu: Default domain type: Translated
Aug 5 22:06:16.922479 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 5 22:06:16.922486 kernel: efivars: Registered efivars operations
Aug 5 22:06:16.922494 kernel: vgaarb: loaded
Aug 5 22:06:16.922503 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 5 22:06:16.922511 kernel: VFS: Disk quotas dquot_6.6.0
Aug 5 22:06:16.922518 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 5 22:06:16.922525 kernel: pnp: PnP ACPI init
Aug 5 22:06:16.922603 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 5 22:06:16.922613 kernel: pnp: PnP ACPI: found 1 devices
Aug 5 22:06:16.922621 kernel: NET: Registered PF_INET protocol family
Aug 5 22:06:16.922628 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 5 22:06:16.922637 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 5 22:06:16.922645 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 5 22:06:16.922652 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 5 22:06:16.922660 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 5 22:06:16.922667 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 5 22:06:16.922675 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 22:06:16.922682 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 22:06:16.922689 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 5 22:06:16.922697 kernel: PCI: CLS 0 bytes, default 64
Aug 5 22:06:16.922705 kernel: kvm [1]: HYP mode not available
Aug 5 22:06:16.922712 kernel: Initialise system trusted keyrings
Aug 5 22:06:16.922728 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 5 22:06:16.922736 kernel: Key type asymmetric registered
Aug 5 22:06:16.922744 kernel: Asymmetric key parser 'x509' registered
Aug 5 22:06:16.922753 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 5 22:06:16.922762 kernel: io scheduler mq-deadline registered
Aug 5 22:06:16.922770 kernel: io scheduler kyber registered
Aug 5 22:06:16.922779 kernel: io scheduler bfq registered
Aug 5 22:06:16.922793 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 5 22:06:16.922802 kernel: ACPI: button: Power Button [PWRB]
Aug 5 22:06:16.922813 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 5 22:06:16.922931 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 5 22:06:16.922949 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 5 22:06:16.922960 kernel: thunder_xcv, ver 1.0
Aug 5 22:06:16.922970 kernel: thunder_bgx, ver 1.0
Aug 5 22:06:16.922980 kernel: nicpf, ver 1.0
Aug 5 22:06:16.922988 kernel: nicvf, ver 1.0
Aug 5 22:06:16.923126 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 5 22:06:16.923216 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-08-05T22:06:16 UTC (1722895576)
Aug 5 22:06:16.923228 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 5 22:06:16.923239 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Aug 5 22:06:16.923249 kernel: watchdog: Delayed init of the lockup detector failed: -19
Aug 5 22:06:16.923260 kernel: watchdog: Hard watchdog permanently disabled
Aug 5 22:06:16.923270 kernel: NET: Registered PF_INET6 protocol family
Aug 5 22:06:16.923280 kernel: Segment Routing with IPv6
Aug 5 22:06:16.923295 kernel: In-situ OAM (IOAM) with IPv6
Aug 5 22:06:16.923304 kernel: NET: Registered PF_PACKET protocol family
Aug 5 22:06:16.923313 kernel: Key type dns_resolver registered
Aug 5 22:06:16.923323 kernel: registered taskstats version 1
Aug 5 22:06:16.923334 kernel: Loading compiled-in X.509 certificates
Aug 5 22:06:16.923345 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: 99cab5c9e2f0f3a5ca972c2df7b3d6ed64d627d4'
Aug 5 22:06:16.923355 kernel: Key type .fscrypt registered
Aug 5 22:06:16.923364 kernel: Key type fscrypt-provisioning registered
Aug 5 22:06:16.923373 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 5 22:06:16.923389 kernel: ima: Allocated hash algorithm: sha1
Aug 5 22:06:16.923402 kernel: ima: No architecture policies found
Aug 5 22:06:16.923411 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 5 22:06:16.923420 kernel: clk: Disabling unused clocks
Aug 5 22:06:16.923428 kernel: Freeing unused kernel memory: 39040K
Aug 5 22:06:16.923435 kernel: Run /init as init process
Aug 5 22:06:16.923442 kernel: with arguments:
Aug 5 22:06:16.923449 kernel: /init
Aug 5 22:06:16.923456 kernel: with environment:
Aug 5 22:06:16.923465 kernel: HOME=/
Aug 5 22:06:16.923472 kernel: TERM=linux
Aug 5 22:06:16.923479 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 5 22:06:16.923488 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 22:06:16.923498 systemd[1]: Detected virtualization kvm.
Aug 5 22:06:16.923505 systemd[1]: Detected architecture arm64.
Aug 5 22:06:16.923513 systemd[1]: Running in initrd.
Aug 5 22:06:16.923521 systemd[1]: No hostname configured, using default hostname.
Aug 5 22:06:16.923530 systemd[1]: Hostname set to .
Aug 5 22:06:16.923538 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 22:06:16.923546 systemd[1]: Queued start job for default target initrd.target.
Aug 5 22:06:16.923554 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:06:16.923562 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:06:16.923570 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 5 22:06:16.923579 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 22:06:16.923586 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 5 22:06:16.923596 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 5 22:06:16.923605 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 5 22:06:16.923613 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 5 22:06:16.923621 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:06:16.923629 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:06:16.923637 systemd[1]: Reached target paths.target - Path Units.
Aug 5 22:06:16.923647 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 22:06:16.923655 systemd[1]: Reached target swap.target - Swaps.
Aug 5 22:06:16.923663 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 22:06:16.923671 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 22:06:16.923679 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 22:06:16.923687 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 5 22:06:16.923695 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 5 22:06:16.923703 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:06:16.923711 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:06:16.923720 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:06:16.923728 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 22:06:16.923736 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 5 22:06:16.923744 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 22:06:16.923752 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 5 22:06:16.923760 systemd[1]: Starting systemd-fsck-usr.service...
Aug 5 22:06:16.923768 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 22:06:16.923776 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 22:06:16.923784 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:06:16.923793 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 5 22:06:16.923801 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:06:16.923809 systemd[1]: Finished systemd-fsck-usr.service.
Aug 5 22:06:16.923833 systemd-journald[238]: Collecting audit messages is disabled.
Aug 5 22:06:16.923855 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 22:06:16.923863 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:06:16.923904 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 5 22:06:16.923913 systemd-journald[238]: Journal started
Aug 5 22:06:16.923934 systemd-journald[238]: Runtime Journal (/run/log/journal/2057f2524d574eb7843baa6dfeb9fb57) is 5.9M, max 47.3M, 41.4M free.
Aug 5 22:06:16.904973 systemd-modules-load[239]: Inserted module 'overlay'
Aug 5 22:06:16.926616 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:06:16.928432 systemd-modules-load[239]: Inserted module 'br_netfilter'
Aug 5 22:06:16.931195 kernel: Bridge firewalling registered
Aug 5 22:06:16.931215 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 22:06:16.931636 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:06:16.933057 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:06:16.937905 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 22:06:16.940553 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 22:06:16.944071 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 22:06:16.952778 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:06:16.954241 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:06:16.956186 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:06:16.958243 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:06:16.971021 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 5 22:06:16.973239 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 22:06:16.986219 dracut-cmdline[277]: dracut-dracut-053
Aug 5 22:06:16.989226 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4052403b8e39e55d48e6afcca927358798017aa0d33c868bc3038260a8d9be90
Aug 5 22:06:17.013433 systemd-resolved[281]: Positive Trust Anchors:
Aug 5 22:06:17.013451 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 22:06:17.013482 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 22:06:17.019302 systemd-resolved[281]: Defaulting to hostname 'linux'.
Aug 5 22:06:17.021290 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 22:06:17.022475 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:06:17.068898 kernel: SCSI subsystem initialized
Aug 5 22:06:17.073887 kernel: Loading iSCSI transport class v2.0-870.
Aug 5 22:06:17.081892 kernel: iscsi: registered transport (tcp)
Aug 5 22:06:17.095890 kernel: iscsi: registered transport (qla4xxx)
Aug 5 22:06:17.095906 kernel: QLogic iSCSI HBA Driver
Aug 5 22:06:17.141741 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 5 22:06:17.151996 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 5 22:06:17.168008 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 5 22:06:17.168054 kernel: device-mapper: uevent: version 1.0.3
Aug 5 22:06:17.175893 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 5 22:06:17.222893 kernel: raid6: neonx8 gen() 15757 MB/s
Aug 5 22:06:17.239882 kernel: raid6: neonx4 gen() 15647 MB/s
Aug 5 22:06:17.256881 kernel: raid6: neonx2 gen() 13270 MB/s
Aug 5 22:06:17.273879 kernel: raid6: neonx1 gen() 10464 MB/s
Aug 5 22:06:17.290882 kernel: raid6: int64x8 gen() 6960 MB/s
Aug 5 22:06:17.307881 kernel: raid6: int64x4 gen() 7344 MB/s
Aug 5 22:06:17.324883 kernel: raid6: int64x2 gen() 6112 MB/s
Aug 5 22:06:17.341879 kernel: raid6: int64x1 gen() 5052 MB/s
Aug 5 22:06:17.341892 kernel: raid6: using algorithm neonx8 gen() 15757 MB/s
Aug 5 22:06:17.359021 kernel: raid6: .... xor() 11912 MB/s, rmw enabled
Aug 5 22:06:17.359037 kernel: raid6: using neon recovery algorithm
Aug 5 22:06:17.363885 kernel: xor: measuring software checksum speed
Aug 5 22:06:17.364887 kernel: 8regs : 19869 MB/sec
Aug 5 22:06:17.365888 kernel: 32regs : 19725 MB/sec
Aug 5 22:06:17.367213 kernel: arm64_neon : 27089 MB/sec
Aug 5 22:06:17.367226 kernel: xor: using function: arm64_neon (27089 MB/sec)
Aug 5 22:06:17.420226 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 5 22:06:17.431519 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 22:06:17.441052 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:06:17.452823 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Aug 5 22:06:17.455948 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:06:17.468193 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 5 22:06:17.479300 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Aug 5 22:06:17.504994 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 22:06:17.513033 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 22:06:17.552056 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:06:17.560056 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 5 22:06:17.575052 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 5 22:06:17.576674 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 22:06:17.578507 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:06:17.580490 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 22:06:17.592080 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 5 22:06:17.600243 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Aug 5 22:06:17.604713 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 5 22:06:17.604826 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 5 22:06:17.604839 kernel: GPT:9289727 != 19775487
Aug 5 22:06:17.604849 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 5 22:06:17.604858 kernel: GPT:9289727 != 19775487
Aug 5 22:06:17.604878 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 5 22:06:17.604889 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:06:17.602648 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 22:06:17.610607 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 22:06:17.610721 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:06:17.616733 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:06:17.618001 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:06:17.618199 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:06:17.622144 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:06:17.628442 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (521)
Aug 5 22:06:17.628466 kernel: BTRFS: device fsid 278882ec-4175-45f0-a12b-7fddc0d6d9a3 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (525)
Aug 5 22:06:17.632081 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:06:17.642741 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 5 22:06:17.644200 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:06:17.653181 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 5 22:06:17.660330 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 5 22:06:17.664187 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 5 22:06:17.665435 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 5 22:06:17.681013 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 5 22:06:17.682712 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:06:17.688493 disk-uuid[553]: Primary Header is updated.
Aug 5 22:06:17.688493 disk-uuid[553]: Secondary Entries is updated.
Aug 5 22:06:17.688493 disk-uuid[553]: Secondary Header is updated.
Aug 5 22:06:17.691886 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:06:17.708223 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:06:18.700893 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:06:18.703623 disk-uuid[554]: The operation has completed successfully.
Aug 5 22:06:18.724732 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 5 22:06:18.724849 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 5 22:06:18.747031 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 5 22:06:18.749986 sh[577]: Success
Aug 5 22:06:18.768898 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Aug 5 22:06:18.807269 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 5 22:06:18.808974 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 5 22:06:18.809830 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 5 22:06:18.820322 kernel: BTRFS info (device dm-0): first mount of filesystem 278882ec-4175-45f0-a12b-7fddc0d6d9a3
Aug 5 22:06:18.820374 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 5 22:06:18.820386 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 5 22:06:18.822102 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 5 22:06:18.822117 kernel: BTRFS info (device dm-0): using free space tree
Aug 5 22:06:18.825738 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 5 22:06:18.827258 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 5 22:06:18.842058 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 5 22:06:18.843694 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 5 22:06:18.852376 kernel: BTRFS info (device vda6): first mount of filesystem 47327e03-a391-4166-b35e-18ba93a1f298
Aug 5 22:06:18.852420 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 5 22:06:18.852909 kernel: BTRFS info (device vda6): using free space tree
Aug 5 22:06:18.855891 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 22:06:18.863082 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 5 22:06:18.864621 kernel: BTRFS info (device vda6): last unmount of filesystem 47327e03-a391-4166-b35e-18ba93a1f298
Aug 5 22:06:18.870811 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 5 22:06:18.878108 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 5 22:06:18.943573 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 22:06:18.959070 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 22:06:18.989601 ignition[674]: Ignition 2.18.0
Aug 5 22:06:18.989613 ignition[674]: Stage: fetch-offline
Aug 5 22:06:18.989646 ignition[674]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:06:18.992527 systemd-networkd[767]: lo: Link UP
Aug 5 22:06:18.989655 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:06:18.992530 systemd-networkd[767]: lo: Gained carrier
Aug 5 22:06:18.989744 ignition[674]: parsed url from cmdline: ""
Aug 5 22:06:18.993351 systemd-networkd[767]: Enumeration completed
Aug 5 22:06:18.989747 ignition[674]: no config URL provided
Aug 5 22:06:18.993944 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:06:18.989752 ignition[674]: reading system config file "/usr/lib/ignition/user.ign"
Aug 5 22:06:18.993948 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 22:06:18.989759 ignition[674]: no config at "/usr/lib/ignition/user.ign"
Aug 5 22:06:18.994799 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 22:06:18.989781 ignition[674]: op(1): [started] loading QEMU firmware config module
Aug 5 22:06:18.995171 systemd-networkd[767]: eth0: Link UP
Aug 5 22:06:18.989786 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 5 22:06:18.995175 systemd-networkd[767]: eth0: Gained carrier
Aug 5 22:06:19.003608 ignition[674]: op(1): [finished] loading QEMU firmware config module
Aug 5 22:06:18.995182 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:06:18.996035 systemd[1]: Reached target network.target - Network.
Aug 5 22:06:19.010959 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.62/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 5 22:06:19.046976 ignition[674]: parsing config with SHA512: ca22edef03f1f502f6c5a3e53949db711c5e3af415a66489f57917171ea6b9e6e95221eaa8e3fe90350d6638b3815e5f466920291e64ab5cf0f3a796e278f904
Aug 5 22:06:19.050997 unknown[674]: fetched base config from "system"
Aug 5 22:06:19.051006 unknown[674]: fetched user config from "qemu"
Aug 5 22:06:19.051383 ignition[674]: fetch-offline: fetch-offline passed
Aug 5 22:06:19.052855 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 22:06:19.051446 ignition[674]: Ignition finished successfully
Aug 5 22:06:19.054288 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 5 22:06:19.065060 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 5 22:06:19.075595 ignition[773]: Ignition 2.18.0
Aug 5 22:06:19.075605 ignition[773]: Stage: kargs
Aug 5 22:06:19.075759 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:06:19.075768 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:06:19.076596 ignition[773]: kargs: kargs passed
Aug 5 22:06:19.078631 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 5 22:06:19.076649 ignition[773]: Ignition finished successfully
Aug 5 22:06:19.096069 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 5 22:06:19.105503 ignition[782]: Ignition 2.18.0
Aug 5 22:06:19.105513 ignition[782]: Stage: disks
Aug 5 22:06:19.105808 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:06:19.105818 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:06:19.107217 ignition[782]: disks: disks passed
Aug 5 22:06:19.107269 ignition[782]: Ignition finished successfully
Aug 5 22:06:19.110646 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 5 22:06:19.112137 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 5 22:06:19.113148 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 5 22:06:19.114750 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 22:06:19.116230 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 22:06:19.117614 systemd[1]: Reached target basic.target - Basic System.
Aug 5 22:06:19.131037 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 5 22:06:19.142334 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 5 22:06:19.146010 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 5 22:06:19.148148 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 5 22:06:19.193881 kernel: EXT4-fs (vda9): mounted filesystem 44c9fced-dca5-4347-a15f-96911c2e5e61 r/w with ordered data mode. Quota mode: none.
Aug 5 22:06:19.194264 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 5 22:06:19.195327 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 5 22:06:19.211955 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 22:06:19.213978 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 5 22:06:19.214926 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 5 22:06:19.214967 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 5 22:06:19.215004 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 22:06:19.220497 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 5 22:06:19.222898 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 5 22:06:19.225886 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801)
Aug 5 22:06:19.225917 kernel: BTRFS info (device vda6): first mount of filesystem 47327e03-a391-4166-b35e-18ba93a1f298
Aug 5 22:06:19.226901 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 5 22:06:19.226927 kernel: BTRFS info (device vda6): using free space tree
Aug 5 22:06:19.230882 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 22:06:19.242086 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 22:06:19.284934 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Aug 5 22:06:19.289035 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Aug 5 22:06:19.292942 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Aug 5 22:06:19.296783 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 5 22:06:19.369747 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 5 22:06:19.379021 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 5 22:06:19.381672 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 5 22:06:19.386892 kernel: BTRFS info (device vda6): last unmount of filesystem 47327e03-a391-4166-b35e-18ba93a1f298
Aug 5 22:06:19.403426 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 5 22:06:19.405029 ignition[915]: INFO : Ignition 2.18.0
Aug 5 22:06:19.405029 ignition[915]: INFO : Stage: mount
Aug 5 22:06:19.407607 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:06:19.407607 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:06:19.407607 ignition[915]: INFO : mount: mount passed
Aug 5 22:06:19.407607 ignition[915]: INFO : Ignition finished successfully
Aug 5 22:06:19.409263 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 5 22:06:19.428972 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 5 22:06:19.819376 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 5 22:06:19.829059 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 22:06:19.835413 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (929)
Aug 5 22:06:19.835449 kernel: BTRFS info (device vda6): first mount of filesystem 47327e03-a391-4166-b35e-18ba93a1f298
Aug 5 22:06:19.835460 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 5 22:06:19.836978 kernel: BTRFS info (device vda6): using free space tree
Aug 5 22:06:19.838885 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 22:06:19.840083 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 22:06:19.856741 ignition[946]: INFO : Ignition 2.18.0
Aug 5 22:06:19.856741 ignition[946]: INFO : Stage: files
Aug 5 22:06:19.858299 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:06:19.858299 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:06:19.858299 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Aug 5 22:06:19.861687 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 5 22:06:19.861687 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 5 22:06:19.863908 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 5 22:06:19.863908 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 5 22:06:19.863908 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 5 22:06:19.863597 unknown[946]: wrote ssh authorized keys file for user: core
Aug 5 22:06:19.867842 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 5 22:06:19.867842 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Aug 5 22:06:20.105481 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 5 22:06:20.148412 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 5 22:06:20.150031 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 5 22:06:20.150031 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 5 22:06:20.150031 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 22:06:20.150031 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 22:06:20.150031 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 22:06:20.150031 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 22:06:20.150031 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 22:06:20.150031 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 22:06:20.150031 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 22:06:20.150031 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 22:06:20.150031 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Aug 5 22:06:20.150031 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Aug 5 22:06:20.150031 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Aug 5 22:06:20.150031 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Aug 5 22:06:20.475937 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 5 22:06:20.631192 systemd-networkd[767]: eth0: Gained IPv6LL
Aug 5 22:06:20.699067 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Aug 5 22:06:20.699067 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 5 22:06:20.702256 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 22:06:20.702256 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 22:06:20.702256 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 5 22:06:20.702256 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Aug 5 22:06:20.702256 ignition[946]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 5 22:06:20.702256 ignition[946]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 5 22:06:20.702256 ignition[946]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Aug 5 22:06:20.702256 ignition[946]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Aug 5 22:06:20.720319 ignition[946]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 5 22:06:20.723811 ignition[946]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 5 22:06:20.725170 ignition[946]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 5 22:06:20.725170 ignition[946]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Aug 5 22:06:20.725170 ignition[946]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Aug 5 22:06:20.725170 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 22:06:20.725170 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 22:06:20.725170 ignition[946]: INFO : files: files passed
Aug 5 22:06:20.725170 ignition[946]: INFO : Ignition finished successfully
Aug 5 22:06:20.726937 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 5 22:06:20.738016 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 5 22:06:20.740172 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 5 22:06:20.741596 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 5 22:06:20.741673 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 5 22:06:20.747252 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 5 22:06:20.749820 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 22:06:20.749820 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 22:06:20.752961 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 22:06:20.754300 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 22:06:20.755312 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 5 22:06:20.763016 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 5 22:06:20.782825 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 5 22:06:20.782970 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 5 22:06:20.784825 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 5 22:06:20.786461 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 5 22:06:20.788177 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 5 22:06:20.788917 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 5 22:06:20.803679 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 5 22:06:20.814079 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 5 22:06:20.821993 systemd[1]: Stopped target network.target - Network.
Aug 5 22:06:20.822910 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:06:20.824551 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:06:20.826378 systemd[1]: Stopped target timers.target - Timer Units.
Aug 5 22:06:20.827937 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 5 22:06:20.828057 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 5 22:06:20.830240 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 5 22:06:20.832081 systemd[1]: Stopped target basic.target - Basic System.
Aug 5 22:06:20.833564 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 5 22:06:20.835055 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 22:06:20.836811 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 5 22:06:20.838443 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 5 22:06:20.839998 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 22:06:20.841619 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 5 22:06:20.843263 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 5 22:06:20.844673 systemd[1]: Stopped target swap.target - Swaps.
Aug 5 22:06:20.846064 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 5 22:06:20.846177 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 22:06:20.848088 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:06:20.849708 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:06:20.851200 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 5 22:06:20.851929 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:06:20.853127 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 5 22:06:20.853236 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 5 22:06:20.855712 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 5 22:06:20.855822 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 22:06:20.857564 systemd[1]: Stopped target paths.target - Path Units.
Aug 5 22:06:20.858837 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 5 22:06:20.860296 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:06:20.861483 systemd[1]: Stopped target slices.target - Slice Units.
Aug 5 22:06:20.862795 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 5 22:06:20.864486 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 5 22:06:20.864583 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 22:06:20.866270 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 5 22:06:20.866350 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 22:06:20.867778 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 5 22:06:20.867901 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 22:06:20.869521 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 5 22:06:20.869617 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 5 22:06:20.882018 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 5 22:06:20.883536 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 5 22:06:20.884520 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 5 22:06:20.885844 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 5 22:06:20.887279 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 5 22:06:20.887408 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:06:20.893921 ignition[1002]: INFO : Ignition 2.18.0
Aug 5 22:06:20.893921 ignition[1002]: INFO : Stage: umount
Aug 5 22:06:20.888587 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 5 22:06:20.896555 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:06:20.896555 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:06:20.896555 ignition[1002]: INFO : umount: umount passed
Aug 5 22:06:20.896555 ignition[1002]: INFO : Ignition finished successfully
Aug 5 22:06:20.888689 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 22:06:20.894907 systemd-networkd[767]: eth0: DHCPv6 lease lost
Aug 5 22:06:20.896316 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 5 22:06:20.896430 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 5 22:06:20.899687 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 5 22:06:20.900123 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 5 22:06:20.900226 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 5 22:06:20.902051 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 5 22:06:20.902834 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 5 22:06:20.904070 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 5 22:06:20.904159 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 5 22:06:20.906453 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 5 22:06:20.906547 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 5 22:06:20.909524 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 5 22:06:20.909591 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:06:20.911357 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 5 22:06:20.911420 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 5 22:06:20.913081 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 5 22:06:20.913128 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 5 22:06:20.914713 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 5 22:06:20.914759 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 5 22:06:20.916164 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 5 22:06:20.916209 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 5 22:06:20.917855 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 5 22:06:20.917909 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 5 22:06:20.925970 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 5 22:06:20.926893 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 5 22:06:20.926950 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 22:06:20.928721 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 5 22:06:20.928786 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:06:20.930496 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 5 22:06:20.930537 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:06:20.932165 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 5 22:06:20.932205 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:06:20.934112 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:06:20.944237 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 5 22:06:20.944327 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 5 22:06:20.957584 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 5 22:06:20.957722 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:06:20.959853 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 5 22:06:20.959921 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:06:20.961723 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 5 22:06:20.961754 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:06:20.963447 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 5 22:06:20.963495 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 22:06:20.966040 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 5 22:06:20.966082 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 5 22:06:20.968430 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 22:06:20.968474 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:06:20.982999 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 5 22:06:20.984003 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 5 22:06:20.984059 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:06:20.986082 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 5 22:06:20.986126 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:06:20.988037 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 5 22:06:20.988084 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:06:20.990201 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:06:20.990246 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:06:20.992217 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 5 22:06:20.992330 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 5 22:06:20.995440 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 5 22:06:20.997031 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 5 22:06:21.007041 systemd[1]: Switching root.
Aug 5 22:06:21.029771 systemd-journald[238]: Journal stopped
Aug 5 22:06:21.685319 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Aug 5 22:06:21.685379 kernel: SELinux: policy capability network_peer_controls=1
Aug 5 22:06:21.685393 kernel: SELinux: policy capability open_perms=1
Aug 5 22:06:21.685403 kernel: SELinux: policy capability extended_socket_class=1
Aug 5 22:06:21.685414 kernel: SELinux: policy capability always_check_network=0
Aug 5 22:06:21.685424 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 5 22:06:21.685434 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 5 22:06:21.685443 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 5 22:06:21.685452 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 5 22:06:21.685461 kernel: audit: type=1403 audit(1722895581.168:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 5 22:06:21.685475 systemd[1]: Successfully loaded SELinux policy in 31.792ms.
Aug 5 22:06:21.685498 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.980ms.
Aug 5 22:06:21.685509 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 22:06:21.685520 systemd[1]: Detected virtualization kvm.
Aug 5 22:06:21.685530 systemd[1]: Detected architecture arm64.
Aug 5 22:06:21.685541 systemd[1]: Detected first boot.
Aug 5 22:06:21.685551 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 22:06:21.685562 zram_generator::config[1045]: No configuration found.
Aug 5 22:06:21.685575 systemd[1]: Populated /etc with preset unit settings.
Aug 5 22:06:21.685588 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 5 22:06:21.685599 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 5 22:06:21.685609 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 5 22:06:21.685619 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 5 22:06:21.685630 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 5 22:06:21.685640 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 5 22:06:21.685651 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 5 22:06:21.685662 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 5 22:06:21.685675 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 5 22:06:21.685687 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 5 22:06:21.685698 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 5 22:06:21.685708 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:06:21.685735 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:06:21.685746 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 5 22:06:21.685757 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 5 22:06:21.685768 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 5 22:06:21.685782 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 22:06:21.685793 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Aug 5 22:06:21.685803 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:06:21.685813 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 5 22:06:21.685825 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 5 22:06:21.685835 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 5 22:06:21.685846 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 5 22:06:21.685856 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:06:21.685894 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 22:06:21.685906 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 22:06:21.685917 systemd[1]: Reached target swap.target - Swaps.
Aug 5 22:06:21.685928 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 5 22:06:21.685939 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 5 22:06:21.685950 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:06:21.685960 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:06:21.685971 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:06:21.685982 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 5 22:06:21.685996 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 5 22:06:21.686007 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 5 22:06:21.686017 systemd[1]: Mounting media.mount - External Media Directory...
Aug 5 22:06:21.686028 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 5 22:06:21.686038 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 5 22:06:21.686049 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 5 22:06:21.686060 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 5 22:06:21.686070 systemd[1]: Reached target machines.target - Containers.
Aug 5 22:06:21.686080 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 5 22:06:21.686092 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:06:21.686103 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 22:06:21.686113 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 5 22:06:21.686124 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:06:21.686135 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 22:06:21.686145 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:06:21.686155 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 5 22:06:21.686166 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:06:21.686179 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 5 22:06:21.686190 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 5 22:06:21.686201 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 5 22:06:21.686212 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 5 22:06:21.686223 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 5 22:06:21.686233 kernel: fuse: init (API version 7.39)
Aug 5 22:06:21.686243 kernel: loop: module loaded
Aug 5 22:06:21.686252 kernel: ACPI: bus type drm_connector registered
Aug 5 22:06:21.686262 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 22:06:21.686274 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 22:06:21.686284 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 5 22:06:21.686295 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 5 22:06:21.686321 systemd-journald[1111]: Collecting audit messages is disabled.
Aug 5 22:06:21.686342 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 22:06:21.686353 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 5 22:06:21.686369 systemd-journald[1111]: Journal started
Aug 5 22:06:21.686394 systemd-journald[1111]: Runtime Journal (/run/log/journal/2057f2524d574eb7843baa6dfeb9fb57) is 5.9M, max 47.3M, 41.4M free.
Aug 5 22:06:21.516891 systemd[1]: Queued start job for default target multi-user.target.
Aug 5 22:06:21.530314 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 5 22:06:21.530662 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 5 22:06:21.688286 systemd[1]: Stopped verity-setup.service.
Aug 5 22:06:21.691541 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 22:06:21.692205 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 5 22:06:21.693269 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 5 22:06:21.694455 systemd[1]: Mounted media.mount - External Media Directory.
Aug 5 22:06:21.695555 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 5 22:06:21.696560 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 5 22:06:21.697701 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 5 22:06:21.698926 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 5 22:06:21.700265 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:06:21.701699 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 5 22:06:21.701856 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 5 22:06:21.703261 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:06:21.703410 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:06:21.704702 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 22:06:21.704841 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 22:06:21.706113 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:06:21.706253 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:06:21.707969 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 5 22:06:21.710005 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 5 22:06:21.711072 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:06:21.711201 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:06:21.712525 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:06:21.713842 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 5 22:06:21.715072 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 5 22:06:21.727396 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 5 22:06:21.736968 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 5 22:06:21.738934 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 5 22:06:21.739993 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 5 22:06:21.740035 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 22:06:21.741706 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 5 22:06:21.743950 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 5 22:06:21.745978 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 5 22:06:21.746952 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:06:21.748337 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 5 22:06:21.750026 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 5 22:06:21.751191 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 22:06:21.755037 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 5 22:06:21.756125 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 22:06:21.760214 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 22:06:21.763092 systemd-journald[1111]: Time spent on flushing to /var/log/journal/2057f2524d574eb7843baa6dfeb9fb57 is 21.244ms for 853 entries.
Aug 5 22:06:21.763092 systemd-journald[1111]: System Journal (/var/log/journal/2057f2524d574eb7843baa6dfeb9fb57) is 8.0M, max 195.6M, 187.6M free.
Aug 5 22:06:21.791267 systemd-journald[1111]: Received client request to flush runtime journal.
Aug 5 22:06:21.791315 kernel: loop0: detected capacity change from 0 to 113672
Aug 5 22:06:21.791341 kernel: block loop0: the capability attribute has been deprecated.
Aug 5 22:06:21.763702 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 5 22:06:21.769292 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 22:06:21.771915 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:06:21.773260 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 5 22:06:21.774500 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 5 22:06:21.775946 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 5 22:06:21.778222 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 5 22:06:21.784930 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 5 22:06:21.794190 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 5 22:06:21.801056 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 5 22:06:21.807943 systemd-tmpfiles[1156]: ACLs are not supported, ignoring.
Aug 5 22:06:21.807957 systemd-tmpfiles[1156]: ACLs are not supported, ignoring.
Aug 5 22:06:21.808194 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 5 22:06:21.809641 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 5 22:06:21.812901 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:06:21.814723 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:06:21.821220 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 5 22:06:21.824008 kernel: loop1: detected capacity change from 0 to 194096
Aug 5 22:06:21.823398 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 5 22:06:21.826981 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 5 22:06:21.828137 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 5 22:06:21.850909 kernel: loop2: detected capacity change from 0 to 59688
Aug 5 22:06:21.860087 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 5 22:06:21.866092 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 22:06:21.879107 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Aug 5 22:06:21.879128 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Aug 5 22:06:21.882819 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:06:21.886898 kernel: loop3: detected capacity change from 0 to 113672
Aug 5 22:06:21.891884 kernel: loop4: detected capacity change from 0 to 194096
Aug 5 22:06:21.897894 kernel: loop5: detected capacity change from 0 to 59688
Aug 5 22:06:21.900057 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug 5 22:06:21.900451 (sd-merge)[1183]: Merged extensions into '/usr'.
Aug 5 22:06:21.904166 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 5 22:06:21.904278 systemd[1]: Reloading...
Aug 5 22:06:21.955205 zram_generator::config[1208]: No configuration found.
Aug 5 22:06:22.027958 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 5 22:06:22.050470 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:06:22.088572 systemd[1]: Reloading finished in 183 ms.
Aug 5 22:06:22.113178 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 5 22:06:22.114632 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 5 22:06:22.126215 systemd[1]: Starting ensure-sysext.service...
Aug 5 22:06:22.128141 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 22:06:22.140488 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)...
Aug 5 22:06:22.140508 systemd[1]: Reloading...
Aug 5 22:06:22.155958 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 5 22:06:22.156213 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 5 22:06:22.156844 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 5 22:06:22.157200 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Aug 5 22:06:22.157251 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Aug 5 22:06:22.159316 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 22:06:22.159329 systemd-tmpfiles[1243]: Skipping /boot
Aug 5 22:06:22.165567 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 22:06:22.165582 systemd-tmpfiles[1243]: Skipping /boot
Aug 5 22:06:22.187899 zram_generator::config[1269]: No configuration found.
Aug 5 22:06:22.269851 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:06:22.307530 systemd[1]: Reloading finished in 166 ms.
Aug 5 22:06:22.323637 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 5 22:06:22.337322 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:06:22.345393 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 22:06:22.347974 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 5 22:06:22.350487 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 5 22:06:22.356246 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 22:06:22.361170 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:06:22.366616 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 5 22:06:22.369682 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:06:22.373113 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:06:22.375640 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:06:22.379957 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:06:22.381139 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:06:22.382080 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 5 22:06:22.386221 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:06:22.386855 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:06:22.388403 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:06:22.388543 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:06:22.391163 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:06:22.391370 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:06:22.397534 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:06:22.401229 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:06:22.401917 systemd-udevd[1310]: Using default interface naming scheme 'v255'.
Aug 5 22:06:22.407140 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:06:22.411982 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:06:22.413059 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:06:22.415389 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 5 22:06:22.420890 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 5 22:06:22.422673 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:06:22.428025 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 5 22:06:22.429839 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 5 22:06:22.431723 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:06:22.431859 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:06:22.433465 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:06:22.433606 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:06:22.435232 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:06:22.435382 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:06:22.438982 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 5 22:06:22.454595 systemd[1]: Finished ensure-sysext.service.
Aug 5 22:06:22.458119 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1339)
Aug 5 22:06:22.458199 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1344)
Aug 5 22:06:22.462328 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:06:22.464428 augenrules[1362]: No rules
Aug 5 22:06:22.469334 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:06:22.477088 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 22:06:22.480068 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:06:22.484194 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:06:22.485770 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:06:22.488675 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 22:06:22.493889 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 5 22:06:22.495952 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 5 22:06:22.496261 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 5 22:06:22.497469 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 5 22:06:22.498680 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:06:22.499909 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:06:22.501268 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 22:06:22.501396 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 22:06:22.503275 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:06:22.503415 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:06:22.506341 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:06:22.506583 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:06:22.520034 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Aug 5 22:06:22.525219 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 5 22:06:22.536059 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 5 22:06:22.537351 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 22:06:22.537419 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 22:06:22.561173 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 5 22:06:22.591699 systemd-networkd[1374]: lo: Link UP
Aug 5 22:06:22.591707 systemd-networkd[1374]: lo: Gained carrier
Aug 5 22:06:22.592422 systemd-networkd[1374]: Enumeration completed
Aug 5 22:06:22.592530 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 22:06:22.597399 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:06:22.597407 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 22:06:22.600018 systemd-networkd[1374]: eth0: Link UP
Aug 5 22:06:22.600027 systemd-networkd[1374]: eth0: Gained carrier
Aug 5 22:06:22.600041 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:06:22.605618 systemd-resolved[1309]: Positive Trust Anchors:
Aug 5 22:06:22.605639 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 22:06:22.605671 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 22:06:22.609631 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 5 22:06:22.610820 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 5 22:06:22.613230 systemd-resolved[1309]: Defaulting to hostname 'linux'.
Aug 5 22:06:22.614384 systemd[1]: Reached target time-set.target - System Time Set.
Aug 5 22:06:22.617168 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:06:22.624090 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 22:06:22.625588 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 5 22:06:22.627573 systemd[1]: Reached target network.target - Network.
Aug 5 22:06:22.628684 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:06:22.629941 systemd-networkd[1374]: eth0: DHCPv4 address 10.0.0.62/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 5 22:06:22.630648 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection.
Aug 5 22:06:22.210774 systemd-timesyncd[1379]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 5 22:06:22.225186 systemd-journald[1111]: Time jumped backwards, rotating.
Aug 5 22:06:22.210818 systemd-resolved[1309]: Clock change detected. Flushing caches.
Aug 5 22:06:22.210825 systemd-timesyncd[1379]: Initial clock synchronization to Mon 2024-08-05 22:06:22.210672 UTC.
Aug 5 22:06:22.218799 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 5 22:06:22.244695 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 22:06:22.249673 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:06:22.281219 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 5 22:06:22.282724 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:06:22.283819 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 22:06:22.284926 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 5 22:06:22.286142 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 5 22:06:22.287507 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 5 22:06:22.288688 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 5 22:06:22.289833 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 5 22:06:22.290995 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 5 22:06:22.291032 systemd[1]: Reached target paths.target - Path Units. Aug 5 22:06:22.292022 systemd[1]: Reached target timers.target - Timer Units. Aug 5 22:06:22.293725 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 5 22:06:22.296165 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 5 22:06:22.303625 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 5 22:06:22.305865 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 5 22:06:22.307416 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 5 22:06:22.308611 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 22:06:22.309501 systemd[1]: Reached target basic.target - Basic System. Aug 5 22:06:22.310443 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 5 22:06:22.310481 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 5 22:06:22.311443 systemd[1]: Starting containerd.service - containerd container runtime... Aug 5 22:06:22.313426 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 5 22:06:22.314496 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Aug 5 22:06:22.316872 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 5 22:06:22.319427 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 5 22:06:22.323511 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 5 22:06:22.324554 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 5 22:06:22.327323 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 5 22:06:22.329074 jq[1415]: false Aug 5 22:06:22.331832 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 5 22:06:22.335069 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 5 22:06:22.338814 dbus-daemon[1414]: [system] SELinux support is enabled Aug 5 22:06:22.342871 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 5 22:06:22.345332 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 5 22:06:22.345815 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 5 22:06:22.348627 systemd[1]: Starting update-engine.service - Update Engine... 
Aug 5 22:06:22.349299 extend-filesystems[1416]: Found loop3 Aug 5 22:06:22.349299 extend-filesystems[1416]: Found loop4 Aug 5 22:06:22.351115 extend-filesystems[1416]: Found loop5 Aug 5 22:06:22.351115 extend-filesystems[1416]: Found vda Aug 5 22:06:22.351115 extend-filesystems[1416]: Found vda1 Aug 5 22:06:22.351115 extend-filesystems[1416]: Found vda2 Aug 5 22:06:22.351115 extend-filesystems[1416]: Found vda3 Aug 5 22:06:22.351115 extend-filesystems[1416]: Found usr Aug 5 22:06:22.351115 extend-filesystems[1416]: Found vda4 Aug 5 22:06:22.351115 extend-filesystems[1416]: Found vda6 Aug 5 22:06:22.351115 extend-filesystems[1416]: Found vda7 Aug 5 22:06:22.351115 extend-filesystems[1416]: Found vda9 Aug 5 22:06:22.351115 extend-filesystems[1416]: Checking size of /dev/vda9 Aug 5 22:06:22.351881 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 5 22:06:22.353691 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 5 22:06:22.362066 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 5 22:06:22.373478 jq[1432]: true Aug 5 22:06:22.366393 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 5 22:06:22.373729 extend-filesystems[1416]: Resized partition /dev/vda9 Aug 5 22:06:22.366554 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 5 22:06:22.366840 systemd[1]: motdgen.service: Deactivated successfully. Aug 5 22:06:22.366988 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 5 22:06:22.369936 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 5 22:06:22.370076 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Aug 5 22:06:22.388193 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 5 22:06:22.388229 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 5 22:06:22.389533 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 5 22:06:22.389570 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 5 22:06:22.394545 jq[1438]: true Aug 5 22:06:22.399757 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1337) Aug 5 22:06:22.400452 update_engine[1430]: I0805 22:06:22.400262 1430 main.cc:92] Flatcar Update Engine starting Aug 5 22:06:22.403944 update_engine[1430]: I0805 22:06:22.403906 1430 update_check_scheduler.cc:74] Next update check in 4m57s Aug 5 22:06:22.409383 extend-filesystems[1440]: resize2fs 1.47.0 (5-Feb-2023) Aug 5 22:06:22.416090 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 5 22:06:22.416125 tar[1436]: linux-arm64/helm Aug 5 22:06:22.410371 (ntainerd)[1445]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 5 22:06:22.411201 systemd[1]: Started update-engine.service - Update Engine. Aug 5 22:06:22.413798 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Aug 5 22:06:22.445431 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 5 22:06:22.462886 extend-filesystems[1440]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 5 22:06:22.462886 extend-filesystems[1440]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 5 22:06:22.462886 extend-filesystems[1440]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 5 22:06:22.473509 extend-filesystems[1416]: Resized filesystem in /dev/vda9 Aug 5 22:06:22.463475 systemd-logind[1423]: Watching system buttons on /dev/input/event0 (Power Button) Aug 5 22:06:22.478695 bash[1468]: Updated "/home/core/.ssh/authorized_keys" Aug 5 22:06:22.463840 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 5 22:06:22.464773 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 5 22:06:22.465242 systemd-logind[1423]: New seat seat0. Aug 5 22:06:22.471183 systemd[1]: Started systemd-logind.service - User Login Management. Aug 5 22:06:22.483770 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 5 22:06:22.485367 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 5 22:06:22.497686 locksmithd[1454]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 5 22:06:22.626162 containerd[1445]: time="2024-08-05T22:06:22.626065174Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Aug 5 22:06:22.657689 containerd[1445]: time="2024-08-05T22:06:22.657594894Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 5 22:06:22.657761 containerd[1445]: time="2024-08-05T22:06:22.657696894Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Aug 5 22:06:22.660419 containerd[1445]: time="2024-08-05T22:06:22.660375654Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:06:22.660446 containerd[1445]: time="2024-08-05T22:06:22.660418534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:06:22.660683 containerd[1445]: time="2024-08-05T22:06:22.660659614Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:06:22.660716 containerd[1445]: time="2024-08-05T22:06:22.660683254Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 5 22:06:22.660773 containerd[1445]: time="2024-08-05T22:06:22.660757294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 5 22:06:22.660824 containerd[1445]: time="2024-08-05T22:06:22.660807574Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:06:22.660846 containerd[1445]: time="2024-08-05T22:06:22.660824334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 5 22:06:22.660894 containerd[1445]: time="2024-08-05T22:06:22.660879494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Aug 5 22:06:22.661086 containerd[1445]: time="2024-08-05T22:06:22.661065494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 5 22:06:22.661122 containerd[1445]: time="2024-08-05T22:06:22.661088934Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 5 22:06:22.661122 containerd[1445]: time="2024-08-05T22:06:22.661099534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:06:22.661220 containerd[1445]: time="2024-08-05T22:06:22.661200334Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:06:22.661247 containerd[1445]: time="2024-08-05T22:06:22.661218414Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 5 22:06:22.661288 containerd[1445]: time="2024-08-05T22:06:22.661272134Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 5 22:06:22.661288 containerd[1445]: time="2024-08-05T22:06:22.661286534Z" level=info msg="metadata content store policy set" policy=shared Aug 5 22:06:22.665229 containerd[1445]: time="2024-08-05T22:06:22.665203574Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 5 22:06:22.665263 containerd[1445]: time="2024-08-05T22:06:22.665235854Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 5 22:06:22.665263 containerd[1445]: time="2024-08-05T22:06:22.665249294Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Aug 5 22:06:22.665309 containerd[1445]: time="2024-08-05T22:06:22.665278534Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 5 22:06:22.665309 containerd[1445]: time="2024-08-05T22:06:22.665293094Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 5 22:06:22.665309 containerd[1445]: time="2024-08-05T22:06:22.665304014Z" level=info msg="NRI interface is disabled by configuration." Aug 5 22:06:22.665361 containerd[1445]: time="2024-08-05T22:06:22.665315854Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 5 22:06:22.665453 containerd[1445]: time="2024-08-05T22:06:22.665432934Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 5 22:06:22.665477 containerd[1445]: time="2024-08-05T22:06:22.665454654Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 5 22:06:22.665477 containerd[1445]: time="2024-08-05T22:06:22.665469054Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 5 22:06:22.665513 containerd[1445]: time="2024-08-05T22:06:22.665482934Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 5 22:06:22.665513 containerd[1445]: time="2024-08-05T22:06:22.665497574Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 5 22:06:22.665547 containerd[1445]: time="2024-08-05T22:06:22.665513174Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 5 22:06:22.665547 containerd[1445]: time="2024-08-05T22:06:22.665527294Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Aug 5 22:06:22.665547 containerd[1445]: time="2024-08-05T22:06:22.665539814Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 5 22:06:22.665596 containerd[1445]: time="2024-08-05T22:06:22.665552894Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 5 22:06:22.665596 containerd[1445]: time="2024-08-05T22:06:22.665566334Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 5 22:06:22.665596 containerd[1445]: time="2024-08-05T22:06:22.665578174Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 5 22:06:22.665596 containerd[1445]: time="2024-08-05T22:06:22.665589134Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 5 22:06:22.665733 containerd[1445]: time="2024-08-05T22:06:22.665713174Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 5 22:06:22.666009 containerd[1445]: time="2024-08-05T22:06:22.665990814Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 5 22:06:22.666041 containerd[1445]: time="2024-08-05T22:06:22.666019814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 5 22:06:22.666041 containerd[1445]: time="2024-08-05T22:06:22.666034374Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 5 22:06:22.666078 containerd[1445]: time="2024-08-05T22:06:22.666055774Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Aug 5 22:06:22.666622 containerd[1445]: time="2024-08-05T22:06:22.666174454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 5 22:06:22.666622 containerd[1445]: time="2024-08-05T22:06:22.666191214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 5 22:06:22.666622 containerd[1445]: time="2024-08-05T22:06:22.666203894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 5 22:06:22.666622 containerd[1445]: time="2024-08-05T22:06:22.666215934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 5 22:06:22.666622 containerd[1445]: time="2024-08-05T22:06:22.666228334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 5 22:06:22.666622 containerd[1445]: time="2024-08-05T22:06:22.666240374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 5 22:06:22.666622 containerd[1445]: time="2024-08-05T22:06:22.666251654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 5 22:06:22.666622 containerd[1445]: time="2024-08-05T22:06:22.666262934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 5 22:06:22.666622 containerd[1445]: time="2024-08-05T22:06:22.666277534Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 5 22:06:22.666622 containerd[1445]: time="2024-08-05T22:06:22.666400814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 5 22:06:22.666622 containerd[1445]: time="2024-08-05T22:06:22.666417894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Aug 5 22:06:22.666622 containerd[1445]: time="2024-08-05T22:06:22.666430214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 5 22:06:22.666622 containerd[1445]: time="2024-08-05T22:06:22.666442694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 5 22:06:22.666622 containerd[1445]: time="2024-08-05T22:06:22.666456814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 5 22:06:22.666622 containerd[1445]: time="2024-08-05T22:06:22.666471334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 5 22:06:22.666903 containerd[1445]: time="2024-08-05T22:06:22.666483374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 5 22:06:22.666903 containerd[1445]: time="2024-08-05T22:06:22.666494294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 5 22:06:22.666946 containerd[1445]: time="2024-08-05T22:06:22.666757334Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 5 22:06:22.666946 containerd[1445]: time="2024-08-05T22:06:22.666815054Z" level=info msg="Connect containerd service" Aug 5 22:06:22.666946 containerd[1445]: time="2024-08-05T22:06:22.666839534Z" level=info msg="using legacy CRI server" Aug 5 22:06:22.666946 containerd[1445]: time="2024-08-05T22:06:22.666845734Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 5 22:06:22.667114 containerd[1445]: time="2024-08-05T22:06:22.667008894Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 5 22:06:22.667756 containerd[1445]: time="2024-08-05T22:06:22.667702654Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 22:06:22.667756 containerd[1445]: time="2024-08-05T22:06:22.667753414Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 5 22:06:22.667858 containerd[1445]: time="2024-08-05T22:06:22.667770334Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 5 22:06:22.667858 containerd[1445]: time="2024-08-05T22:06:22.667780374Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 5 22:06:22.667858 containerd[1445]: time="2024-08-05T22:06:22.667792174Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 5 22:06:22.668445 containerd[1445]: time="2024-08-05T22:06:22.668185694Z" level=info msg="Start subscribing containerd event" Aug 5 22:06:22.668445 containerd[1445]: time="2024-08-05T22:06:22.668298094Z" level=info msg="Start recovering state" Aug 5 22:06:22.668445 containerd[1445]: time="2024-08-05T22:06:22.668329814Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 5 22:06:22.668445 containerd[1445]: time="2024-08-05T22:06:22.668372374Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 5 22:06:22.668698 containerd[1445]: time="2024-08-05T22:06:22.668677174Z" level=info msg="Start event monitor" Aug 5 22:06:22.668839 containerd[1445]: time="2024-08-05T22:06:22.668822374Z" level=info msg="Start snapshots syncer" Aug 5 22:06:22.668892 containerd[1445]: time="2024-08-05T22:06:22.668881854Z" level=info msg="Start cni network conf syncer for default" Aug 5 22:06:22.668990 containerd[1445]: time="2024-08-05T22:06:22.668975654Z" level=info msg="Start streaming server" Aug 5 22:06:22.669221 containerd[1445]: time="2024-08-05T22:06:22.669206254Z" level=info msg="containerd successfully booted in 0.043973s" Aug 5 22:06:22.669300 systemd[1]: Started containerd.service - containerd container runtime. Aug 5 22:06:22.788574 tar[1436]: linux-arm64/LICENSE Aug 5 22:06:22.788574 tar[1436]: linux-arm64/README.md Aug 5 22:06:22.802349 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Aug 5 22:06:22.965484 sshd_keygen[1433]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 5 22:06:22.984374 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 5 22:06:22.997962 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 5 22:06:23.003230 systemd[1]: issuegen.service: Deactivated successfully. Aug 5 22:06:23.004647 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 5 22:06:23.006908 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 5 22:06:23.019726 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 5 22:06:23.033068 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 5 22:06:23.035334 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Aug 5 22:06:23.036921 systemd[1]: Reached target getty.target - Login Prompts. Aug 5 22:06:23.341719 systemd-networkd[1374]: eth0: Gained IPv6LL Aug 5 22:06:23.345717 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 5 22:06:23.347370 systemd[1]: Reached target network-online.target - Network is Online. Aug 5 22:06:23.356846 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 5 22:06:23.359136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:06:23.361180 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 5 22:06:23.377264 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 5 22:06:23.378676 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 5 22:06:23.381023 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 5 22:06:23.394699 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 5 22:06:23.840804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 5 22:06:23.842417 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 5 22:06:23.844954 (kubelet)[1525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:06:23.848787 systemd[1]: Startup finished in 553ms (kernel) + 4.465s (initrd) + 3.136s (userspace) = 8.155s. Aug 5 22:06:24.295780 kubelet[1525]: E0805 22:06:24.295672 1525 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:06:24.298241 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:06:24.298383 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:06:28.869460 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 5 22:06:28.870549 systemd[1]: Started sshd@0-10.0.0.62:22-10.0.0.1:34456.service - OpenSSH per-connection server daemon (10.0.0.1:34456). Aug 5 22:06:28.913721 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 34456 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:06:28.917137 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:06:28.925450 systemd-logind[1423]: New session 1 of user core. Aug 5 22:06:28.926415 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 5 22:06:28.943879 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 5 22:06:28.954663 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 5 22:06:28.956933 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Aug 5 22:06:28.963807 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:06:29.037712 systemd[1544]: Queued start job for default target default.target. Aug 5 22:06:29.050521 systemd[1544]: Created slice app.slice - User Application Slice. Aug 5 22:06:29.050552 systemd[1544]: Reached target paths.target - Paths. Aug 5 22:06:29.050563 systemd[1544]: Reached target timers.target - Timers. Aug 5 22:06:29.051757 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 5 22:06:29.061197 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 5 22:06:29.061257 systemd[1544]: Reached target sockets.target - Sockets. Aug 5 22:06:29.061268 systemd[1544]: Reached target basic.target - Basic System. Aug 5 22:06:29.061305 systemd[1544]: Reached target default.target - Main User Target. Aug 5 22:06:29.061329 systemd[1544]: Startup finished in 92ms. Aug 5 22:06:29.061607 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 5 22:06:29.062867 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 5 22:06:29.120858 systemd[1]: Started sshd@1-10.0.0.62:22-10.0.0.1:34468.service - OpenSSH per-connection server daemon (10.0.0.1:34468). Aug 5 22:06:29.152883 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 34468 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:06:29.154109 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:06:29.158002 systemd-logind[1423]: New session 2 of user core. Aug 5 22:06:29.169757 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 5 22:06:29.220565 sshd[1555]: pam_unix(sshd:session): session closed for user core Aug 5 22:06:29.228824 systemd[1]: sshd@1-10.0.0.62:22-10.0.0.1:34468.service: Deactivated successfully. Aug 5 22:06:29.230116 systemd[1]: session-2.scope: Deactivated successfully. 
Aug 5 22:06:29.233556 systemd-logind[1423]: Session 2 logged out. Waiting for processes to exit. Aug 5 22:06:29.233857 systemd[1]: Started sshd@2-10.0.0.62:22-10.0.0.1:34480.service - OpenSSH per-connection server daemon (10.0.0.1:34480). Aug 5 22:06:29.235045 systemd-logind[1423]: Removed session 2. Aug 5 22:06:29.265808 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 34480 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:06:29.267076 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:06:29.271725 systemd-logind[1423]: New session 3 of user core. Aug 5 22:06:29.282773 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 5 22:06:29.330871 sshd[1562]: pam_unix(sshd:session): session closed for user core Aug 5 22:06:29.351058 systemd[1]: sshd@2-10.0.0.62:22-10.0.0.1:34480.service: Deactivated successfully. Aug 5 22:06:29.352489 systemd[1]: session-3.scope: Deactivated successfully. Aug 5 22:06:29.354643 systemd-logind[1423]: Session 3 logged out. Waiting for processes to exit. Aug 5 22:06:29.355673 systemd[1]: Started sshd@3-10.0.0.62:22-10.0.0.1:34494.service - OpenSSH per-connection server daemon (10.0.0.1:34494). Aug 5 22:06:29.356385 systemd-logind[1423]: Removed session 3. Aug 5 22:06:29.387601 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 34494 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:06:29.389236 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:06:29.393276 systemd-logind[1423]: New session 4 of user core. Aug 5 22:06:29.400752 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 5 22:06:29.451652 sshd[1569]: pam_unix(sshd:session): session closed for user core Aug 5 22:06:29.460820 systemd[1]: sshd@3-10.0.0.62:22-10.0.0.1:34494.service: Deactivated successfully. Aug 5 22:06:29.462463 systemd[1]: session-4.scope: Deactivated successfully. 
Aug 5 22:06:29.463728 systemd-logind[1423]: Session 4 logged out. Waiting for processes to exit.
Aug 5 22:06:29.465437 systemd[1]: Started sshd@4-10.0.0.62:22-10.0.0.1:34508.service - OpenSSH per-connection server daemon (10.0.0.1:34508).
Aug 5 22:06:29.466420 systemd-logind[1423]: Removed session 4.
Aug 5 22:06:29.497100 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 34508 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:06:29.498229 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:06:29.502059 systemd-logind[1423]: New session 5 of user core.
Aug 5 22:06:29.516826 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 5 22:06:29.580515 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 5 22:06:29.580780 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:06:29.600406 sudo[1579]: pam_unix(sudo:session): session closed for user root
Aug 5 22:06:29.602460 sshd[1576]: pam_unix(sshd:session): session closed for user core
Aug 5 22:06:29.609128 systemd[1]: sshd@4-10.0.0.62:22-10.0.0.1:34508.service: Deactivated successfully.
Aug 5 22:06:29.610437 systemd[1]: session-5.scope: Deactivated successfully.
Aug 5 22:06:29.612713 systemd-logind[1423]: Session 5 logged out. Waiting for processes to exit.
Aug 5 22:06:29.613832 systemd[1]: Started sshd@5-10.0.0.62:22-10.0.0.1:34512.service - OpenSSH per-connection server daemon (10.0.0.1:34512).
Aug 5 22:06:29.614550 systemd-logind[1423]: Removed session 5.
Aug 5 22:06:29.646092 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 34512 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:06:29.647269 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:06:29.651030 systemd-logind[1423]: New session 6 of user core.
Aug 5 22:06:29.669769 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 5 22:06:29.720985 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 5 22:06:29.721229 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:06:29.724558 sudo[1588]: pam_unix(sudo:session): session closed for user root
Aug 5 22:06:29.728962 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 5 22:06:29.729200 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:06:29.744939 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 5 22:06:29.745960 auditctl[1591]: No rules
Aug 5 22:06:29.746759 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 5 22:06:29.746937 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 5 22:06:29.748314 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 22:06:29.770413 augenrules[1609]: No rules
Aug 5 22:06:29.772685 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 5 22:06:29.773896 sudo[1587]: pam_unix(sudo:session): session closed for user root
Aug 5 22:06:29.775226 sshd[1584]: pam_unix(sshd:session): session closed for user core
Aug 5 22:06:29.784828 systemd[1]: sshd@5-10.0.0.62:22-10.0.0.1:34512.service: Deactivated successfully.
Aug 5 22:06:29.786215 systemd[1]: session-6.scope: Deactivated successfully.
Aug 5 22:06:29.787336 systemd-logind[1423]: Session 6 logged out. Waiting for processes to exit.
Aug 5 22:06:29.788314 systemd[1]: Started sshd@6-10.0.0.62:22-10.0.0.1:34526.service - OpenSSH per-connection server daemon (10.0.0.1:34526).
Aug 5 22:06:29.789030 systemd-logind[1423]: Removed session 6.
Aug 5 22:06:29.819443 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 34526 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:06:29.820495 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:06:29.824284 systemd-logind[1423]: New session 7 of user core.
Aug 5 22:06:29.834768 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 5 22:06:29.885383 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 5 22:06:29.885646 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:06:29.995863 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 5 22:06:29.995936 (dockerd)[1630]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 5 22:06:30.224704 dockerd[1630]: time="2024-08-05T22:06:30.224652614Z" level=info msg="Starting up"
Aug 5 22:06:30.275294 dockerd[1630]: time="2024-08-05T22:06:30.275004854Z" level=info msg="Loading containers: start."
Aug 5 22:06:30.359695 kernel: Initializing XFRM netlink socket
Aug 5 22:06:30.416340 systemd-networkd[1374]: docker0: Link UP
Aug 5 22:06:30.432818 dockerd[1630]: time="2024-08-05T22:06:30.432779774Z" level=info msg="Loading containers: done."
Aug 5 22:06:30.485877 dockerd[1630]: time="2024-08-05T22:06:30.485830654Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 5 22:06:30.486024 dockerd[1630]: time="2024-08-05T22:06:30.486004174Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Aug 5 22:06:30.486158 dockerd[1630]: time="2024-08-05T22:06:30.486130374Z" level=info msg="Daemon has completed initialization"
Aug 5 22:06:30.510959 dockerd[1630]: time="2024-08-05T22:06:30.510885374Z" level=info msg="API listen on /run/docker.sock"
Aug 5 22:06:30.511176 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 5 22:06:30.984874 containerd[1445]: time="2024-08-05T22:06:30.984807134Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.3\""
Aug 5 22:06:31.242020 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3961220508-merged.mount: Deactivated successfully.
Aug 5 22:06:31.662876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount752702844.mount: Deactivated successfully.
Aug 5 22:06:33.349437 containerd[1445]: time="2024-08-05T22:06:33.349373534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:06:33.350564 containerd[1445]: time="2024-08-05T22:06:33.350520694Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.3: active requests=0, bytes read=29945894"
Aug 5 22:06:33.351668 containerd[1445]: time="2024-08-05T22:06:33.351608054Z" level=info msg="ImageCreate event name:\"sha256:61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:06:33.354126 containerd[1445]: time="2024-08-05T22:06:33.354061374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:06:33.355283 containerd[1445]: time="2024-08-05T22:06:33.355196214Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.3\" with image id \"sha256:61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c\", size \"29942692\" in 2.37034416s"
Aug 5 22:06:33.355283 containerd[1445]: time="2024-08-05T22:06:33.355234854Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.3\" returns image reference \"sha256:61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca\""
Aug 5 22:06:33.373342 containerd[1445]: time="2024-08-05T22:06:33.373258214Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.3\""
Aug 5 22:06:34.548738 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 5 22:06:34.565847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:06:34.655947 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:06:34.659470 (kubelet)[1839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:06:34.714044 kubelet[1839]: E0805 22:06:34.713946 1839 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:06:34.717199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:06:34.717325 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:06:35.723547 containerd[1445]: time="2024-08-05T22:06:35.723492494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:06:35.724538 containerd[1445]: time="2024-08-05T22:06:35.724499254Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.3: active requests=0, bytes read=26887235"
Aug 5 22:06:35.725457 containerd[1445]: time="2024-08-05T22:06:35.725401454Z" level=info msg="ImageCreate event name:\"sha256:8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:06:35.729655 containerd[1445]: time="2024-08-05T22:06:35.729585774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:06:35.731303 containerd[1445]: time="2024-08-05T22:06:35.731246734Z" level=info
msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.3\" with image id \"sha256:8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7\", size \"28374500\" in 2.357951s"
Aug 5 22:06:35.731303 containerd[1445]: time="2024-08-05T22:06:35.731285294Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.3\" returns image reference \"sha256:8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a\""
Aug 5 22:06:35.752195 containerd[1445]: time="2024-08-05T22:06:35.752165534Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.3\""
Aug 5 22:06:38.147748 containerd[1445]: time="2024-08-05T22:06:38.147703134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:06:38.148639 containerd[1445]: time="2024-08-05T22:06:38.148513574Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.3: active requests=0, bytes read=16153860"
Aug 5 22:06:38.149897 containerd[1445]: time="2024-08-05T22:06:38.149844534Z" level=info msg="ImageCreate event name:\"sha256:d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:06:38.152507 containerd[1445]: time="2024-08-05T22:06:38.152450094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:06:38.153606 containerd[1445]: time="2024-08-05T22:06:38.153578574Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.3\" with image id \"sha256:d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355\", repo tag
\"registry.k8s.io/kube-scheduler:v1.30.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4\", size \"17641143\" in 2.40137696s"
Aug 5 22:06:38.153676 containerd[1445]: time="2024-08-05T22:06:38.153607334Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.3\" returns image reference \"sha256:d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355\""
Aug 5 22:06:38.174019 containerd[1445]: time="2024-08-05T22:06:38.173808894Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.3\""
Aug 5 22:06:39.216310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2987220443.mount: Deactivated successfully.
Aug 5 22:06:39.470764 containerd[1445]: time="2024-08-05T22:06:39.470627014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:06:39.471307 containerd[1445]: time="2024-08-05T22:06:39.471259694Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.3: active requests=0, bytes read=25646938"
Aug 5 22:06:39.472182 containerd[1445]: time="2024-08-05T22:06:39.472132974Z" level=info msg="ImageCreate event name:\"sha256:2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:06:39.474444 containerd[1445]: time="2024-08-05T22:06:39.474254894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:06:39.474932 containerd[1445]: time="2024-08-05T22:06:39.474905534Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.3\" with image id \"sha256:2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be\", repo tag \"registry.k8s.io/kube-proxy:v1.30.3\", repo digest
\"registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65\", size \"25645955\" in 1.30105716s"
Aug 5 22:06:39.475095 containerd[1445]: time="2024-08-05T22:06:39.474993974Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.3\" returns image reference \"sha256:2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be\""
Aug 5 22:06:39.493697 containerd[1445]: time="2024-08-05T22:06:39.493661494Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Aug 5 22:06:40.017907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1039014273.mount: Deactivated successfully.
Aug 5 22:06:40.844575 containerd[1445]: time="2024-08-05T22:06:40.844524654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:06:40.845507 containerd[1445]: time="2024-08-05T22:06:40.845287494Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Aug 5 22:06:40.846162 containerd[1445]: time="2024-08-05T22:06:40.846123494Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:06:40.849715 containerd[1445]: time="2024-08-05T22:06:40.849675534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:06:40.851593 containerd[1445]: time="2024-08-05T22:06:40.851355814Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.35755528s"
Aug 5 22:06:40.851593 containerd[1445]: time="2024-08-05T22:06:40.851398494Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Aug 5 22:06:40.870662 containerd[1445]: time="2024-08-05T22:06:40.870631494Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Aug 5 22:06:41.296939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3981234574.mount: Deactivated successfully.
Aug 5 22:06:41.301645 containerd[1445]: time="2024-08-05T22:06:41.301510254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:06:41.302692 containerd[1445]: time="2024-08-05T22:06:41.302417574Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Aug 5 22:06:41.303355 containerd[1445]: time="2024-08-05T22:06:41.303321494Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:06:41.306414 containerd[1445]: time="2024-08-05T22:06:41.306378174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:06:41.307141 containerd[1445]: time="2024-08-05T22:06:41.307082934Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 436.32788ms"
Aug 5 22:06:41.307141
containerd[1445]: time="2024-08-05T22:06:41.307119934Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Aug 5 22:06:41.324895 containerd[1445]: time="2024-08-05T22:06:41.324869534Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Aug 5 22:06:41.879333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount869290104.mount: Deactivated successfully.
Aug 5 22:06:44.967609 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 5 22:06:44.980958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:06:45.064002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:06:45.067802 (kubelet)[1995]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:06:45.125435 kubelet[1995]: E0805 22:06:45.125334 1995 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:06:45.127925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:06:45.128080 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:06:45.422681 containerd[1445]: time="2024-08-05T22:06:45.422550254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:06:45.423122 containerd[1445]: time="2024-08-05T22:06:45.423087614Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
Aug 5 22:06:45.424100 containerd[1445]: time="2024-08-05T22:06:45.424065054Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:06:45.427316 containerd[1445]: time="2024-08-05T22:06:45.427257134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:06:45.428649 containerd[1445]: time="2024-08-05T22:06:45.428597654Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.1036956s"
Aug 5 22:06:45.428718 containerd[1445]: time="2024-08-05T22:06:45.428651854Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Aug 5 22:06:49.668191 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:06:49.683815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:06:49.694554 systemd[1]: Reloading requested from client PID 2085 ('systemctl') (unit session-7.scope)...
Aug 5 22:06:49.694571 systemd[1]: Reloading...
Aug 5 22:06:49.762958 zram_generator::config[2122]: No configuration found.
Aug 5 22:06:49.998191 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:06:50.053003 systemd[1]: Reloading finished in 358 ms.
Aug 5 22:06:50.089410 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:06:50.092793 systemd[1]: kubelet.service: Deactivated successfully.
Aug 5 22:06:50.093705 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:06:50.095107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:06:50.187711 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:06:50.191825 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 5 22:06:50.229120 kubelet[2169]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:06:50.229120 kubelet[2169]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 5 22:06:50.229120 kubelet[2169]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:06:50.229463 kubelet[2169]: I0805 22:06:50.229363 2169 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 5 22:06:50.749754 kubelet[2169]: I0805 22:06:50.749713 2169 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Aug 5 22:06:50.749754 kubelet[2169]: I0805 22:06:50.749743 2169 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 5 22:06:50.749966 kubelet[2169]: I0805 22:06:50.749950 2169 server.go:927] "Client rotation is on, will bootstrap in background"
Aug 5 22:06:50.791626 kubelet[2169]: I0805 22:06:50.791585 2169 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 22:06:50.791714 kubelet[2169]: E0805 22:06:50.791692 2169 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.62:6443: connect: connection refused
Aug 5 22:06:50.801844 kubelet[2169]: I0805 22:06:50.801812 2169 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.
defaulting to /"
Aug 5 22:06:50.803076 kubelet[2169]: I0805 22:06:50.803031 2169 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 5 22:06:50.803245 kubelet[2169]: I0805 22:06:50.803076 2169 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 5 22:06:50.803319 kubelet[2169]: I0805 22:06:50.803302 2169 topology_manager.go:138] "Creating topology manager with none policy"
Aug 5 22:06:50.803319
kubelet[2169]: I0805 22:06:50.803310 2169 container_manager_linux.go:301] "Creating device plugin manager"
Aug 5 22:06:50.803633 kubelet[2169]: I0805 22:06:50.803600 2169 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:06:50.808150 kubelet[2169]: I0805 22:06:50.808125 2169 kubelet.go:400] "Attempting to sync node with API server"
Aug 5 22:06:50.808179 kubelet[2169]: I0805 22:06:50.808152 2169 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 5 22:06:50.808344 kubelet[2169]: I0805 22:06:50.808330 2169 kubelet.go:312] "Adding apiserver pod source"
Aug 5 22:06:50.808540 kubelet[2169]: I0805 22:06:50.808526 2169 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 5 22:06:50.809650 kubelet[2169]: W0805 22:06:50.809538 2169 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused
Aug 5 22:06:50.809650 kubelet[2169]: E0805 22:06:50.809591 2169 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused
Aug 5 22:06:50.810324 kubelet[2169]: W0805 22:06:50.810244 2169 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused
Aug 5 22:06:50.810324 kubelet[2169]: E0805 22:06:50.810303 2169 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused
Aug 5 22:06:50.811034
kubelet[2169]: I0805 22:06:50.810949 2169 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Aug 5 22:06:50.811521 kubelet[2169]: I0805 22:06:50.811509 2169 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 5 22:06:50.811892 kubelet[2169]: W0805 22:06:50.811872 2169 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 5 22:06:50.812936 kubelet[2169]: I0805 22:06:50.812908 2169 server.go:1264] "Started kubelet"
Aug 5 22:06:50.813676 kubelet[2169]: I0805 22:06:50.813209 2169 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 5 22:06:50.814552 kubelet[2169]: I0805 22:06:50.814528 2169 server.go:455] "Adding debug handlers to kubelet server"
Aug 5 22:06:50.815126 kubelet[2169]: I0805 22:06:50.815065 2169 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 5 22:06:50.815633 kubelet[2169]: I0805 22:06:50.815598 2169 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 5 22:06:50.815741 kubelet[2169]: E0805 22:06:50.815308 2169 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.62:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.62:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17e8f46e0394edb6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-08-05 22:06:50.812886454 +0000 UTC m=+0.618001841,LastTimestamp:2024-08-05 22:06:50.812886454 +0000 UTC m=+0.618001841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug 5 22:06:50.816507 kubelet[2169]: I0805 22:06:50.816479 2169 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 5 22:06:50.817742 kubelet[2169]: E0805 22:06:50.817704 2169 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:06:50.818798 kubelet[2169]: I0805 22:06:50.817884 2169 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 5 22:06:50.818798 kubelet[2169]: I0805 22:06:50.817968 2169 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Aug 5 22:06:50.818798 kubelet[2169]: I0805 22:06:50.818036 2169 reconciler.go:26] "Reconciler: start to sync state"
Aug 5 22:06:50.818798 kubelet[2169]: W0805 22:06:50.818560 2169 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused
Aug 5 22:06:50.818798 kubelet[2169]: E0805 22:06:50.818609 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="200ms"
Aug 5 22:06:50.819055 kubelet[2169]: E0805 22:06:50.818748 2169 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused
Aug 5 22:06:50.819134 kubelet[2169]: I0805 22:06:50.819112 2169 factory.go:221] Registration of the systemd container factory successfully
Aug 5 22:06:50.819188 kubelet[2169]: E0805 22:06:50.819172 2169 kubelet.go:1467] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 22:06:50.819294 kubelet[2169]: I0805 22:06:50.819275 2169 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 5 22:06:50.820998 kubelet[2169]: I0805 22:06:50.820977 2169 factory.go:221] Registration of the containerd container factory successfully Aug 5 22:06:50.833661 kubelet[2169]: I0805 22:06:50.833280 2169 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 22:06:50.833661 kubelet[2169]: I0805 22:06:50.833298 2169 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 22:06:50.833661 kubelet[2169]: I0805 22:06:50.833314 2169 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:06:50.835741 kubelet[2169]: I0805 22:06:50.835719 2169 policy_none.go:49] "None policy: Start" Aug 5 22:06:50.836217 kubelet[2169]: I0805 22:06:50.836202 2169 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 5 22:06:50.836627 kubelet[2169]: I0805 22:06:50.836297 2169 state_mem.go:35] "Initializing new in-memory state store" Aug 5 22:06:50.836750 kubelet[2169]: I0805 22:06:50.836701 2169 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 22:06:50.837748 kubelet[2169]: I0805 22:06:50.837714 2169 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 5 22:06:50.837823 kubelet[2169]: I0805 22:06:50.837804 2169 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 22:06:50.837823 kubelet[2169]: I0805 22:06:50.837823 2169 kubelet.go:2337] "Starting kubelet main sync loop" Aug 5 22:06:50.837988 kubelet[2169]: E0805 22:06:50.837860 2169 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 22:06:50.838381 kubelet[2169]: W0805 22:06:50.838346 2169 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused Aug 5 22:06:50.838494 kubelet[2169]: E0805 22:06:50.838392 2169 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused Aug 5 22:06:50.842106 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 5 22:06:50.867034 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 5 22:06:50.869865 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 5 22:06:50.880485 kubelet[2169]: I0805 22:06:50.880233 2169 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 5 22:06:50.880485 kubelet[2169]: I0805 22:06:50.880422 2169 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 5 22:06:50.881040 kubelet[2169]: I0805 22:06:50.880532 2169 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 5 22:06:50.881649 kubelet[2169]: E0805 22:06:50.881505 2169 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Aug 5 22:06:50.919998 kubelet[2169]: I0805 22:06:50.919978 2169 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug 5 22:06:50.920508 kubelet[2169]: E0805 22:06:50.920475 2169 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost"
Aug 5 22:06:50.939191 kubelet[2169]: I0805 22:06:50.938561 2169 topology_manager.go:215] "Topology Admit Handler" podUID="de445804273c0aaff376500a430aeffe" podNamespace="kube-system" podName="kube-apiserver-localhost"
Aug 5 22:06:50.939641 kubelet[2169]: I0805 22:06:50.939590 2169 topology_manager.go:215] "Topology Admit Handler" podUID="471a108742c0b3658d07e3bda7ae5d17" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Aug 5 22:06:50.940910 kubelet[2169]: I0805 22:06:50.940872 2169 topology_manager.go:215] "Topology Admit Handler" podUID="3b0306f30b5bc847ed1d56b34a56bbaf" podNamespace="kube-system" podName="kube-scheduler-localhost"
Aug 5 22:06:50.946600 systemd[1]: Created slice kubepods-burstable-podde445804273c0aaff376500a430aeffe.slice - libcontainer container kubepods-burstable-podde445804273c0aaff376500a430aeffe.slice.
Aug 5 22:06:50.960522 systemd[1]: Created slice kubepods-burstable-pod471a108742c0b3658d07e3bda7ae5d17.slice - libcontainer container kubepods-burstable-pod471a108742c0b3658d07e3bda7ae5d17.slice.
Aug 5 22:06:50.969793 systemd[1]: Created slice kubepods-burstable-pod3b0306f30b5bc847ed1d56b34a56bbaf.slice - libcontainer container kubepods-burstable-pod3b0306f30b5bc847ed1d56b34a56bbaf.slice.
Aug 5 22:06:51.019440 kubelet[2169]: I0805 22:06:51.019371 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de445804273c0aaff376500a430aeffe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"de445804273c0aaff376500a430aeffe\") " pod="kube-system/kube-apiserver-localhost"
Aug 5 22:06:51.019440 kubelet[2169]: I0805 22:06:51.019405 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de445804273c0aaff376500a430aeffe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"de445804273c0aaff376500a430aeffe\") " pod="kube-system/kube-apiserver-localhost"
Aug 5 22:06:51.019440 kubelet[2169]: I0805 22:06:51.019423 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de445804273c0aaff376500a430aeffe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"de445804273c0aaff376500a430aeffe\") " pod="kube-system/kube-apiserver-localhost"
Aug 5 22:06:51.019440 kubelet[2169]: I0805 22:06:51.019441 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:06:51.019612 kubelet[2169]: I0805 22:06:51.019457 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:06:51.019612 kubelet[2169]: I0805 22:06:51.019471 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:06:51.019612 kubelet[2169]: I0805 22:06:51.019485 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:06:51.019612 kubelet[2169]: I0805 22:06:51.019498 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:06:51.019612 kubelet[2169]: I0805 22:06:51.019513 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3b0306f30b5bc847ed1d56b34a56bbaf-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3b0306f30b5bc847ed1d56b34a56bbaf\") " pod="kube-system/kube-scheduler-localhost"
Aug 5 22:06:51.019739 kubelet[2169]: E0805 22:06:51.019534 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="400ms"
Aug 5 22:06:51.122256 kubelet[2169]: I0805 22:06:51.122234 2169 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug 5 22:06:51.122535 kubelet[2169]: E0805 22:06:51.122510 2169 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost"
Aug 5 22:06:51.259332 kubelet[2169]: E0805 22:06:51.259305 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:06:51.259823 containerd[1445]: time="2024-08-05T22:06:51.259781814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:de445804273c0aaff376500a430aeffe,Namespace:kube-system,Attempt:0,}"
Aug 5 22:06:51.268210 kubelet[2169]: E0805 22:06:51.268104 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:06:51.268487 containerd[1445]: time="2024-08-05T22:06:51.268444214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:471a108742c0b3658d07e3bda7ae5d17,Namespace:kube-system,Attempt:0,}"
Aug 5 22:06:51.272002 kubelet[2169]: E0805 22:06:51.271738 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:06:51.272078 containerd[1445]: time="2024-08-05T22:06:51.272031534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3b0306f30b5bc847ed1d56b34a56bbaf,Namespace:kube-system,Attempt:0,}"
Aug 5 22:06:51.420540 kubelet[2169]: E0805 22:06:51.420491 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="800ms"
Aug 5 22:06:51.524407 kubelet[2169]: I0805 22:06:51.524192 2169 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug 5 22:06:51.524966 kubelet[2169]: E0805 22:06:51.524924 2169 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost"
Aug 5 22:06:51.688890 kubelet[2169]: W0805 22:06:51.688843 2169 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused
Aug 5 22:06:51.688890 kubelet[2169]: E0805 22:06:51.688885 2169 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused
Aug 5 22:06:51.750483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount830731242.mount: Deactivated successfully.
Aug 5 22:06:51.755836 containerd[1445]: time="2024-08-05T22:06:51.755746214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:06:51.757488 containerd[1445]: time="2024-08-05T22:06:51.757451374Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 5 22:06:51.758302 containerd[1445]: time="2024-08-05T22:06:51.758254134Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:06:51.760034 containerd[1445]: time="2024-08-05T22:06:51.759994094Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:06:51.760121 containerd[1445]: time="2024-08-05T22:06:51.760089694Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Aug 5 22:06:51.760680 containerd[1445]: time="2024-08-05T22:06:51.760652774Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 5 22:06:51.760913 containerd[1445]: time="2024-08-05T22:06:51.760878214Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:06:51.762895 containerd[1445]: time="2024-08-05T22:06:51.762863294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:06:51.764746 containerd[1445]: time="2024-08-05T22:06:51.764706694Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 496.1864ms"
Aug 5 22:06:51.771189 containerd[1445]: time="2024-08-05T22:06:51.770980294Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 498.88772ms"
Aug 5 22:06:51.771642 containerd[1445]: time="2024-08-05T22:06:51.771576614Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 511.69676ms"
Aug 5 22:06:51.838931 kubelet[2169]: W0805 22:06:51.833746 2169 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused
Aug 5 22:06:51.838931 kubelet[2169]: E0805 22:06:51.833807 2169 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused
Aug 5 22:06:51.955742 containerd[1445]: time="2024-08-05T22:06:51.955588174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:06:51.955890 containerd[1445]: time="2024-08-05T22:06:51.955724374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:06:51.955890 containerd[1445]: time="2024-08-05T22:06:51.955752374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:06:51.955890 containerd[1445]: time="2024-08-05T22:06:51.955768134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:06:51.956753 containerd[1445]: time="2024-08-05T22:06:51.956673934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:06:51.956753 containerd[1445]: time="2024-08-05T22:06:51.956720654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:06:51.956883 containerd[1445]: time="2024-08-05T22:06:51.956740854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:06:51.956883 containerd[1445]: time="2024-08-05T22:06:51.956754534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:06:51.957223 kubelet[2169]: W0805 22:06:51.957174 2169 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused
Aug 5 22:06:51.957269 kubelet[2169]: E0805 22:06:51.957233 2169 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused
Aug 5 22:06:51.958815 containerd[1445]: time="2024-08-05T22:06:51.958736654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:06:51.958815 containerd[1445]: time="2024-08-05T22:06:51.958791574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:06:51.959087 containerd[1445]: time="2024-08-05T22:06:51.958811694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:06:51.959087 containerd[1445]: time="2024-08-05T22:06:51.958828374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:06:51.978836 systemd[1]: Started cri-containerd-51c71ecfcdbf3e63a31de9018d1a73edc16499507fb53d4e6bb2a641723ae093.scope - libcontainer container 51c71ecfcdbf3e63a31de9018d1a73edc16499507fb53d4e6bb2a641723ae093.
Aug 5 22:06:51.979898 systemd[1]: Started cri-containerd-819e22e52b55ac060f173cb019ae3bd946857332be7483a868f376764de4fcd7.scope - libcontainer container 819e22e52b55ac060f173cb019ae3bd946857332be7483a868f376764de4fcd7.
Aug 5 22:06:51.980898 systemd[1]: Started cri-containerd-dca1dc40f4b5a3b2b5866e34d8656a0e2e83f87eeee7c083155d0dd487a5f5ef.scope - libcontainer container dca1dc40f4b5a3b2b5866e34d8656a0e2e83f87eeee7c083155d0dd487a5f5ef.
Aug 5 22:06:52.017929 containerd[1445]: time="2024-08-05T22:06:52.014185934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:de445804273c0aaff376500a430aeffe,Namespace:kube-system,Attempt:0,} returns sandbox id \"51c71ecfcdbf3e63a31de9018d1a73edc16499507fb53d4e6bb2a641723ae093\""
Aug 5 22:06:52.019711 kubelet[2169]: E0805 22:06:52.019682 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:06:52.021767 containerd[1445]: time="2024-08-05T22:06:52.021685254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3b0306f30b5bc847ed1d56b34a56bbaf,Namespace:kube-system,Attempt:0,} returns sandbox id \"dca1dc40f4b5a3b2b5866e34d8656a0e2e83f87eeee7c083155d0dd487a5f5ef\""
Aug 5 22:06:52.022122 kubelet[2169]: E0805 22:06:52.022101 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:06:52.023254 containerd[1445]: time="2024-08-05T22:06:52.023194294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:471a108742c0b3658d07e3bda7ae5d17,Namespace:kube-system,Attempt:0,} returns sandbox id \"819e22e52b55ac060f173cb019ae3bd946857332be7483a868f376764de4fcd7\""
Aug 5 22:06:52.023877 containerd[1445]: time="2024-08-05T22:06:52.023724014Z" level=info msg="CreateContainer within sandbox \"51c71ecfcdbf3e63a31de9018d1a73edc16499507fb53d4e6bb2a641723ae093\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug 5 22:06:52.024001 kubelet[2169]: E0805 22:06:52.023752 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:06:52.025119 containerd[1445]: time="2024-08-05T22:06:52.025091414Z" level=info msg="CreateContainer within sandbox \"dca1dc40f4b5a3b2b5866e34d8656a0e2e83f87eeee7c083155d0dd487a5f5ef\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 5 22:06:52.026215 containerd[1445]: time="2024-08-05T22:06:52.026182574Z" level=info msg="CreateContainer within sandbox \"819e22e52b55ac060f173cb019ae3bd946857332be7483a868f376764de4fcd7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 5 22:06:52.038340 containerd[1445]: time="2024-08-05T22:06:52.038298894Z" level=info msg="CreateContainer within sandbox \"51c71ecfcdbf3e63a31de9018d1a73edc16499507fb53d4e6bb2a641723ae093\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9a2a0ede04fac8c88627f791a5e82d9bb72e8b6009053d889dbf9e17630cff2b\""
Aug 5 22:06:52.041642 containerd[1445]: time="2024-08-05T22:06:52.040848814Z" level=info msg="StartContainer for \"9a2a0ede04fac8c88627f791a5e82d9bb72e8b6009053d889dbf9e17630cff2b\""
Aug 5 22:06:52.043835 containerd[1445]: time="2024-08-05T22:06:52.043791654Z" level=info msg="CreateContainer within sandbox \"dca1dc40f4b5a3b2b5866e34d8656a0e2e83f87eeee7c083155d0dd487a5f5ef\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"33c78f623835d18c3e86d2fa4650468f9a7f950ee983c409f0d1bbecfd6204de\""
Aug 5 22:06:52.044648 containerd[1445]: time="2024-08-05T22:06:52.044227734Z" level=info msg="StartContainer for \"33c78f623835d18c3e86d2fa4650468f9a7f950ee983c409f0d1bbecfd6204de\""
Aug 5 22:06:52.046823 containerd[1445]: time="2024-08-05T22:06:52.046785094Z" level=info msg="CreateContainer within sandbox \"819e22e52b55ac060f173cb019ae3bd946857332be7483a868f376764de4fcd7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f999da7e27e3d16f3cd0d91a6941d63500326e2ac7636a9ea878cc7047177246\""
Aug 5 22:06:52.047232 containerd[1445]: time="2024-08-05T22:06:52.047184614Z" level=info msg="StartContainer for \"f999da7e27e3d16f3cd0d91a6941d63500326e2ac7636a9ea878cc7047177246\""
Aug 5 22:06:52.052663 kubelet[2169]: W0805 22:06:52.052362 2169 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused
Aug 5 22:06:52.052663 kubelet[2169]: E0805 22:06:52.052423 2169 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused
Aug 5 22:06:52.067776 systemd[1]: Started cri-containerd-9a2a0ede04fac8c88627f791a5e82d9bb72e8b6009053d889dbf9e17630cff2b.scope - libcontainer container 9a2a0ede04fac8c88627f791a5e82d9bb72e8b6009053d889dbf9e17630cff2b.
Aug 5 22:06:52.072328 systemd[1]: Started cri-containerd-33c78f623835d18c3e86d2fa4650468f9a7f950ee983c409f0d1bbecfd6204de.scope - libcontainer container 33c78f623835d18c3e86d2fa4650468f9a7f950ee983c409f0d1bbecfd6204de.
Aug 5 22:06:52.073203 systemd[1]: Started cri-containerd-f999da7e27e3d16f3cd0d91a6941d63500326e2ac7636a9ea878cc7047177246.scope - libcontainer container f999da7e27e3d16f3cd0d91a6941d63500326e2ac7636a9ea878cc7047177246.
Aug 5 22:06:52.105015 containerd[1445]: time="2024-08-05T22:06:52.104808654Z" level=info msg="StartContainer for \"9a2a0ede04fac8c88627f791a5e82d9bb72e8b6009053d889dbf9e17630cff2b\" returns successfully"
Aug 5 22:06:52.114248 containerd[1445]: time="2024-08-05T22:06:52.114189374Z" level=info msg="StartContainer for \"f999da7e27e3d16f3cd0d91a6941d63500326e2ac7636a9ea878cc7047177246\" returns successfully"
Aug 5 22:06:52.123747 containerd[1445]: time="2024-08-05T22:06:52.123711174Z" level=info msg="StartContainer for \"33c78f623835d18c3e86d2fa4650468f9a7f950ee983c409f0d1bbecfd6204de\" returns successfully"
Aug 5 22:06:52.221697 kubelet[2169]: E0805 22:06:52.221636 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="1.6s"
Aug 5 22:06:52.326432 kubelet[2169]: I0805 22:06:52.326396 2169 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug 5 22:06:52.855172 kubelet[2169]: E0805 22:06:52.855139 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:06:52.855703 kubelet[2169]: E0805 22:06:52.855685 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:06:52.857780 kubelet[2169]: E0805 22:06:52.857716 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:06:53.764922 kubelet[2169]: I0805 22:06:53.764813 2169 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Aug 5 22:06:53.778132 kubelet[2169]: E0805 22:06:53.778077 2169 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:06:53.859484 kubelet[2169]: E0805 22:06:53.859397 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:06:53.879218 kubelet[2169]: E0805 22:06:53.879184 2169 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:06:53.980076 kubelet[2169]: E0805 22:06:53.980031 2169 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:06:54.080950 kubelet[2169]: E0805 22:06:54.080550 2169 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:06:54.181099 kubelet[2169]: E0805 22:06:54.181047 2169 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:06:54.281636 kubelet[2169]: E0805 22:06:54.281590 2169 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:06:54.810333 kubelet[2169]: I0805 22:06:54.810301 2169 apiserver.go:52] "Watching apiserver"
Aug 5 22:06:54.818805 kubelet[2169]: I0805 22:06:54.818749 2169 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Aug 5 22:06:55.607122 kubelet[2169]: E0805 22:06:55.607094 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:06:55.664314 systemd[1]: Reloading requested from client PID 2442 ('systemctl') (unit session-7.scope)...
Aug 5 22:06:55.664327 systemd[1]: Reloading...
Aug 5 22:06:55.719649 zram_generator::config[2480]: No configuration found.
Aug 5 22:06:55.799854 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:06:55.860945 kubelet[2169]: E0805 22:06:55.860810 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:06:55.864401 systemd[1]: Reloading finished in 199 ms.
Aug 5 22:06:55.894377 kubelet[2169]: I0805 22:06:55.894290 2169 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 22:06:55.894475 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:06:55.906544 systemd[1]: kubelet.service: Deactivated successfully.
Aug 5 22:06:55.906791 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:06:55.915896 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:06:56.007439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:06:56.011273 (kubelet)[2521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 5 22:06:56.046084 kubelet[2521]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:06:56.046357 kubelet[2521]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 5 22:06:56.046400 kubelet[2521]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:06:56.046516 kubelet[2521]: I0805 22:06:56.046488 2521 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 5 22:06:56.050966 kubelet[2521]: I0805 22:06:56.050931 2521 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Aug 5 22:06:56.050966 kubelet[2521]: I0805 22:06:56.050952 2521 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 5 22:06:56.051452 kubelet[2521]: I0805 22:06:56.051103 2521 server.go:927] "Client rotation is on, will bootstrap in background"
Aug 5 22:06:56.052383 kubelet[2521]: I0805 22:06:56.052352 2521 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 5 22:06:56.053813 kubelet[2521]: I0805 22:06:56.053507 2521 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 22:06:56.059481 kubelet[2521]: I0805 22:06:56.059462 2521 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 5 22:06:56.059684 kubelet[2521]: I0805 22:06:56.059660 2521 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 5 22:06:56.059829 kubelet[2521]: I0805 22:06:56.059685 2521 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 5 22:06:56.059918 kubelet[2521]: I0805 22:06:56.059835 2521 topology_manager.go:138] "Creating topology manager with none policy"
Aug 5 22:06:56.059918
kubelet[2521]: I0805 22:06:56.059843 2521 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 22:06:56.059918 kubelet[2521]: I0805 22:06:56.059868 2521 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:06:56.059981 kubelet[2521]: I0805 22:06:56.059974 2521 kubelet.go:400] "Attempting to sync node with API server" Aug 5 22:06:56.060000 kubelet[2521]: I0805 22:06:56.059987 2521 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 22:06:56.060663 kubelet[2521]: I0805 22:06:56.060012 2521 kubelet.go:312] "Adding apiserver pod source" Aug 5 22:06:56.060698 kubelet[2521]: I0805 22:06:56.060679 2521 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 22:06:56.061314 kubelet[2521]: I0805 22:06:56.061081 2521 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Aug 5 22:06:56.061314 kubelet[2521]: I0805 22:06:56.061213 2521 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 5 22:06:56.061556 kubelet[2521]: I0805 22:06:56.061524 2521 server.go:1264] "Started kubelet" Aug 5 22:06:56.062594 kubelet[2521]: I0805 22:06:56.062514 2521 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 5 22:06:56.062797 kubelet[2521]: I0805 22:06:56.062775 2521 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 22:06:56.062986 kubelet[2521]: I0805 22:06:56.062972 2521 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 22:06:56.063102 kubelet[2521]: I0805 22:06:56.063085 2521 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 22:06:56.064487 kubelet[2521]: I0805 22:06:56.064403 2521 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 22:06:56.064487 kubelet[2521]: I0805 22:06:56.064478 2521 
desired_state_of_world_populator.go:149] "Desired state populator starts to run" Aug 5 22:06:56.064611 kubelet[2521]: I0805 22:06:56.064594 2521 reconciler.go:26] "Reconciler: start to sync state" Aug 5 22:06:56.064731 kubelet[2521]: I0805 22:06:56.064715 2521 server.go:455] "Adding debug handlers to kubelet server" Aug 5 22:06:56.065870 kubelet[2521]: E0805 22:06:56.065835 2521 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 22:06:56.066281 kubelet[2521]: I0805 22:06:56.066264 2521 factory.go:221] Registration of the systemd container factory successfully Aug 5 22:06:56.066436 kubelet[2521]: I0805 22:06:56.066419 2521 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 5 22:06:56.067269 kubelet[2521]: I0805 22:06:56.067247 2521 factory.go:221] Registration of the containerd container factory successfully Aug 5 22:06:56.079194 kubelet[2521]: I0805 22:06:56.078959 2521 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 22:06:56.083559 kubelet[2521]: I0805 22:06:56.083535 2521 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 5 22:06:56.083688 kubelet[2521]: I0805 22:06:56.083671 2521 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 22:06:56.083751 kubelet[2521]: I0805 22:06:56.083743 2521 kubelet.go:2337] "Starting kubelet main sync loop" Aug 5 22:06:56.083843 kubelet[2521]: E0805 22:06:56.083827 2521 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 22:06:56.115191 kubelet[2521]: I0805 22:06:56.114570 2521 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 22:06:56.115191 kubelet[2521]: I0805 22:06:56.114587 2521 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 22:06:56.115191 kubelet[2521]: I0805 22:06:56.114630 2521 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:06:56.115191 kubelet[2521]: I0805 22:06:56.114763 2521 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 5 22:06:56.115191 kubelet[2521]: I0805 22:06:56.114776 2521 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 5 22:06:56.115191 kubelet[2521]: I0805 22:06:56.114793 2521 policy_none.go:49] "None policy: Start" Aug 5 22:06:56.116067 kubelet[2521]: I0805 22:06:56.116012 2521 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 5 22:06:56.116067 kubelet[2521]: I0805 22:06:56.116037 2521 state_mem.go:35] "Initializing new in-memory state store" Aug 5 22:06:56.116232 kubelet[2521]: I0805 22:06:56.116156 2521 state_mem.go:75] "Updated machine memory state" Aug 5 22:06:56.119703 kubelet[2521]: I0805 22:06:56.119684 2521 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 22:06:56.119920 kubelet[2521]: I0805 22:06:56.119825 2521 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 5 22:06:56.119955 kubelet[2521]: I0805 22:06:56.119933 2521 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 22:06:56.168188 kubelet[2521]: I0805 22:06:56.168169 2521 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Aug 5 22:06:56.174645 kubelet[2521]: I0805 22:06:56.174054 2521 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Aug 5 22:06:56.174645 kubelet[2521]: I0805 22:06:56.174113 2521 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Aug 5 22:06:56.184637 kubelet[2521]: I0805 22:06:56.184489 2521 topology_manager.go:215] "Topology Admit Handler" podUID="3b0306f30b5bc847ed1d56b34a56bbaf" podNamespace="kube-system" podName="kube-scheduler-localhost" Aug 5 22:06:56.184637 kubelet[2521]: I0805 22:06:56.184628 2521 topology_manager.go:215] "Topology Admit Handler" podUID="de445804273c0aaff376500a430aeffe" podNamespace="kube-system" podName="kube-apiserver-localhost" Aug 5 22:06:56.184738 kubelet[2521]: I0805 22:06:56.184664 2521 topology_manager.go:215] "Topology Admit Handler" podUID="471a108742c0b3658d07e3bda7ae5d17" podNamespace="kube-system" podName="kube-controller-manager-localhost" Aug 5 22:06:56.191356 kubelet[2521]: E0805 22:06:56.191223 2521 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 5 22:06:56.365706 kubelet[2521]: I0805 22:06:56.365608 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:06:56.365706 kubelet[2521]: I0805 22:06:56.365671 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:06:56.365815 kubelet[2521]: I0805 22:06:56.365715 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:06:56.365815 kubelet[2521]: I0805 22:06:56.365753 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:06:56.365815 kubelet[2521]: I0805 22:06:56.365789 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3b0306f30b5bc847ed1d56b34a56bbaf-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3b0306f30b5bc847ed1d56b34a56bbaf\") " pod="kube-system/kube-scheduler-localhost" Aug 5 22:06:56.365885 kubelet[2521]: I0805 22:06:56.365820 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de445804273c0aaff376500a430aeffe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"de445804273c0aaff376500a430aeffe\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:06:56.365885 kubelet[2521]: I0805 22:06:56.365858 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/de445804273c0aaff376500a430aeffe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"de445804273c0aaff376500a430aeffe\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:06:56.365932 kubelet[2521]: I0805 22:06:56.365890 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de445804273c0aaff376500a430aeffe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"de445804273c0aaff376500a430aeffe\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:06:56.365932 kubelet[2521]: I0805 22:06:56.365915 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:06:56.489306 kubelet[2521]: E0805 22:06:56.489202 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:06:56.489598 kubelet[2521]: E0805 22:06:56.489564 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:06:56.491820 kubelet[2521]: E0805 22:06:56.491782 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:06:57.061159 kubelet[2521]: I0805 22:06:57.061064 2521 apiserver.go:52] "Watching apiserver" Aug 5 22:06:57.065292 kubelet[2521]: I0805 22:06:57.065264 2521 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Aug 5 22:06:57.103346 
kubelet[2521]: E0805 22:06:57.103323 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:06:57.103992 kubelet[2521]: E0805 22:06:57.103952 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:06:57.104651 kubelet[2521]: E0805 22:06:57.104502 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:06:57.118471 kubelet[2521]: I0805 22:06:57.118371 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.118358948 podStartE2EDuration="1.118358948s" podCreationTimestamp="2024-08-05 22:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:06:57.11783739 +0000 UTC m=+1.103496414" watchObservedRunningTime="2024-08-05 22:06:57.118358948 +0000 UTC m=+1.104018012" Aug 5 22:06:57.129548 kubelet[2521]: I0805 22:06:57.129507 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.129497355 podStartE2EDuration="2.129497355s" podCreationTimestamp="2024-08-05 22:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:06:57.123862977 +0000 UTC m=+1.109522041" watchObservedRunningTime="2024-08-05 22:06:57.129497355 +0000 UTC m=+1.115156419" Aug 5 22:06:57.136975 kubelet[2521]: I0805 22:06:57.136935 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" 
podStartSLOduration=1.136925307 podStartE2EDuration="1.136925307s" podCreationTimestamp="2024-08-05 22:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:06:57.129624705 +0000 UTC m=+1.115283809" watchObservedRunningTime="2024-08-05 22:06:57.136925307 +0000 UTC m=+1.122584371" Aug 5 22:06:58.104908 kubelet[2521]: E0805 22:06:58.104878 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:06:58.116676 kubelet[2521]: E0805 22:06:58.116357 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:06:59.411992 kubelet[2521]: E0805 22:06:59.411893 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:00.699771 sudo[1620]: pam_unix(sudo:session): session closed for user root Aug 5 22:07:00.701399 sshd[1617]: pam_unix(sshd:session): session closed for user core Aug 5 22:07:00.704690 systemd[1]: sshd@6-10.0.0.62:22-10.0.0.1:34526.service: Deactivated successfully. Aug 5 22:07:00.706585 systemd[1]: session-7.scope: Deactivated successfully. Aug 5 22:07:00.707025 systemd[1]: session-7.scope: Consumed 6.241s CPU time, 137.1M memory peak, 0B memory swap peak. Aug 5 22:07:00.708772 systemd-logind[1423]: Session 7 logged out. Waiting for processes to exit. Aug 5 22:07:00.709780 systemd-logind[1423]: Removed session 7. 
Aug 5 22:07:01.303125 kubelet[2521]: E0805 22:07:01.302940 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:07.695297 update_engine[1430]: I0805 22:07:07.694669 1430 update_attempter.cc:509] Updating boot flags... Aug 5 22:07:07.716649 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2620) Aug 5 22:07:07.743740 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2618) Aug 5 22:07:08.123360 kubelet[2521]: E0805 22:07:08.123263 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:09.422439 kubelet[2521]: E0805 22:07:09.422396 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:10.504197 kubelet[2521]: I0805 22:07:10.504158 2521 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 5 22:07:10.505077 containerd[1445]: time="2024-08-05T22:07:10.505034940Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 5 22:07:10.505344 kubelet[2521]: I0805 22:07:10.505209 2521 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 5 22:07:10.801302 kubelet[2521]: I0805 22:07:10.800737 2521 topology_manager.go:215] "Topology Admit Handler" podUID="cba6edc9-cc71-4ff9-8a51-a6b87d0b5622" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-wrc2l" Aug 5 22:07:10.809598 systemd[1]: Created slice kubepods-besteffort-podcba6edc9_cc71_4ff9_8a51_a6b87d0b5622.slice - libcontainer container kubepods-besteffort-podcba6edc9_cc71_4ff9_8a51_a6b87d0b5622.slice. Aug 5 22:07:10.963881 kubelet[2521]: I0805 22:07:10.963812 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cba6edc9-cc71-4ff9-8a51-a6b87d0b5622-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-wrc2l\" (UID: \"cba6edc9-cc71-4ff9-8a51-a6b87d0b5622\") " pod="tigera-operator/tigera-operator-76ff79f7fd-wrc2l" Aug 5 22:07:10.964079 kubelet[2521]: I0805 22:07:10.963895 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk97b\" (UniqueName: \"kubernetes.io/projected/cba6edc9-cc71-4ff9-8a51-a6b87d0b5622-kube-api-access-rk97b\") pod \"tigera-operator-76ff79f7fd-wrc2l\" (UID: \"cba6edc9-cc71-4ff9-8a51-a6b87d0b5622\") " pod="tigera-operator/tigera-operator-76ff79f7fd-wrc2l" Aug 5 22:07:11.071771 kubelet[2521]: E0805 22:07:11.071374 2521 projected.go:294] Couldn't get configMap tigera-operator/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 5 22:07:11.071771 kubelet[2521]: E0805 22:07:11.071401 2521 projected.go:200] Error preparing data for projected volume kube-api-access-rk97b for pod tigera-operator/tigera-operator-76ff79f7fd-wrc2l: configmap "kube-root-ca.crt" not found Aug 5 22:07:11.071771 kubelet[2521]: E0805 22:07:11.071491 2521 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/cba6edc9-cc71-4ff9-8a51-a6b87d0b5622-kube-api-access-rk97b podName:cba6edc9-cc71-4ff9-8a51-a6b87d0b5622 nodeName:}" failed. No retries permitted until 2024-08-05 22:07:11.571439531 +0000 UTC m=+15.557098595 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rk97b" (UniqueName: "kubernetes.io/projected/cba6edc9-cc71-4ff9-8a51-a6b87d0b5622-kube-api-access-rk97b") pod "tigera-operator-76ff79f7fd-wrc2l" (UID: "cba6edc9-cc71-4ff9-8a51-a6b87d0b5622") : configmap "kube-root-ca.crt" not found Aug 5 22:07:11.122119 kubelet[2521]: I0805 22:07:11.122080 2521 topology_manager.go:215] "Topology Admit Handler" podUID="e40a5a7c-bf76-4170-b8d1-6f8817f499d5" podNamespace="kube-system" podName="kube-proxy-559m9" Aug 5 22:07:11.132967 systemd[1]: Created slice kubepods-besteffort-pode40a5a7c_bf76_4170_b8d1_6f8817f499d5.slice - libcontainer container kubepods-besteffort-pode40a5a7c_bf76_4170_b8d1_6f8817f499d5.slice. Aug 5 22:07:11.265003 kubelet[2521]: I0805 22:07:11.264904 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e40a5a7c-bf76-4170-b8d1-6f8817f499d5-lib-modules\") pod \"kube-proxy-559m9\" (UID: \"e40a5a7c-bf76-4170-b8d1-6f8817f499d5\") " pod="kube-system/kube-proxy-559m9" Aug 5 22:07:11.265003 kubelet[2521]: I0805 22:07:11.264946 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75275\" (UniqueName: \"kubernetes.io/projected/e40a5a7c-bf76-4170-b8d1-6f8817f499d5-kube-api-access-75275\") pod \"kube-proxy-559m9\" (UID: \"e40a5a7c-bf76-4170-b8d1-6f8817f499d5\") " pod="kube-system/kube-proxy-559m9" Aug 5 22:07:11.265003 kubelet[2521]: I0805 22:07:11.264969 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/e40a5a7c-bf76-4170-b8d1-6f8817f499d5-xtables-lock\") pod \"kube-proxy-559m9\" (UID: \"e40a5a7c-bf76-4170-b8d1-6f8817f499d5\") " pod="kube-system/kube-proxy-559m9" Aug 5 22:07:11.265227 kubelet[2521]: I0805 22:07:11.265014 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e40a5a7c-bf76-4170-b8d1-6f8817f499d5-kube-proxy\") pod \"kube-proxy-559m9\" (UID: \"e40a5a7c-bf76-4170-b8d1-6f8817f499d5\") " pod="kube-system/kube-proxy-559m9" Aug 5 22:07:11.311303 kubelet[2521]: E0805 22:07:11.311258 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:11.435802 kubelet[2521]: E0805 22:07:11.435702 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:11.436396 containerd[1445]: time="2024-08-05T22:07:11.436359585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-559m9,Uid:e40a5a7c-bf76-4170-b8d1-6f8817f499d5,Namespace:kube-system,Attempt:0,}" Aug 5 22:07:11.454674 containerd[1445]: time="2024-08-05T22:07:11.454554181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:07:11.454783 containerd[1445]: time="2024-08-05T22:07:11.454607700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:07:11.454860 containerd[1445]: time="2024-08-05T22:07:11.454782174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:07:11.455279 containerd[1445]: time="2024-08-05T22:07:11.455230199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:07:11.475789 systemd[1]: Started cri-containerd-6d03fa7234930846f98fc6c963eba24f84531f82c2730a8aea72c26881f5e811.scope - libcontainer container 6d03fa7234930846f98fc6c963eba24f84531f82c2730a8aea72c26881f5e811. Aug 5 22:07:11.494636 containerd[1445]: time="2024-08-05T22:07:11.494579254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-559m9,Uid:e40a5a7c-bf76-4170-b8d1-6f8817f499d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d03fa7234930846f98fc6c963eba24f84531f82c2730a8aea72c26881f5e811\"" Aug 5 22:07:11.495295 kubelet[2521]: E0805 22:07:11.495271 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:11.497442 containerd[1445]: time="2024-08-05T22:07:11.497307003Z" level=info msg="CreateContainer within sandbox \"6d03fa7234930846f98fc6c963eba24f84531f82c2730a8aea72c26881f5e811\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 5 22:07:11.509829 containerd[1445]: time="2024-08-05T22:07:11.509721911Z" level=info msg="CreateContainer within sandbox \"6d03fa7234930846f98fc6c963eba24f84531f82c2730a8aea72c26881f5e811\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"735c20cf2b42562a51933cc327e1c8216d66fff45399db11084c3a5613fdbd81\"" Aug 5 22:07:11.510646 containerd[1445]: time="2024-08-05T22:07:11.510371610Z" level=info msg="StartContainer for \"735c20cf2b42562a51933cc327e1c8216d66fff45399db11084c3a5613fdbd81\"" Aug 5 22:07:11.538794 systemd[1]: Started cri-containerd-735c20cf2b42562a51933cc327e1c8216d66fff45399db11084c3a5613fdbd81.scope - libcontainer container 
735c20cf2b42562a51933cc327e1c8216d66fff45399db11084c3a5613fdbd81. Aug 5 22:07:11.560869 containerd[1445]: time="2024-08-05T22:07:11.560829216Z" level=info msg="StartContainer for \"735c20cf2b42562a51933cc327e1c8216d66fff45399db11084c3a5613fdbd81\" returns successfully" Aug 5 22:07:11.730642 containerd[1445]: time="2024-08-05T22:07:11.730402631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-wrc2l,Uid:cba6edc9-cc71-4ff9-8a51-a6b87d0b5622,Namespace:tigera-operator,Attempt:0,}" Aug 5 22:07:11.749240 containerd[1445]: time="2024-08-05T22:07:11.749152369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:07:11.749240 containerd[1445]: time="2024-08-05T22:07:11.749217606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:07:11.749240 containerd[1445]: time="2024-08-05T22:07:11.749231126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:07:11.749240 containerd[1445]: time="2024-08-05T22:07:11.749240406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:07:11.771837 systemd[1]: Started cri-containerd-b16150e652501b8865b01f114b55465ac25791a4231d1c1bc7c7d6b812d507f3.scope - libcontainer container b16150e652501b8865b01f114b55465ac25791a4231d1c1bc7c7d6b812d507f3. 
Aug 5 22:07:11.804026 containerd[1445]: time="2024-08-05T22:07:11.803984950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-wrc2l,Uid:cba6edc9-cc71-4ff9-8a51-a6b87d0b5622,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b16150e652501b8865b01f114b55465ac25791a4231d1c1bc7c7d6b812d507f3\"" Aug 5 22:07:11.806326 containerd[1445]: time="2024-08-05T22:07:11.805671854Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Aug 5 22:07:12.130009 kubelet[2521]: E0805 22:07:12.129820 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:12.130009 kubelet[2521]: E0805 22:07:12.129920 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:12.794588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount861598658.mount: Deactivated successfully. 
Aug 5 22:07:14.178597 containerd[1445]: time="2024-08-05T22:07:14.177833443Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:07:14.178597 containerd[1445]: time="2024-08-05T22:07:14.178211033Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473646" Aug 5 22:07:14.179152 containerd[1445]: time="2024-08-05T22:07:14.179116568Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:07:14.183157 containerd[1445]: time="2024-08-05T22:07:14.183098659Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:07:14.184589 containerd[1445]: time="2024-08-05T22:07:14.184540020Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 2.378830527s" Aug 5 22:07:14.184589 containerd[1445]: time="2024-08-05T22:07:14.184587538Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Aug 5 22:07:14.190634 containerd[1445]: time="2024-08-05T22:07:14.189279170Z" level=info msg="CreateContainer within sandbox \"b16150e652501b8865b01f114b55465ac25791a4231d1c1bc7c7d6b812d507f3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 5 22:07:14.201381 containerd[1445]: time="2024-08-05T22:07:14.201330281Z" level=info msg="CreateContainer within sandbox 
\"b16150e652501b8865b01f114b55465ac25791a4231d1c1bc7c7d6b812d507f3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2e60533170b1b638670843223671a1e772b1982f2a2fafdc51df79677fc5d6b6\"" Aug 5 22:07:14.202312 containerd[1445]: time="2024-08-05T22:07:14.201934784Z" level=info msg="StartContainer for \"2e60533170b1b638670843223671a1e772b1982f2a2fafdc51df79677fc5d6b6\"" Aug 5 22:07:14.225790 systemd[1]: Started cri-containerd-2e60533170b1b638670843223671a1e772b1982f2a2fafdc51df79677fc5d6b6.scope - libcontainer container 2e60533170b1b638670843223671a1e772b1982f2a2fafdc51df79677fc5d6b6. Aug 5 22:07:14.259095 containerd[1445]: time="2024-08-05T22:07:14.258980025Z" level=info msg="StartContainer for \"2e60533170b1b638670843223671a1e772b1982f2a2fafdc51df79677fc5d6b6\" returns successfully" Aug 5 22:07:15.149993 kubelet[2521]: I0805 22:07:15.149688 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-559m9" podStartSLOduration=4.149669573 podStartE2EDuration="4.149669573s" podCreationTimestamp="2024-08-05 22:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:07:12.139773138 +0000 UTC m=+16.125432202" watchObservedRunningTime="2024-08-05 22:07:15.149669573 +0000 UTC m=+19.135328637" Aug 5 22:07:18.319323 kubelet[2521]: I0805 22:07:18.319256 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-wrc2l" podStartSLOduration=5.938355144 podStartE2EDuration="8.319235452s" podCreationTimestamp="2024-08-05 22:07:10 +0000 UTC" firstStartedPulling="2024-08-05 22:07:11.805207629 +0000 UTC m=+15.790866693" lastFinishedPulling="2024-08-05 22:07:14.186087937 +0000 UTC m=+18.171747001" observedRunningTime="2024-08-05 22:07:15.149886207 +0000 UTC m=+19.135545271" watchObservedRunningTime="2024-08-05 22:07:18.319235452 +0000 UTC m=+22.304894516" 
Aug 5 22:07:18.328905 kubelet[2521]: I0805 22:07:18.328857 2521 topology_manager.go:215] "Topology Admit Handler" podUID="d0a59f4a-2bcb-461a-b8a5-5b73b5b0c18b" podNamespace="calico-system" podName="calico-typha-66d56f5885-qvw7h"
Aug 5 22:07:18.341221 systemd[1]: Created slice kubepods-besteffort-podd0a59f4a_2bcb_461a_b8a5_5b73b5b0c18b.slice - libcontainer container kubepods-besteffort-podd0a59f4a_2bcb_461a_b8a5_5b73b5b0c18b.slice.
Aug 5 22:07:18.362530 kubelet[2521]: I0805 22:07:18.361836 2521 topology_manager.go:215] "Topology Admit Handler" podUID="c79f8b20-33f8-438e-ad92-787d282c67f5" podNamespace="calico-system" podName="calico-node-sj6b7"
Aug 5 22:07:18.370664 systemd[1]: Created slice kubepods-besteffort-podc79f8b20_33f8_438e_ad92_787d282c67f5.slice - libcontainer container kubepods-besteffort-podc79f8b20_33f8_438e_ad92_787d282c67f5.slice.
Aug 5 22:07:18.458824 kubelet[2521]: I0805 22:07:18.458303 2521 topology_manager.go:215] "Topology Admit Handler" podUID="d42a17fc-6962-44ab-95c8-1eda8d16487a" podNamespace="calico-system" podName="csi-node-driver-slqdf"
Aug 5 22:07:18.460036 kubelet[2521]: E0805 22:07:18.459995 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-slqdf" podUID="d42a17fc-6962-44ab-95c8-1eda8d16487a"
Aug 5 22:07:18.513465 kubelet[2521]: I0805 22:07:18.513427 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c79f8b20-33f8-438e-ad92-787d282c67f5-node-certs\") pod \"calico-node-sj6b7\" (UID: \"c79f8b20-33f8-438e-ad92-787d282c67f5\") " pod="calico-system/calico-node-sj6b7"
Aug 5 22:07:18.513465 kubelet[2521]: I0805 22:07:18.513468 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c79f8b20-33f8-438e-ad92-787d282c67f5-policysync\") pod \"calico-node-sj6b7\" (UID: \"c79f8b20-33f8-438e-ad92-787d282c67f5\") " pod="calico-system/calico-node-sj6b7"
Aug 5 22:07:18.513701 kubelet[2521]: I0805 22:07:18.513484 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c79f8b20-33f8-438e-ad92-787d282c67f5-cni-log-dir\") pod \"calico-node-sj6b7\" (UID: \"c79f8b20-33f8-438e-ad92-787d282c67f5\") " pod="calico-system/calico-node-sj6b7"
Aug 5 22:07:18.513701 kubelet[2521]: I0805 22:07:18.513504 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c79f8b20-33f8-438e-ad92-787d282c67f5-lib-modules\") pod \"calico-node-sj6b7\" (UID: \"c79f8b20-33f8-438e-ad92-787d282c67f5\") " pod="calico-system/calico-node-sj6b7"
Aug 5 22:07:18.513701 kubelet[2521]: I0805 22:07:18.513524 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c79f8b20-33f8-438e-ad92-787d282c67f5-flexvol-driver-host\") pod \"calico-node-sj6b7\" (UID: \"c79f8b20-33f8-438e-ad92-787d282c67f5\") " pod="calico-system/calico-node-sj6b7"
Aug 5 22:07:18.513701 kubelet[2521]: I0805 22:07:18.513543 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c79f8b20-33f8-438e-ad92-787d282c67f5-var-run-calico\") pod \"calico-node-sj6b7\" (UID: \"c79f8b20-33f8-438e-ad92-787d282c67f5\") " pod="calico-system/calico-node-sj6b7"
Aug 5 22:07:18.513701 kubelet[2521]: I0805 22:07:18.513557 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stgph\" (UniqueName: \"kubernetes.io/projected/c79f8b20-33f8-438e-ad92-787d282c67f5-kube-api-access-stgph\") pod \"calico-node-sj6b7\" (UID: \"c79f8b20-33f8-438e-ad92-787d282c67f5\") " pod="calico-system/calico-node-sj6b7"
Aug 5 22:07:18.513825 kubelet[2521]: I0805 22:07:18.513574 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c79f8b20-33f8-438e-ad92-787d282c67f5-tigera-ca-bundle\") pod \"calico-node-sj6b7\" (UID: \"c79f8b20-33f8-438e-ad92-787d282c67f5\") " pod="calico-system/calico-node-sj6b7"
Aug 5 22:07:18.513825 kubelet[2521]: I0805 22:07:18.513592 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c79f8b20-33f8-438e-ad92-787d282c67f5-var-lib-calico\") pod \"calico-node-sj6b7\" (UID: \"c79f8b20-33f8-438e-ad92-787d282c67f5\") " pod="calico-system/calico-node-sj6b7"
Aug 5 22:07:18.513825 kubelet[2521]: I0805 22:07:18.513609 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0a59f4a-2bcb-461a-b8a5-5b73b5b0c18b-tigera-ca-bundle\") pod \"calico-typha-66d56f5885-qvw7h\" (UID: \"d0a59f4a-2bcb-461a-b8a5-5b73b5b0c18b\") " pod="calico-system/calico-typha-66d56f5885-qvw7h"
Aug 5 22:07:18.513825 kubelet[2521]: I0805 22:07:18.513635 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slcf2\" (UniqueName: \"kubernetes.io/projected/d0a59f4a-2bcb-461a-b8a5-5b73b5b0c18b-kube-api-access-slcf2\") pod \"calico-typha-66d56f5885-qvw7h\" (UID: \"d0a59f4a-2bcb-461a-b8a5-5b73b5b0c18b\") " pod="calico-system/calico-typha-66d56f5885-qvw7h"
Aug 5 22:07:18.513825 kubelet[2521]: I0805 22:07:18.513652 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c79f8b20-33f8-438e-ad92-787d282c67f5-cni-bin-dir\") pod \"calico-node-sj6b7\" (UID: \"c79f8b20-33f8-438e-ad92-787d282c67f5\") " pod="calico-system/calico-node-sj6b7"
Aug 5 22:07:18.513930 kubelet[2521]: I0805 22:07:18.513673 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c79f8b20-33f8-438e-ad92-787d282c67f5-xtables-lock\") pod \"calico-node-sj6b7\" (UID: \"c79f8b20-33f8-438e-ad92-787d282c67f5\") " pod="calico-system/calico-node-sj6b7"
Aug 5 22:07:18.513930 kubelet[2521]: I0805 22:07:18.513690 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d0a59f4a-2bcb-461a-b8a5-5b73b5b0c18b-typha-certs\") pod \"calico-typha-66d56f5885-qvw7h\" (UID: \"d0a59f4a-2bcb-461a-b8a5-5b73b5b0c18b\") " pod="calico-system/calico-typha-66d56f5885-qvw7h"
Aug 5 22:07:18.513930 kubelet[2521]: I0805 22:07:18.513765 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c79f8b20-33f8-438e-ad92-787d282c67f5-cni-net-dir\") pod \"calico-node-sj6b7\" (UID: \"c79f8b20-33f8-438e-ad92-787d282c67f5\") " pod="calico-system/calico-node-sj6b7"
Aug 5 22:07:18.615678 kubelet[2521]: I0805 22:07:18.614770 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d42a17fc-6962-44ab-95c8-1eda8d16487a-kubelet-dir\") pod \"csi-node-driver-slqdf\" (UID: \"d42a17fc-6962-44ab-95c8-1eda8d16487a\") " pod="calico-system/csi-node-driver-slqdf"
Aug 5 22:07:18.615678 kubelet[2521]: I0805 22:07:18.614845 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d42a17fc-6962-44ab-95c8-1eda8d16487a-varrun\") pod \"csi-node-driver-slqdf\" (UID: \"d42a17fc-6962-44ab-95c8-1eda8d16487a\") " pod="calico-system/csi-node-driver-slqdf"
Aug 5 22:07:18.615678 kubelet[2521]: I0805 22:07:18.614882 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d42a17fc-6962-44ab-95c8-1eda8d16487a-registration-dir\") pod \"csi-node-driver-slqdf\" (UID: \"d42a17fc-6962-44ab-95c8-1eda8d16487a\") " pod="calico-system/csi-node-driver-slqdf"
Aug 5 22:07:18.615678 kubelet[2521]: I0805 22:07:18.614932 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d42a17fc-6962-44ab-95c8-1eda8d16487a-socket-dir\") pod \"csi-node-driver-slqdf\" (UID: \"d42a17fc-6962-44ab-95c8-1eda8d16487a\") " pod="calico-system/csi-node-driver-slqdf"
Aug 5 22:07:18.615678 kubelet[2521]: I0805 22:07:18.614950 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbt8p\" (UniqueName: \"kubernetes.io/projected/d42a17fc-6962-44ab-95c8-1eda8d16487a-kube-api-access-rbt8p\") pod \"csi-node-driver-slqdf\" (UID: \"d42a17fc-6962-44ab-95c8-1eda8d16487a\") " pod="calico-system/csi-node-driver-slqdf"
Aug 5 22:07:18.621470 kubelet[2521]: E0805 22:07:18.621436 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.621470 kubelet[2521]: W0805 22:07:18.621468 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.621890 kubelet[2521]: E0805 22:07:18.621490 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.623971 kubelet[2521]: E0805 22:07:18.623948 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.624002 kubelet[2521]: W0805 22:07:18.623979 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.624002 kubelet[2521]: E0805 22:07:18.623996 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.634711 kubelet[2521]: E0805 22:07:18.629702 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.634711 kubelet[2521]: W0805 22:07:18.629720 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.634711 kubelet[2521]: E0805 22:07:18.629742 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.634711 kubelet[2521]: E0805 22:07:18.629926 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.634711 kubelet[2521]: W0805 22:07:18.629934 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.634711 kubelet[2521]: E0805 22:07:18.629950 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.634711 kubelet[2521]: E0805 22:07:18.630137 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.634711 kubelet[2521]: W0805 22:07:18.630146 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.634711 kubelet[2521]: E0805 22:07:18.630154 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.635530 kubelet[2521]: E0805 22:07:18.635506 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.635530 kubelet[2521]: W0805 22:07:18.635522 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.635530 kubelet[2521]: E0805 22:07:18.635535 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.649984 kubelet[2521]: E0805 22:07:18.649885 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:07:18.651289 containerd[1445]: time="2024-08-05T22:07:18.650882809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66d56f5885-qvw7h,Uid:d0a59f4a-2bcb-461a-b8a5-5b73b5b0c18b,Namespace:calico-system,Attempt:0,}"
Aug 5 22:07:18.675087 kubelet[2521]: E0805 22:07:18.675044 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:07:18.676077 containerd[1445]: time="2024-08-05T22:07:18.675751924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sj6b7,Uid:c79f8b20-33f8-438e-ad92-787d282c67f5,Namespace:calico-system,Attempt:0,}"
Aug 5 22:07:18.681354 containerd[1445]: time="2024-08-05T22:07:18.681156250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:07:18.681354 containerd[1445]: time="2024-08-05T22:07:18.681215808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:07:18.681354 containerd[1445]: time="2024-08-05T22:07:18.681229408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:07:18.681354 containerd[1445]: time="2024-08-05T22:07:18.681244528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:07:18.704846 systemd[1]: Started cri-containerd-f4dd557d9ec1ed2a6e8cf8e4f703ba68e09434e612e6e466ed63cf45d1050f1e.scope - libcontainer container f4dd557d9ec1ed2a6e8cf8e4f703ba68e09434e612e6e466ed63cf45d1050f1e.
Aug 5 22:07:18.716639 kubelet[2521]: E0805 22:07:18.716150 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.716639 kubelet[2521]: W0805 22:07:18.716175 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.716639 kubelet[2521]: E0805 22:07:18.716196 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.716639 kubelet[2521]: E0805 22:07:18.716444 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.716639 kubelet[2521]: W0805 22:07:18.716456 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.716639 kubelet[2521]: E0805 22:07:18.716674 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.716976 kubelet[2521]: E0805 22:07:18.716924 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.716976 kubelet[2521]: W0805 22:07:18.716941 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.716976 kubelet[2521]: E0805 22:07:18.716958 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.717660 kubelet[2521]: E0805 22:07:18.717172 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.717660 kubelet[2521]: W0805 22:07:18.717185 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.717660 kubelet[2521]: E0805 22:07:18.717193 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.717660 kubelet[2521]: E0805 22:07:18.717380 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.717660 kubelet[2521]: W0805 22:07:18.717393 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.717660 kubelet[2521]: E0805 22:07:18.717401 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.718116 kubelet[2521]: E0805 22:07:18.717838 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.718116 kubelet[2521]: W0805 22:07:18.717849 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.718116 kubelet[2521]: E0805 22:07:18.717860 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.718383 kubelet[2521]: E0805 22:07:18.718358 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.718544 kubelet[2521]: W0805 22:07:18.718472 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.718544 kubelet[2521]: E0805 22:07:18.718489 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.719025 kubelet[2521]: E0805 22:07:18.718895 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.719025 kubelet[2521]: W0805 22:07:18.718907 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.719025 kubelet[2521]: E0805 22:07:18.718917 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.719496 kubelet[2521]: E0805 22:07:18.719432 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.719496 kubelet[2521]: W0805 22:07:18.719445 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.719496 kubelet[2521]: E0805 22:07:18.719455 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.719842 kubelet[2521]: E0805 22:07:18.719785 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.719842 kubelet[2521]: W0805 22:07:18.719797 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.719842 kubelet[2521]: E0805 22:07:18.719807 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.720198 kubelet[2521]: E0805 22:07:18.720106 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.720198 kubelet[2521]: W0805 22:07:18.720118 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.720198 kubelet[2521]: E0805 22:07:18.720174 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.720469 kubelet[2521]: E0805 22:07:18.720373 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.720469 kubelet[2521]: W0805 22:07:18.720383 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.720943 kubelet[2521]: E0805 22:07:18.720713 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.721354 kubelet[2521]: E0805 22:07:18.721336 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.721433 kubelet[2521]: W0805 22:07:18.721422 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.721592 kubelet[2521]: E0805 22:07:18.721567 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.721836 containerd[1445]: time="2024-08-05T22:07:18.721270603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:07:18.722045 kubelet[2521]: E0805 22:07:18.721994 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.722045 kubelet[2521]: W0805 22:07:18.722009 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.722045 kubelet[2521]: E0805 22:07:18.722041 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.722603 containerd[1445]: time="2024-08-05T22:07:18.722297061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:07:18.722603 containerd[1445]: time="2024-08-05T22:07:18.722326980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:07:18.722603 containerd[1445]: time="2024-08-05T22:07:18.722341580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:07:18.722789 kubelet[2521]: E0805 22:07:18.722505 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.722789 kubelet[2521]: W0805 22:07:18.722517 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.722789 kubelet[2521]: E0805 22:07:18.722549 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.723004 kubelet[2521]: E0805 22:07:18.722915 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.723004 kubelet[2521]: W0805 22:07:18.722928 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.723004 kubelet[2521]: E0805 22:07:18.722959 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.723156 kubelet[2521]: E0805 22:07:18.723145 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.723213 kubelet[2521]: W0805 22:07:18.723203 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.723316 kubelet[2521]: E0805 22:07:18.723288 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.723525 kubelet[2521]: E0805 22:07:18.723510 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.723600 kubelet[2521]: W0805 22:07:18.723587 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.723948 kubelet[2521]: E0805 22:07:18.723882 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.723948 kubelet[2521]: W0805 22:07:18.723894 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.724016 kubelet[2521]: E0805 22:07:18.724001 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.724040 kubelet[2521]: E0805 22:07:18.724022 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.724238 kubelet[2521]: E0805 22:07:18.724164 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.724238 kubelet[2521]: W0805 22:07:18.724176 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.724238 kubelet[2521]: E0805 22:07:18.724187 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.725963 kubelet[2521]: E0805 22:07:18.725934 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.725963 kubelet[2521]: W0805 22:07:18.725952 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.726050 kubelet[2521]: E0805 22:07:18.725972 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.726699 kubelet[2521]: E0805 22:07:18.726673 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.726699 kubelet[2521]: W0805 22:07:18.726691 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.726870 kubelet[2521]: E0805 22:07:18.726825 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.727169 kubelet[2521]: E0805 22:07:18.727122 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.727202 kubelet[2521]: W0805 22:07:18.727172 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.727343 kubelet[2521]: E0805 22:07:18.727326 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.727485 kubelet[2521]: E0805 22:07:18.727471 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.727485 kubelet[2521]: W0805 22:07:18.727482 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.727548 kubelet[2521]: E0805 22:07:18.727494 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.728727 kubelet[2521]: E0805 22:07:18.728676 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.728727 kubelet[2521]: W0805 22:07:18.728690 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.728727 kubelet[2521]: E0805 22:07:18.728702 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.741569 kubelet[2521]: E0805 22:07:18.741542 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:07:18.741569 kubelet[2521]: W0805 22:07:18.741566 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:07:18.741697 kubelet[2521]: E0805 22:07:18.741585 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:07:18.742841 systemd[1]: Started cri-containerd-17295f582343479cb4e7ea2009998a255b148ecbf6acc71f183b565a79b21167.scope - libcontainer container 17295f582343479cb4e7ea2009998a255b148ecbf6acc71f183b565a79b21167.
Aug 5 22:07:18.767801 containerd[1445]: time="2024-08-05T22:07:18.767743061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66d56f5885-qvw7h,Uid:d0a59f4a-2bcb-461a-b8a5-5b73b5b0c18b,Namespace:calico-system,Attempt:0,} returns sandbox id \"f4dd557d9ec1ed2a6e8cf8e4f703ba68e09434e612e6e466ed63cf45d1050f1e\""
Aug 5 22:07:18.770666 kubelet[2521]: E0805 22:07:18.770491 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:07:18.774206 containerd[1445]: time="2024-08-05T22:07:18.774170046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\""
Aug 5 22:07:18.775306 containerd[1445]: time="2024-08-05T22:07:18.775275382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sj6b7,Uid:c79f8b20-33f8-438e-ad92-787d282c67f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"17295f582343479cb4e7ea2009998a255b148ecbf6acc71f183b565a79b21167\""
Aug 5 22:07:18.777440 kubelet[2521]: E0805 22:07:18.777376 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:07:20.084323 kubelet[2521]: E0805 22:07:20.084279 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-slqdf" podUID="d42a17fc-6962-44ab-95c8-1eda8d16487a"
Aug 5 22:07:20.329868 containerd[1445]: time="2024-08-05T22:07:20.329821920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:20.330590 containerd[1445]: time="2024-08-05T22:07:20.330409229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513"
Aug 5 22:07:20.331283 containerd[1445]: time="2024-08-05T22:07:20.331238254Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:20.333858 containerd[1445]: time="2024-08-05T22:07:20.333824606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:20.334893 containerd[1445]: time="2024-08-05T22:07:20.334799228Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 1.560586983s"
Aug 5
22:07:20.334893 containerd[1445]: time="2024-08-05T22:07:20.334829467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Aug 5 22:07:20.336506 containerd[1445]: time="2024-08-05T22:07:20.336480116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Aug 5 22:07:20.342455 containerd[1445]: time="2024-08-05T22:07:20.342416526Z" level=info msg="CreateContainer within sandbox \"f4dd557d9ec1ed2a6e8cf8e4f703ba68e09434e612e6e466ed63cf45d1050f1e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 5 22:07:20.354076 containerd[1445]: time="2024-08-05T22:07:20.354032511Z" level=info msg="CreateContainer within sandbox \"f4dd557d9ec1ed2a6e8cf8e4f703ba68e09434e612e6e466ed63cf45d1050f1e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"37380abcc7f9a0671962f3f19d6df292a8f2eecc3a8a189269d49fc2eeb117f8\"" Aug 5 22:07:20.354488 containerd[1445]: time="2024-08-05T22:07:20.354456263Z" level=info msg="StartContainer for \"37380abcc7f9a0671962f3f19d6df292a8f2eecc3a8a189269d49fc2eeb117f8\"" Aug 5 22:07:20.380280 systemd[1]: Started cri-containerd-37380abcc7f9a0671962f3f19d6df292a8f2eecc3a8a189269d49fc2eeb117f8.scope - libcontainer container 37380abcc7f9a0671962f3f19d6df292a8f2eecc3a8a189269d49fc2eeb117f8. 
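The repeated `driver-call.go:262` failures above all share one cause: the FlexVolume driver binary at `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds` does not exist yet, so the driver call produces empty output, and empty input is not valid JSON. A minimal sketch of that unmarshal step (the `driverStatus` field set here is a simplified assumption, not the full upstream schema):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus is a minimal stand-in for the result shape kubelet expects
// back from a FlexVolume driver call (field set is an assumption).
type driverStatus struct {
	Status string `json:"status"`
}

// parseDriverOutput mimics the unmarshal step in kubelet's driver-call path:
// a missing driver binary yields empty output, and json.Unmarshal rejects
// empty input with "unexpected end of JSON input".
func parseDriverOutput(out []byte) error {
	var st driverStatus
	return json.Unmarshal(out, &st)
}

func main() {
	fmt.Println(parseDriverOutput([]byte("")))
	fmt.Println(parseDriverOutput([]byte(`{"status":"Success"}`)))
}
```

This is why the log shows both `executable file not found in $PATH` and the JSON error for every probe: the exec failure produces the empty output, and the empty output produces the unmarshal failure.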
Aug 5 22:07:20.410570 containerd[1445]: time="2024-08-05T22:07:20.410390345Z" level=info msg="StartContainer for \"37380abcc7f9a0671962f3f19d6df292a8f2eecc3a8a189269d49fc2eeb117f8\" returns successfully" Aug 5 22:07:21.165955 kubelet[2521]: E0805 22:07:21.163944 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:21.241290 kubelet[2521]: E0805 22:07:21.241256 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.241290 kubelet[2521]: W0805 22:07:21.241280 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.241290 kubelet[2521]: E0805 22:07:21.241298 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:07:21.241823 kubelet[2521]: E0805 22:07:21.241501 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.241823 kubelet[2521]: W0805 22:07:21.241514 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.241823 kubelet[2521]: E0805 22:07:21.241522 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:07:21.241823 kubelet[2521]: E0805 22:07:21.241756 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.241823 kubelet[2521]: W0805 22:07:21.241766 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.241823 kubelet[2521]: E0805 22:07:21.241776 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:07:21.242055 kubelet[2521]: E0805 22:07:21.241999 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.242055 kubelet[2521]: W0805 22:07:21.242009 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.242055 kubelet[2521]: E0805 22:07:21.242018 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:07:21.242055 kubelet[2521]: E0805 22:07:21.242209 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.242055 kubelet[2521]: W0805 22:07:21.242216 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.242055 kubelet[2521]: E0805 22:07:21.242223 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:07:21.242055 kubelet[2521]: E0805 22:07:21.242373 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.242055 kubelet[2521]: W0805 22:07:21.242380 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.242055 kubelet[2521]: E0805 22:07:21.242387 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:07:21.242055 kubelet[2521]: E0805 22:07:21.242527 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.243500 kubelet[2521]: W0805 22:07:21.242534 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.243500 kubelet[2521]: E0805 22:07:21.242541 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:07:21.243500 kubelet[2521]: E0805 22:07:21.242691 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.243500 kubelet[2521]: W0805 22:07:21.242706 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.243500 kubelet[2521]: E0805 22:07:21.242715 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:07:21.243500 kubelet[2521]: E0805 22:07:21.242849 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.243500 kubelet[2521]: W0805 22:07:21.242856 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.243500 kubelet[2521]: E0805 22:07:21.242863 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:07:21.243500 kubelet[2521]: E0805 22:07:21.242979 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.243500 kubelet[2521]: W0805 22:07:21.242985 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.244065 kubelet[2521]: E0805 22:07:21.242992 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:07:21.244065 kubelet[2521]: E0805 22:07:21.243189 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.244065 kubelet[2521]: W0805 22:07:21.243199 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.244065 kubelet[2521]: E0805 22:07:21.243209 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:07:21.244065 kubelet[2521]: E0805 22:07:21.243378 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.244065 kubelet[2521]: W0805 22:07:21.243387 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.244065 kubelet[2521]: E0805 22:07:21.243470 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:07:21.244065 kubelet[2521]: E0805 22:07:21.243726 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.244065 kubelet[2521]: W0805 22:07:21.243737 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.244065 kubelet[2521]: E0805 22:07:21.243746 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:07:21.244310 kubelet[2521]: E0805 22:07:21.244080 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.244310 kubelet[2521]: W0805 22:07:21.244091 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.244310 kubelet[2521]: E0805 22:07:21.244101 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:07:21.244310 kubelet[2521]: E0805 22:07:21.244271 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.244310 kubelet[2521]: W0805 22:07:21.244277 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.244310 kubelet[2521]: E0805 22:07:21.244285 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:07:21.336528 kubelet[2521]: E0805 22:07:21.336507 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.336528 kubelet[2521]: W0805 22:07:21.336525 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.336673 kubelet[2521]: E0805 22:07:21.336540 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:07:21.337037 kubelet[2521]: E0805 22:07:21.336984 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.337037 kubelet[2521]: W0805 22:07:21.337032 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.337132 kubelet[2521]: E0805 22:07:21.337050 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:07:21.338070 kubelet[2521]: E0805 22:07:21.338053 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.338070 kubelet[2521]: W0805 22:07:21.338068 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.338175 kubelet[2521]: E0805 22:07:21.338085 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:07:21.338306 kubelet[2521]: E0805 22:07:21.338293 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.338404 kubelet[2521]: W0805 22:07:21.338306 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.338404 kubelet[2521]: E0805 22:07:21.338362 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:07:21.338971 kubelet[2521]: E0805 22:07:21.338955 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.338971 kubelet[2521]: W0805 22:07:21.338970 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.339084 kubelet[2521]: E0805 22:07:21.339028 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:07:21.339208 kubelet[2521]: E0805 22:07:21.339184 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.339208 kubelet[2521]: W0805 22:07:21.339197 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.339397 kubelet[2521]: E0805 22:07:21.339269 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:07:21.339489 kubelet[2521]: E0805 22:07:21.339475 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.339489 kubelet[2521]: W0805 22:07:21.339488 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.339610 kubelet[2521]: E0805 22:07:21.339581 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:07:21.339954 kubelet[2521]: E0805 22:07:21.339863 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.339954 kubelet[2521]: W0805 22:07:21.339876 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.339954 kubelet[2521]: E0805 22:07:21.339894 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:07:21.340235 kubelet[2521]: E0805 22:07:21.340148 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.340235 kubelet[2521]: W0805 22:07:21.340159 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.340235 kubelet[2521]: E0805 22:07:21.340175 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:07:21.340649 kubelet[2521]: E0805 22:07:21.340488 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.340649 kubelet[2521]: W0805 22:07:21.340500 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.340649 kubelet[2521]: E0805 22:07:21.340524 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:07:21.340801 kubelet[2521]: E0805 22:07:21.340781 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.340801 kubelet[2521]: W0805 22:07:21.340796 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.340860 kubelet[2521]: E0805 22:07:21.340812 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:07:21.341124 kubelet[2521]: E0805 22:07:21.341067 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.341124 kubelet[2521]: W0805 22:07:21.341080 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.341220 kubelet[2521]: E0805 22:07:21.341113 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:07:21.341252 kubelet[2521]: E0805 22:07:21.341220 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.341252 kubelet[2521]: W0805 22:07:21.341227 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.341252 kubelet[2521]: E0805 22:07:21.341238 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:07:21.341489 kubelet[2521]: E0805 22:07:21.341476 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.341489 kubelet[2521]: W0805 22:07:21.341488 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.341546 kubelet[2521]: E0805 22:07:21.341499 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:07:21.341902 kubelet[2521]: E0805 22:07:21.341888 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.341934 kubelet[2521]: W0805 22:07:21.341902 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.341934 kubelet[2521]: E0805 22:07:21.341913 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:07:21.342153 kubelet[2521]: E0805 22:07:21.342139 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.342153 kubelet[2521]: W0805 22:07:21.342151 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.342208 kubelet[2521]: E0805 22:07:21.342165 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:07:21.342676 kubelet[2521]: E0805 22:07:21.342663 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.342676 kubelet[2521]: W0805 22:07:21.342676 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.342847 kubelet[2521]: E0805 22:07:21.342762 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:07:21.342916 kubelet[2521]: E0805 22:07:21.342903 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:07:21.342916 kubelet[2521]: W0805 22:07:21.342914 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:07:21.342966 kubelet[2521]: E0805 22:07:21.342926 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:07:21.616861 containerd[1445]: time="2024-08-05T22:07:21.616388958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:07:21.617987 containerd[1445]: time="2024-08-05T22:07:21.617945611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Aug 5 22:07:21.619026 containerd[1445]: time="2024-08-05T22:07:21.618517681Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:07:21.620787 containerd[1445]: time="2024-08-05T22:07:21.620755162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:07:21.622027 containerd[1445]: time="2024-08-05T22:07:21.621997260Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.285314707s" Aug 5 22:07:21.622139 containerd[1445]: time="2024-08-05T22:07:21.622110498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Aug 5 22:07:21.625786 containerd[1445]: time="2024-08-05T22:07:21.625650157Z" level=info msg="CreateContainer within sandbox \"17295f582343479cb4e7ea2009998a255b148ecbf6acc71f183b565a79b21167\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 5 22:07:21.637332 containerd[1445]: time="2024-08-05T22:07:21.637272434Z" level=info msg="CreateContainer within sandbox \"17295f582343479cb4e7ea2009998a255b148ecbf6acc71f183b565a79b21167\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"08a03ff20fab9ac7f48298d65024b35af22b6c7d004d2e1f2e39ed4b57bd0749\"" Aug 5 22:07:21.637975 containerd[1445]: time="2024-08-05T22:07:21.637853224Z" level=info msg="StartContainer for \"08a03ff20fab9ac7f48298d65024b35af22b6c7d004d2e1f2e39ed4b57bd0749\"" Aug 5 22:07:21.664759 systemd[1]: Started cri-containerd-08a03ff20fab9ac7f48298d65024b35af22b6c7d004d2e1f2e39ed4b57bd0749.scope - libcontainer container 08a03ff20fab9ac7f48298d65024b35af22b6c7d004d2e1f2e39ed4b57bd0749. Aug 5 22:07:21.688535 containerd[1445]: time="2024-08-05T22:07:21.688476263Z" level=info msg="StartContainer for \"08a03ff20fab9ac7f48298d65024b35af22b6c7d004d2e1f2e39ed4b57bd0749\" returns successfully" Aug 5 22:07:21.718392 systemd[1]: cri-containerd-08a03ff20fab9ac7f48298d65024b35af22b6c7d004d2e1f2e39ed4b57bd0749.scope: Deactivated successfully. 
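The `flexvol-driver` container started above is the Calico `pod2daemon-flexvol` init-style container whose job is to install the `uds` driver binary the earlier probes could not find; once it exits, its scope is deactivated, which is the expected lifecycle. For context, a FlexVolume driver answers the `init` call kubelet keeps retrying with a JSON status on stdout. A hedged sketch of that handshake (the exact capability set is an assumption):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// initResult is a minimal sketch of the JSON a FlexVolume driver prints in
// response to "init"; the capability list here is illustrative only.
type initResult struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// initResponse builds the reply kubelet tries (and, while the uds binary is
// still missing, fails) to read in the driver-call errors logged above.
func initResponse() string {
	out, _ := json.Marshal(initResult{
		Status:       "Success",
		Capabilities: map[string]bool{"attach": false},
	})
	return string(out)
}

func main() {
	fmt.Println(initResponse())
}
```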
Aug 5 22:07:21.736319 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08a03ff20fab9ac7f48298d65024b35af22b6c7d004d2e1f2e39ed4b57bd0749-rootfs.mount: Deactivated successfully.
Aug 5 22:07:21.753208 containerd[1445]: time="2024-08-05T22:07:21.752998381Z" level=info msg="shim disconnected" id=08a03ff20fab9ac7f48298d65024b35af22b6c7d004d2e1f2e39ed4b57bd0749 namespace=k8s.io
Aug 5 22:07:21.753208 containerd[1445]: time="2024-08-05T22:07:21.753051140Z" level=warning msg="cleaning up after shim disconnected" id=08a03ff20fab9ac7f48298d65024b35af22b6c7d004d2e1f2e39ed4b57bd0749 namespace=k8s.io
Aug 5 22:07:21.753208 containerd[1445]: time="2024-08-05T22:07:21.753058980Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:07:22.084068 kubelet[2521]: E0805 22:07:22.084035 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-slqdf" podUID="d42a17fc-6962-44ab-95c8-1eda8d16487a"
Aug 5 22:07:22.166708 kubelet[2521]: I0805 22:07:22.166334 2521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 5 22:07:22.166708 kubelet[2521]: E0805 22:07:22.166649 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:07:22.167716 kubelet[2521]: E0805 22:07:22.167403 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:07:22.170479 containerd[1445]: time="2024-08-05T22:07:22.168854489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\""
Aug 5 22:07:22.182243 kubelet[2521]: I0805 22:07:22.181917 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66d56f5885-qvw7h" podStartSLOduration=2.620046718 podStartE2EDuration="4.181903156s" podCreationTimestamp="2024-08-05 22:07:18 +0000 UTC" firstStartedPulling="2024-08-05 22:07:18.773886412 +0000 UTC m=+22.759545476" lastFinishedPulling="2024-08-05 22:07:20.33574285 +0000 UTC m=+24.321401914" observedRunningTime="2024-08-05 22:07:21.188865276 +0000 UTC m=+25.174524300" watchObservedRunningTime="2024-08-05 22:07:22.181903156 +0000 UTC m=+26.167562220"
Aug 5 22:07:23.847065 kubelet[2521]: I0805 22:07:23.846912 2521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 5 22:07:23.848481 kubelet[2521]: E0805 22:07:23.848158 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:07:24.084469 kubelet[2521]: E0805 22:07:24.084414 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-slqdf" podUID="d42a17fc-6962-44ab-95c8-1eda8d16487a"
Aug 5 22:07:24.170529 kubelet[2521]: E0805 22:07:24.170404 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:07:26.085289 kubelet[2521]: E0805 22:07:26.084271 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-slqdf" podUID="d42a17fc-6962-44ab-95c8-1eda8d16487a"
Aug 5 22:07:26.379265 containerd[1445]: time="2024-08-05T22:07:26.378441575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:26.379644 containerd[1445]: time="2024-08-05T22:07:26.379586521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715"
Aug 5 22:07:26.380410 containerd[1445]: time="2024-08-05T22:07:26.380384231Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:26.383040 containerd[1445]: time="2024-08-05T22:07:26.382993678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:26.383861 containerd[1445]: time="2024-08-05T22:07:26.383833227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 4.214941019s"
Aug 5 22:07:26.383921 containerd[1445]: time="2024-08-05T22:07:26.383863867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\""
Aug 5 22:07:26.386830 containerd[1445]: time="2024-08-05T22:07:26.386762110Z" level=info msg="CreateContainer within sandbox \"17295f582343479cb4e7ea2009998a255b148ecbf6acc71f183b565a79b21167\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Aug 5 22:07:26.396712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1959165437.mount: Deactivated successfully.
Aug 5 22:07:26.402582 containerd[1445]: time="2024-08-05T22:07:26.402519392Z" level=info msg="CreateContainer within sandbox \"17295f582343479cb4e7ea2009998a255b148ecbf6acc71f183b565a79b21167\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"35371bda323cbab47a4265dd2b1fac3b73fe88d916d191fd504722b4a867729e\""
Aug 5 22:07:26.403059 containerd[1445]: time="2024-08-05T22:07:26.403024385Z" level=info msg="StartContainer for \"35371bda323cbab47a4265dd2b1fac3b73fe88d916d191fd504722b4a867729e\""
Aug 5 22:07:26.434893 systemd[1]: Started cri-containerd-35371bda323cbab47a4265dd2b1fac3b73fe88d916d191fd504722b4a867729e.scope - libcontainer container 35371bda323cbab47a4265dd2b1fac3b73fe88d916d191fd504722b4a867729e.
Aug 5 22:07:26.473593 containerd[1445]: time="2024-08-05T22:07:26.473540257Z" level=info msg="StartContainer for \"35371bda323cbab47a4265dd2b1fac3b73fe88d916d191fd504722b4a867729e\" returns successfully"
Aug 5 22:07:26.988375 containerd[1445]: time="2024-08-05T22:07:26.988249412Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 5 22:07:26.990029 systemd[1]: cri-containerd-35371bda323cbab47a4265dd2b1fac3b73fe88d916d191fd504722b4a867729e.scope: Deactivated successfully.
Aug 5 22:07:26.997678 kubelet[2521]: I0805 22:07:26.996925 2521 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Aug 5 22:07:27.017198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35371bda323cbab47a4265dd2b1fac3b73fe88d916d191fd504722b4a867729e-rootfs.mount: Deactivated successfully.
Aug 5 22:07:27.022299 kubelet[2521]: I0805 22:07:27.022234 2521 topology_manager.go:215] "Topology Admit Handler" podUID="55e6e3a9-eb75-4395-8ebe-8ebde1260f05" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jmgvh"
Aug 5 22:07:27.023386 kubelet[2521]: I0805 22:07:27.023331 2521 topology_manager.go:215] "Topology Admit Handler" podUID="f58b54dd-c480-4c97-9c37-25028bd2a7be" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6s972"
Aug 5 22:07:27.027907 kubelet[2521]: I0805 22:07:27.027788 2521 topology_manager.go:215] "Topology Admit Handler" podUID="fe57e78e-0c52-4bb4-bdcd-07e0068f41ee" podNamespace="calico-system" podName="calico-kube-controllers-dc555fc5c-8mlpm"
Aug 5 22:07:27.032606 systemd[1]: Created slice kubepods-burstable-pod55e6e3a9_eb75_4395_8ebe_8ebde1260f05.slice - libcontainer container kubepods-burstable-pod55e6e3a9_eb75_4395_8ebe_8ebde1260f05.slice.
Aug 5 22:07:27.037103 systemd[1]: Created slice kubepods-burstable-podf58b54dd_c480_4c97_9c37_25028bd2a7be.slice - libcontainer container kubepods-burstable-podf58b54dd_c480_4c97_9c37_25028bd2a7be.slice.
Aug 5 22:07:27.040779 systemd[1]: Created slice kubepods-besteffort-podfe57e78e_0c52_4bb4_bdcd_07e0068f41ee.slice - libcontainer container kubepods-besteffort-podfe57e78e_0c52_4bb4_bdcd_07e0068f41ee.slice.
Aug 5 22:07:27.060914 containerd[1445]: time="2024-08-05T22:07:27.060856224Z" level=info msg="shim disconnected" id=35371bda323cbab47a4265dd2b1fac3b73fe88d916d191fd504722b4a867729e namespace=k8s.io
Aug 5 22:07:27.060914 containerd[1445]: time="2024-08-05T22:07:27.060912744Z" level=warning msg="cleaning up after shim disconnected" id=35371bda323cbab47a4265dd2b1fac3b73fe88d916d191fd504722b4a867729e namespace=k8s.io
Aug 5 22:07:27.060914 containerd[1445]: time="2024-08-05T22:07:27.060921384Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:07:27.178692 kubelet[2521]: E0805 22:07:27.178641 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:07:27.179048 kubelet[2521]: I0805 22:07:27.178714 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhgzr\" (UniqueName: \"kubernetes.io/projected/f58b54dd-c480-4c97-9c37-25028bd2a7be-kube-api-access-rhgzr\") pod \"coredns-7db6d8ff4d-6s972\" (UID: \"f58b54dd-c480-4c97-9c37-25028bd2a7be\") " pod="kube-system/coredns-7db6d8ff4d-6s972"
Aug 5 22:07:27.179048 kubelet[2521]: I0805 22:07:27.178747 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55e6e3a9-eb75-4395-8ebe-8ebde1260f05-config-volume\") pod \"coredns-7db6d8ff4d-jmgvh\" (UID: \"55e6e3a9-eb75-4395-8ebe-8ebde1260f05\") " pod="kube-system/coredns-7db6d8ff4d-jmgvh"
Aug 5 22:07:27.179048 kubelet[2521]: I0805 22:07:27.178767 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv2z5\" (UniqueName: \"kubernetes.io/projected/fe57e78e-0c52-4bb4-bdcd-07e0068f41ee-kube-api-access-rv2z5\") pod \"calico-kube-controllers-dc555fc5c-8mlpm\" (UID: \"fe57e78e-0c52-4bb4-bdcd-07e0068f41ee\") " pod="calico-system/calico-kube-controllers-dc555fc5c-8mlpm"
Aug 5 22:07:27.179048 kubelet[2521]: I0805 22:07:27.178786 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe57e78e-0c52-4bb4-bdcd-07e0068f41ee-tigera-ca-bundle\") pod \"calico-kube-controllers-dc555fc5c-8mlpm\" (UID: \"fe57e78e-0c52-4bb4-bdcd-07e0068f41ee\") " pod="calico-system/calico-kube-controllers-dc555fc5c-8mlpm"
Aug 5 22:07:27.179048 kubelet[2521]: I0805 22:07:27.178803 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8prjd\" (UniqueName: \"kubernetes.io/projected/55e6e3a9-eb75-4395-8ebe-8ebde1260f05-kube-api-access-8prjd\") pod \"coredns-7db6d8ff4d-jmgvh\" (UID: \"55e6e3a9-eb75-4395-8ebe-8ebde1260f05\") " pod="kube-system/coredns-7db6d8ff4d-jmgvh"
Aug 5 22:07:27.179157 kubelet[2521]: I0805 22:07:27.178818 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f58b54dd-c480-4c97-9c37-25028bd2a7be-config-volume\") pod \"coredns-7db6d8ff4d-6s972\" (UID: \"f58b54dd-c480-4c97-9c37-25028bd2a7be\") " pod="kube-system/coredns-7db6d8ff4d-6s972"
Aug 5 22:07:27.181051 containerd[1445]: time="2024-08-05T22:07:27.180405492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\""
Aug 5 22:07:27.336295 kubelet[2521]: E0805 22:07:27.336252 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:07:27.336902 containerd[1445]: time="2024-08-05T22:07:27.336846004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jmgvh,Uid:55e6e3a9-eb75-4395-8ebe-8ebde1260f05,Namespace:kube-system,Attempt:0,}"
Aug 5 22:07:27.340400 kubelet[2521]: E0805 22:07:27.340353 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:07:27.341032 containerd[1445]: time="2024-08-05T22:07:27.340991235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6s972,Uid:f58b54dd-c480-4c97-9c37-25028bd2a7be,Namespace:kube-system,Attempt:0,}"
Aug 5 22:07:27.346135 containerd[1445]: time="2024-08-05T22:07:27.346074775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dc555fc5c-8mlpm,Uid:fe57e78e-0c52-4bb4-bdcd-07e0068f41ee,Namespace:calico-system,Attempt:0,}"
Aug 5 22:07:27.585998 containerd[1445]: time="2024-08-05T22:07:27.585926102Z" level=error msg="Failed to destroy network for sandbox \"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:07:27.586497 containerd[1445]: time="2024-08-05T22:07:27.586326018Z" level=error msg="encountered an error cleaning up failed sandbox \"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:07:27.586497 containerd[1445]: time="2024-08-05T22:07:27.586397697Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jmgvh,Uid:55e6e3a9-eb75-4395-8ebe-8ebde1260f05,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:07:27.588220 kubelet[2521]: E0805 22:07:27.587808 2521 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:07:27.588220 kubelet[2521]: E0805 22:07:27.587877 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jmgvh"
Aug 5 22:07:27.588220 kubelet[2521]: E0805 22:07:27.587898 2521 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jmgvh"
Aug 5 22:07:27.588392 kubelet[2521]: E0805 22:07:27.587934 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-jmgvh_kube-system(55e6e3a9-eb75-4395-8ebe-8ebde1260f05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-jmgvh_kube-system(55e6e3a9-eb75-4395-8ebe-8ebde1260f05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jmgvh" podUID="55e6e3a9-eb75-4395-8ebe-8ebde1260f05"
Aug 5 22:07:27.589245 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f-shm.mount: Deactivated successfully.
Aug 5 22:07:27.597849 containerd[1445]: time="2024-08-05T22:07:27.597785922Z" level=error msg="Failed to destroy network for sandbox \"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:07:27.597996 containerd[1445]: time="2024-08-05T22:07:27.597857601Z" level=error msg="Failed to destroy network for sandbox \"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:07:27.598214 containerd[1445]: time="2024-08-05T22:07:27.598180998Z" level=error msg="encountered an error cleaning up failed sandbox \"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:07:27.598258 containerd[1445]: time="2024-08-05T22:07:27.598237517Z" level=error msg="encountered an error cleaning up failed sandbox \"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:07:27.598314 containerd[1445]: time="2024-08-05T22:07:27.598287436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6s972,Uid:f58b54dd-c480-4c97-9c37-25028bd2a7be,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:07:27.598382 containerd[1445]: time="2024-08-05T22:07:27.598250597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dc555fc5c-8mlpm,Uid:fe57e78e-0c52-4bb4-bdcd-07e0068f41ee,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:07:27.598577 kubelet[2521]: E0805 22:07:27.598534 2521 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:07:27.598659 kubelet[2521]: E0805 22:07:27.598600 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dc555fc5c-8mlpm"
Aug 5 22:07:27.598659 kubelet[2521]: E0805 22:07:27.598648 2521 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dc555fc5c-8mlpm"
Aug 5 22:07:27.598712 kubelet[2521]: E0805 22:07:27.598680 2521 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:07:27.598746 kubelet[2521]: E0805 22:07:27.598727 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6s972"
Aug 5 22:07:27.598772 kubelet[2521]: E0805 22:07:27.598751 2521 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6s972"
Aug 5 22:07:27.598798 kubelet[2521]: E0805 22:07:27.598779 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-6s972_kube-system(f58b54dd-c480-4c97-9c37-25028bd2a7be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-6s972_kube-system(f58b54dd-c480-4c97-9c37-25028bd2a7be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6s972" podUID="f58b54dd-c480-4c97-9c37-25028bd2a7be"
Aug 5 22:07:27.598839 kubelet[2521]: E0805 22:07:27.598688 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-dc555fc5c-8mlpm_calico-system(fe57e78e-0c52-4bb4-bdcd-07e0068f41ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-dc555fc5c-8mlpm_calico-system(fe57e78e-0c52-4bb4-bdcd-07e0068f41ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dc555fc5c-8mlpm" podUID="fe57e78e-0c52-4bb4-bdcd-07e0068f41ee"
Aug 5 22:07:27.599803 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de-shm.mount: Deactivated successfully.
Aug 5 22:07:27.599902 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec-shm.mount: Deactivated successfully.
Aug 5 22:07:28.090466 systemd[1]: Created slice kubepods-besteffort-podd42a17fc_6962_44ab_95c8_1eda8d16487a.slice - libcontainer container kubepods-besteffort-podd42a17fc_6962_44ab_95c8_1eda8d16487a.slice.
Aug 5 22:07:28.095858 containerd[1445]: time="2024-08-05T22:07:28.095817470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-slqdf,Uid:d42a17fc-6962-44ab-95c8-1eda8d16487a,Namespace:calico-system,Attempt:0,}"
Aug 5 22:07:28.145259 containerd[1445]: time="2024-08-05T22:07:28.145189283Z" level=error msg="Failed to destroy network for sandbox \"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:07:28.145567 containerd[1445]: time="2024-08-05T22:07:28.145531079Z" level=error msg="encountered an error cleaning up failed sandbox \"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:07:28.145625 containerd[1445]: time="2024-08-05T22:07:28.145595238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-slqdf,Uid:d42a17fc-6962-44ab-95c8-1eda8d16487a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:07:28.145888 kubelet[2521]: E0805 22:07:28.145847 2521 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:07:28.145951 kubelet[2521]: E0805 22:07:28.145912 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-slqdf"
Aug 5 22:07:28.145951 kubelet[2521]: E0805 22:07:28.145933 2521 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-slqdf"
Aug 5 22:07:28.145998 kubelet[2521]: E0805 22:07:28.145974 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-slqdf_calico-system(d42a17fc-6962-44ab-95c8-1eda8d16487a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-slqdf_calico-system(d42a17fc-6962-44ab-95c8-1eda8d16487a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-slqdf" podUID="d42a17fc-6962-44ab-95c8-1eda8d16487a"
Aug 5 22:07:28.183680 kubelet[2521]: I0805 22:07:28.183469 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec"
Aug 5 22:07:28.184949 containerd[1445]: time="2024-08-05T22:07:28.184125972Z" level=info msg="StopPodSandbox for \"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\""
Aug 5 22:07:28.184949 containerd[1445]: time="2024-08-05T22:07:28.184354289Z" level=info msg="Ensure that sandbox 6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec in task-service has been cleanup successfully"
Aug 5 22:07:28.189068 kubelet[2521]: I0805 22:07:28.188901 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f"
Aug 5 22:07:28.191078 containerd[1445]: time="2024-08-05T22:07:28.191036295Z" level=info msg="StopPodSandbox for \"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\""
Aug 5 22:07:28.191436 containerd[1445]: time="2024-08-05T22:07:28.191287732Z" level=info msg="Ensure that sandbox 9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f in task-service has been cleanup successfully"
Aug 5 22:07:28.193562 kubelet[2521]: I0805 22:07:28.193533 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58"
Aug 5 22:07:28.194519 containerd[1445]: time="2024-08-05T22:07:28.194482577Z" level=info msg="StopPodSandbox for \"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\""
Aug 5 22:07:28.195322 kubelet[2521]: I0805 22:07:28.194963 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de"
Aug 5 22:07:28.195401 containerd[1445]: time="2024-08-05T22:07:28.194967212Z" level=info msg="Ensure that sandbox 4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58 in task-service has been cleanup successfully"
Aug 5 22:07:28.195568 containerd[1445]: time="2024-08-05T22:07:28.195540205Z" level=info msg="StopPodSandbox for \"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\""
Aug 5 22:07:28.196017 containerd[1445]: time="2024-08-05T22:07:28.195972521Z" level=info msg="Ensure that sandbox 5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de in task-service has been cleanup successfully"
Aug 5 22:07:28.238597 containerd[1445]: time="2024-08-05T22:07:28.238526129Z" level=error msg="StopPodSandbox for \"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\" failed" error="failed to destroy network for sandbox \"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:07:28.238837 kubelet[2521]: E0805 22:07:28.238802 2521 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f"
Aug 5 22:07:28.238902 kubelet[2521]: E0805 22:07:28.238860 2521 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f"}
Aug 5 22:07:28.239427 kubelet[2521]: E0805 22:07:28.238920 2521 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"55e6e3a9-eb75-4395-8ebe-8ebde1260f05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 5 22:07:28.239483 kubelet[2521]: E0805 22:07:28.239446 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"55e6e3a9-eb75-4395-8ebe-8ebde1260f05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jmgvh" podUID="55e6e3a9-eb75-4395-8ebe-8ebde1260f05"
Aug 5 22:07:28.239606 containerd[1445]: time="2024-08-05T22:07:28.239567518Z" level=error msg="StopPodSandbox for \"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\" failed" error="failed to destroy network for sandbox \"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:07:28.239786 kubelet[2521]: E0805 22:07:28.239759 2521 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de"
Aug 5 22:07:28.239818 kubelet[2521]: E0805 22:07:28.239794 2521 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de"}
Aug 5 22:07:28.239839 kubelet[2521]: E0805 22:07:28.239824 2521 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe57e78e-0c52-4bb4-bdcd-07e0068f41ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 5 22:07:28.239923 kubelet[2521]: E0805 22:07:28.239842 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe57e78e-0c52-4bb4-bdcd-07e0068f41ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dc555fc5c-8mlpm" podUID="fe57e78e-0c52-4bb4-bdcd-07e0068f41ee"
Aug 5 22:07:28.243122 containerd[1445]: time="2024-08-05T22:07:28.243062719Z" level=error msg="StopPodSandbox for \"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\" failed" error="failed to destroy network for sandbox \"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:07:28.243239 containerd[1445]: time="2024-08-05T22:07:28.243072599Z" level=error msg="StopPodSandbox for \"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\" failed" error="failed to destroy network for sandbox \"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:07:28.243355 kubelet[2521]: E0805 22:07:28.243294 2521 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec"
Aug 5 22:07:28.243402 kubelet[2521]: E0805 22:07:28.243362 2521 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec"}
Aug 5 22:07:28.243402 kubelet[2521]: E0805 22:07:28.243388 2521 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f58b54dd-c480-4c97-9c37-25028bd2a7be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 5
22:07:28.243472 kubelet[2521]: E0805 22:07:28.243407 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f58b54dd-c480-4c97-9c37-25028bd2a7be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6s972" podUID="f58b54dd-c480-4c97-9c37-25028bd2a7be" Aug 5 22:07:28.243472 kubelet[2521]: E0805 22:07:28.243437 2521 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" Aug 5 22:07:28.243472 kubelet[2521]: E0805 22:07:28.243451 2521 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58"} Aug 5 22:07:28.243555 kubelet[2521]: E0805 22:07:28.243467 2521 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d42a17fc-6962-44ab-95c8-1eda8d16487a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:07:28.243555 kubelet[2521]: 
E0805 22:07:28.243485 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d42a17fc-6962-44ab-95c8-1eda8d16487a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-slqdf" podUID="d42a17fc-6962-44ab-95c8-1eda8d16487a" Aug 5 22:07:28.396320 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58-shm.mount: Deactivated successfully. Aug 5 22:07:29.086052 systemd[1]: Started sshd@7-10.0.0.62:22-10.0.0.1:40644.service - OpenSSH per-connection server daemon (10.0.0.1:40644). Aug 5 22:07:29.181236 sshd[3511]: Accepted publickey for core from 10.0.0.1 port 40644 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:07:29.182892 sshd[3511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:07:29.187976 systemd-logind[1423]: New session 8 of user core. Aug 5 22:07:29.192777 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 5 22:07:29.320814 sshd[3511]: pam_unix(sshd:session): session closed for user core Aug 5 22:07:29.324589 systemd[1]: sshd@7-10.0.0.62:22-10.0.0.1:40644.service: Deactivated successfully. Aug 5 22:07:29.330169 systemd[1]: session-8.scope: Deactivated successfully. Aug 5 22:07:29.331374 systemd-logind[1423]: Session 8 logged out. Waiting for processes to exit. Aug 5 22:07:29.333267 systemd-logind[1423]: Removed session 8. Aug 5 22:07:30.707530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2033748869.mount: Deactivated successfully. 
Aug 5 22:07:30.894679 containerd[1445]: time="2024-08-05T22:07:30.894073493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:07:30.895098 containerd[1445]: time="2024-08-05T22:07:30.894707166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Aug 5 22:07:30.895389 containerd[1445]: time="2024-08-05T22:07:30.895355160Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:07:30.897620 containerd[1445]: time="2024-08-05T22:07:30.897574019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:07:30.898133 containerd[1445]: time="2024-08-05T22:07:30.898096173Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 3.717651202s" Aug 5 22:07:30.898164 containerd[1445]: time="2024-08-05T22:07:30.898135293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Aug 5 22:07:30.908638 containerd[1445]: time="2024-08-05T22:07:30.908236995Z" level=info msg="CreateContainer within sandbox \"17295f582343479cb4e7ea2009998a255b148ecbf6acc71f183b565a79b21167\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 5 22:07:30.923107 containerd[1445]: time="2024-08-05T22:07:30.923051891Z" level=info msg="CreateContainer 
within sandbox \"17295f582343479cb4e7ea2009998a255b148ecbf6acc71f183b565a79b21167\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"74c21cd31b14f22d2f55a7b4648525df79310bc38dc597771b40edc2db5acf25\"" Aug 5 22:07:30.923774 containerd[1445]: time="2024-08-05T22:07:30.923747124Z" level=info msg="StartContainer for \"74c21cd31b14f22d2f55a7b4648525df79310bc38dc597771b40edc2db5acf25\"" Aug 5 22:07:30.967846 systemd[1]: Started cri-containerd-74c21cd31b14f22d2f55a7b4648525df79310bc38dc597771b40edc2db5acf25.scope - libcontainer container 74c21cd31b14f22d2f55a7b4648525df79310bc38dc597771b40edc2db5acf25. Aug 5 22:07:31.119682 containerd[1445]: time="2024-08-05T22:07:31.119631130Z" level=info msg="StartContainer for \"74c21cd31b14f22d2f55a7b4648525df79310bc38dc597771b40edc2db5acf25\" returns successfully" Aug 5 22:07:31.162394 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 5 22:07:31.162573 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 5 22:07:31.217486 kubelet[2521]: E0805 22:07:31.217454 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:31.236902 kubelet[2521]: I0805 22:07:31.236780 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sj6b7" podStartSLOduration=1.115842299 podStartE2EDuration="13.236764101s" podCreationTimestamp="2024-08-05 22:07:18 +0000 UTC" firstStartedPulling="2024-08-05 22:07:18.778070923 +0000 UTC m=+22.763729987" lastFinishedPulling="2024-08-05 22:07:30.898992725 +0000 UTC m=+34.884651789" observedRunningTime="2024-08-05 22:07:31.235008917 +0000 UTC m=+35.220667981" watchObservedRunningTime="2024-08-05 22:07:31.236764101 +0000 UTC m=+35.222423165" Aug 5 22:07:32.218891 kubelet[2521]: E0805 22:07:32.218781 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:32.759142 systemd-networkd[1374]: vxlan.calico: Link UP Aug 5 22:07:32.759151 systemd-networkd[1374]: vxlan.calico: Gained carrier Aug 5 22:07:34.335978 systemd[1]: Started sshd@8-10.0.0.62:22-10.0.0.1:59030.service - OpenSSH per-connection server daemon (10.0.0.1:59030). Aug 5 22:07:34.374724 sshd[3845]: Accepted publickey for core from 10.0.0.1 port 59030 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:07:34.376341 sshd[3845]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:07:34.380658 systemd-logind[1423]: New session 9 of user core. Aug 5 22:07:34.387785 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 5 22:07:34.508875 sshd[3845]: pam_unix(sshd:session): session closed for user core Aug 5 22:07:34.512957 systemd[1]: sshd@8-10.0.0.62:22-10.0.0.1:59030.service: Deactivated successfully. 
Aug 5 22:07:34.515023 systemd[1]: session-9.scope: Deactivated successfully. Aug 5 22:07:34.515946 systemd-logind[1423]: Session 9 logged out. Waiting for processes to exit. Aug 5 22:07:34.516849 systemd-logind[1423]: Removed session 9. Aug 5 22:07:34.765796 systemd-networkd[1374]: vxlan.calico: Gained IPv6LL Aug 5 22:07:39.519126 systemd[1]: Started sshd@9-10.0.0.62:22-10.0.0.1:59032.service - OpenSSH per-connection server daemon (10.0.0.1:59032). Aug 5 22:07:39.553182 sshd[3873]: Accepted publickey for core from 10.0.0.1 port 59032 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:07:39.554382 sshd[3873]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:07:39.558699 systemd-logind[1423]: New session 10 of user core. Aug 5 22:07:39.569782 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 5 22:07:39.691266 sshd[3873]: pam_unix(sshd:session): session closed for user core Aug 5 22:07:39.703300 systemd[1]: sshd@9-10.0.0.62:22-10.0.0.1:59032.service: Deactivated successfully. Aug 5 22:07:39.705178 systemd[1]: session-10.scope: Deactivated successfully. Aug 5 22:07:39.706570 systemd-logind[1423]: Session 10 logged out. Waiting for processes to exit. Aug 5 22:07:39.715960 systemd[1]: Started sshd@10-10.0.0.62:22-10.0.0.1:59040.service - OpenSSH per-connection server daemon (10.0.0.1:59040). Aug 5 22:07:39.717678 systemd-logind[1423]: Removed session 10. Aug 5 22:07:39.745535 sshd[3889]: Accepted publickey for core from 10.0.0.1 port 59040 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:07:39.746800 sshd[3889]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:07:39.750814 systemd-logind[1423]: New session 11 of user core. Aug 5 22:07:39.763782 systemd[1]: Started session-11.scope - Session 11 of User core. 
Aug 5 22:07:39.932771 sshd[3889]: pam_unix(sshd:session): session closed for user core Aug 5 22:07:39.941805 systemd[1]: sshd@10-10.0.0.62:22-10.0.0.1:59040.service: Deactivated successfully. Aug 5 22:07:39.944861 systemd[1]: session-11.scope: Deactivated successfully. Aug 5 22:07:39.948823 systemd-logind[1423]: Session 11 logged out. Waiting for processes to exit. Aug 5 22:07:39.958906 systemd[1]: Started sshd@11-10.0.0.62:22-10.0.0.1:59050.service - OpenSSH per-connection server daemon (10.0.0.1:59050). Aug 5 22:07:39.959799 systemd-logind[1423]: Removed session 11. Aug 5 22:07:39.995568 sshd[3904]: Accepted publickey for core from 10.0.0.1 port 59050 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:07:39.996051 sshd[3904]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:07:39.999856 systemd-logind[1423]: New session 12 of user core. Aug 5 22:07:40.018783 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 5 22:07:40.139670 sshd[3904]: pam_unix(sshd:session): session closed for user core Aug 5 22:07:40.143118 systemd[1]: sshd@11-10.0.0.62:22-10.0.0.1:59050.service: Deactivated successfully. Aug 5 22:07:40.144610 systemd[1]: session-12.scope: Deactivated successfully. Aug 5 22:07:40.145401 systemd-logind[1423]: Session 12 logged out. Waiting for processes to exit. Aug 5 22:07:40.146267 systemd-logind[1423]: Removed session 12. 
Aug 5 22:07:41.085136 containerd[1445]: time="2024-08-05T22:07:41.084982906Z" level=info msg="StopPodSandbox for \"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\"" Aug 5 22:07:41.085545 containerd[1445]: time="2024-08-05T22:07:41.085178905Z" level=info msg="StopPodSandbox for \"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\"" Aug 5 22:07:41.365375 containerd[1445]: 2024-08-05 22:07:41.198 [INFO][3952] k8s.go 608: Cleaning up netns ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" Aug 5 22:07:41.365375 containerd[1445]: 2024-08-05 22:07:41.198 [INFO][3952] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" iface="eth0" netns="/var/run/netns/cni-62d1d34b-a139-7365-6e8e-7c7c403a0b9e" Aug 5 22:07:41.365375 containerd[1445]: 2024-08-05 22:07:41.198 [INFO][3952] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" iface="eth0" netns="/var/run/netns/cni-62d1d34b-a139-7365-6e8e-7c7c403a0b9e" Aug 5 22:07:41.365375 containerd[1445]: 2024-08-05 22:07:41.199 [INFO][3952] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" iface="eth0" netns="/var/run/netns/cni-62d1d34b-a139-7365-6e8e-7c7c403a0b9e" Aug 5 22:07:41.365375 containerd[1445]: 2024-08-05 22:07:41.199 [INFO][3952] k8s.go 615: Releasing IP address(es) ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" Aug 5 22:07:41.365375 containerd[1445]: 2024-08-05 22:07:41.199 [INFO][3952] utils.go 188: Calico CNI releasing IP address ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" Aug 5 22:07:41.365375 containerd[1445]: 2024-08-05 22:07:41.349 [INFO][3968] ipam_plugin.go 411: Releasing address using handleID ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" HandleID="k8s-pod-network.9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" Workload="localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0" Aug 5 22:07:41.365375 containerd[1445]: 2024-08-05 22:07:41.350 [INFO][3968] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:07:41.365375 containerd[1445]: 2024-08-05 22:07:41.350 [INFO][3968] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:07:41.365375 containerd[1445]: 2024-08-05 22:07:41.360 [WARNING][3968] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" HandleID="k8s-pod-network.9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" Workload="localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0" Aug 5 22:07:41.365375 containerd[1445]: 2024-08-05 22:07:41.360 [INFO][3968] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" HandleID="k8s-pod-network.9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" Workload="localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0" Aug 5 22:07:41.365375 containerd[1445]: 2024-08-05 22:07:41.361 [INFO][3968] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:07:41.365375 containerd[1445]: 2024-08-05 22:07:41.363 [INFO][3952] k8s.go 621: Teardown processing complete. ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" Aug 5 22:07:41.366645 containerd[1445]: time="2024-08-05T22:07:41.365418964Z" level=info msg="TearDown network for sandbox \"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\" successfully" Aug 5 22:07:41.366645 containerd[1445]: time="2024-08-05T22:07:41.365454804Z" level=info msg="StopPodSandbox for \"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\" returns successfully" Aug 5 22:07:41.366820 kubelet[2521]: E0805 22:07:41.365983 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:41.367883 containerd[1445]: time="2024-08-05T22:07:41.367848272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jmgvh,Uid:55e6e3a9-eb75-4395-8ebe-8ebde1260f05,Namespace:kube-system,Attempt:1,}" Aug 5 22:07:41.369197 systemd[1]: run-netns-cni\x2d62d1d34b\x2da139\x2d7365\x2d6e8e\x2d7c7c403a0b9e.mount: Deactivated successfully. 
Aug 5 22:07:41.382417 containerd[1445]: 2024-08-05 22:07:41.189 [INFO][3951] k8s.go 608: Cleaning up netns ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" Aug 5 22:07:41.382417 containerd[1445]: 2024-08-05 22:07:41.189 [INFO][3951] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" iface="eth0" netns="/var/run/netns/cni-5be86758-6508-87bf-beed-3f48b2d3fb17" Aug 5 22:07:41.382417 containerd[1445]: 2024-08-05 22:07:41.193 [INFO][3951] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" iface="eth0" netns="/var/run/netns/cni-5be86758-6508-87bf-beed-3f48b2d3fb17" Aug 5 22:07:41.382417 containerd[1445]: 2024-08-05 22:07:41.194 [INFO][3951] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" iface="eth0" netns="/var/run/netns/cni-5be86758-6508-87bf-beed-3f48b2d3fb17" Aug 5 22:07:41.382417 containerd[1445]: 2024-08-05 22:07:41.194 [INFO][3951] k8s.go 615: Releasing IP address(es) ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" Aug 5 22:07:41.382417 containerd[1445]: 2024-08-05 22:07:41.194 [INFO][3951] utils.go 188: Calico CNI releasing IP address ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" Aug 5 22:07:41.382417 containerd[1445]: 2024-08-05 22:07:41.349 [INFO][3967] ipam_plugin.go 411: Releasing address using handleID ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" HandleID="k8s-pod-network.6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" Workload="localhost-k8s-coredns--7db6d8ff4d--6s972-eth0" Aug 5 22:07:41.382417 containerd[1445]: 2024-08-05 22:07:41.350 [INFO][3967] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Aug 5 22:07:41.382417 containerd[1445]: 2024-08-05 22:07:41.361 [INFO][3967] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:07:41.382417 containerd[1445]: 2024-08-05 22:07:41.371 [WARNING][3967] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" HandleID="k8s-pod-network.6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" Workload="localhost-k8s-coredns--7db6d8ff4d--6s972-eth0" Aug 5 22:07:41.382417 containerd[1445]: 2024-08-05 22:07:41.371 [INFO][3967] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" HandleID="k8s-pod-network.6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" Workload="localhost-k8s-coredns--7db6d8ff4d--6s972-eth0" Aug 5 22:07:41.382417 containerd[1445]: 2024-08-05 22:07:41.378 [INFO][3967] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:07:41.382417 containerd[1445]: 2024-08-05 22:07:41.380 [INFO][3951] k8s.go 621: Teardown processing complete. 
ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" Aug 5 22:07:41.382844 containerd[1445]: time="2024-08-05T22:07:41.382585362Z" level=info msg="TearDown network for sandbox \"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\" successfully" Aug 5 22:07:41.382844 containerd[1445]: time="2024-08-05T22:07:41.382641202Z" level=info msg="StopPodSandbox for \"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\" returns successfully" Aug 5 22:07:41.383150 kubelet[2521]: E0805 22:07:41.383090 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:41.383704 containerd[1445]: time="2024-08-05T22:07:41.383429238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6s972,Uid:f58b54dd-c480-4c97-9c37-25028bd2a7be,Namespace:kube-system,Attempt:1,}" Aug 5 22:07:41.385916 systemd[1]: run-netns-cni\x2d5be86758\x2d6508\x2d87bf\x2dbeed\x2d3f48b2d3fb17.mount: Deactivated successfully. 
Aug 5 22:07:41.534946 systemd-networkd[1374]: calie2da93bbd69: Link UP Aug 5 22:07:41.535292 systemd-networkd[1374]: calie2da93bbd69: Gained carrier Aug 5 22:07:41.556863 containerd[1445]: 2024-08-05 22:07:41.463 [INFO][3988] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--6s972-eth0 coredns-7db6d8ff4d- kube-system f58b54dd-c480-4c97-9c37-25028bd2a7be 815 0 2024-08-05 22:07:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-6s972 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie2da93bbd69 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6s972" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6s972-" Aug 5 22:07:41.556863 containerd[1445]: 2024-08-05 22:07:41.463 [INFO][3988] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6s972" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6s972-eth0" Aug 5 22:07:41.556863 containerd[1445]: 2024-08-05 22:07:41.492 [INFO][4009] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84" HandleID="k8s-pod-network.b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84" Workload="localhost-k8s-coredns--7db6d8ff4d--6s972-eth0" Aug 5 22:07:41.556863 containerd[1445]: 2024-08-05 22:07:41.502 [INFO][4009] ipam_plugin.go 264: Auto assigning IP ContainerID="b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84" 
HandleID="k8s-pod-network.b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84" Workload="localhost-k8s-coredns--7db6d8ff4d--6s972-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400069b660), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-6s972", "timestamp":"2024-08-05 22:07:41.492353157 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:07:41.556863 containerd[1445]: 2024-08-05 22:07:41.502 [INFO][4009] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:07:41.556863 containerd[1445]: 2024-08-05 22:07:41.502 [INFO][4009] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:07:41.556863 containerd[1445]: 2024-08-05 22:07:41.503 [INFO][4009] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:07:41.556863 containerd[1445]: 2024-08-05 22:07:41.505 [INFO][4009] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84" host="localhost" Aug 5 22:07:41.556863 containerd[1445]: 2024-08-05 22:07:41.510 [INFO][4009] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:07:41.556863 containerd[1445]: 2024-08-05 22:07:41.516 [INFO][4009] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:07:41.556863 containerd[1445]: 2024-08-05 22:07:41.517 [INFO][4009] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:07:41.556863 containerd[1445]: 2024-08-05 22:07:41.519 [INFO][4009] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 22:07:41.556863 containerd[1445]: 2024-08-05 22:07:41.519 [INFO][4009] ipam.go 1180: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84" host="localhost" Aug 5 22:07:41.556863 containerd[1445]: 2024-08-05 22:07:41.521 [INFO][4009] ipam.go 1685: Creating new handle: k8s-pod-network.b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84 Aug 5 22:07:41.556863 containerd[1445]: 2024-08-05 22:07:41.524 [INFO][4009] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84" host="localhost" Aug 5 22:07:41.556863 containerd[1445]: 2024-08-05 22:07:41.528 [INFO][4009] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84" host="localhost" Aug 5 22:07:41.556863 containerd[1445]: 2024-08-05 22:07:41.528 [INFO][4009] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84" host="localhost" Aug 5 22:07:41.556863 containerd[1445]: 2024-08-05 22:07:41.528 [INFO][4009] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:07:41.556863 containerd[1445]: 2024-08-05 22:07:41.529 [INFO][4009] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84" HandleID="k8s-pod-network.b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84" Workload="localhost-k8s-coredns--7db6d8ff4d--6s972-eth0" Aug 5 22:07:41.557839 containerd[1445]: 2024-08-05 22:07:41.532 [INFO][3988] k8s.go 386: Populated endpoint ContainerID="b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6s972" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6s972-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--6s972-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f58b54dd-c480-4c97-9c37-25028bd2a7be", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-6s972", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2da93bbd69", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:07:41.557839 containerd[1445]: 2024-08-05 22:07:41.532 [INFO][3988] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6s972" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6s972-eth0" Aug 5 22:07:41.557839 containerd[1445]: 2024-08-05 22:07:41.533 [INFO][3988] dataplane_linux.go 68: Setting the host side veth name to calie2da93bbd69 ContainerID="b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6s972" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6s972-eth0" Aug 5 22:07:41.557839 containerd[1445]: 2024-08-05 22:07:41.535 [INFO][3988] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6s972" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6s972-eth0" Aug 5 22:07:41.557839 containerd[1445]: 2024-08-05 22:07:41.535 [INFO][3988] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6s972" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6s972-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--6s972-eth0", 
GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f58b54dd-c480-4c97-9c37-25028bd2a7be", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84", Pod:"coredns-7db6d8ff4d-6s972", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2da93bbd69", MAC:"3a:ca:f1:5b:a4:74", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:07:41.557839 containerd[1445]: 2024-08-05 22:07:41.548 [INFO][3988] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6s972" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6s972-eth0" Aug 5 22:07:41.566810 systemd-networkd[1374]: 
calie51f7a02a2e: Link UP Aug 5 22:07:41.567301 systemd-networkd[1374]: calie51f7a02a2e: Gained carrier Aug 5 22:07:41.579977 containerd[1445]: 2024-08-05 22:07:41.458 [INFO][3983] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0 coredns-7db6d8ff4d- kube-system 55e6e3a9-eb75-4395-8ebe-8ebde1260f05 816 0 2024-08-05 22:07:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-jmgvh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie51f7a02a2e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jmgvh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jmgvh-" Aug 5 22:07:41.579977 containerd[1445]: 2024-08-05 22:07:41.459 [INFO][3983] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jmgvh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0" Aug 5 22:07:41.579977 containerd[1445]: 2024-08-05 22:07:41.493 [INFO][4008] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25" HandleID="k8s-pod-network.de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25" Workload="localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0" Aug 5 22:07:41.579977 containerd[1445]: 2024-08-05 22:07:41.508 [INFO][4008] ipam_plugin.go 264: Auto assigning IP ContainerID="de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25" HandleID="k8s-pod-network.de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25" 
Workload="localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000345630), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-jmgvh", "timestamp":"2024-08-05 22:07:41.49372095 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:07:41.579977 containerd[1445]: 2024-08-05 22:07:41.508 [INFO][4008] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:07:41.579977 containerd[1445]: 2024-08-05 22:07:41.528 [INFO][4008] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:07:41.579977 containerd[1445]: 2024-08-05 22:07:41.529 [INFO][4008] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:07:41.579977 containerd[1445]: 2024-08-05 22:07:41.530 [INFO][4008] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25" host="localhost" Aug 5 22:07:41.579977 containerd[1445]: 2024-08-05 22:07:41.536 [INFO][4008] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:07:41.579977 containerd[1445]: 2024-08-05 22:07:41.544 [INFO][4008] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:07:41.579977 containerd[1445]: 2024-08-05 22:07:41.548 [INFO][4008] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:07:41.579977 containerd[1445]: 2024-08-05 22:07:41.551 [INFO][4008] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 22:07:41.579977 containerd[1445]: 2024-08-05 22:07:41.551 [INFO][4008] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25" host="localhost" Aug 5 22:07:41.579977 containerd[1445]: 2024-08-05 22:07:41.553 [INFO][4008] ipam.go 1685: Creating new handle: k8s-pod-network.de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25 Aug 5 22:07:41.579977 containerd[1445]: 2024-08-05 22:07:41.558 [INFO][4008] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25" host="localhost" Aug 5 22:07:41.579977 containerd[1445]: 2024-08-05 22:07:41.563 [INFO][4008] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25" host="localhost" Aug 5 22:07:41.579977 containerd[1445]: 2024-08-05 22:07:41.563 [INFO][4008] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25" host="localhost" Aug 5 22:07:41.579977 containerd[1445]: 2024-08-05 22:07:41.563 [INFO][4008] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:07:41.579977 containerd[1445]: 2024-08-05 22:07:41.563 [INFO][4008] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25" HandleID="k8s-pod-network.de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25" Workload="localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0" Aug 5 22:07:41.580603 containerd[1445]: 2024-08-05 22:07:41.565 [INFO][3983] k8s.go 386: Populated endpoint ContainerID="de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jmgvh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"55e6e3a9-eb75-4395-8ebe-8ebde1260f05", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-jmgvh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie51f7a02a2e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:07:41.580603 containerd[1445]: 2024-08-05 22:07:41.565 [INFO][3983] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jmgvh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0" Aug 5 22:07:41.580603 containerd[1445]: 2024-08-05 22:07:41.565 [INFO][3983] dataplane_linux.go 68: Setting the host side veth name to calie51f7a02a2e ContainerID="de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jmgvh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0" Aug 5 22:07:41.580603 containerd[1445]: 2024-08-05 22:07:41.567 [INFO][3983] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jmgvh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0" Aug 5 22:07:41.580603 containerd[1445]: 2024-08-05 22:07:41.567 [INFO][3983] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jmgvh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0", 
GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"55e6e3a9-eb75-4395-8ebe-8ebde1260f05", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25", Pod:"coredns-7db6d8ff4d-jmgvh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie51f7a02a2e", MAC:"ee:40:53:94:7b:f6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:07:41.580603 containerd[1445]: 2024-08-05 22:07:41.578 [INFO][3983] k8s.go 500: Wrote updated endpoint to datastore ContainerID="de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jmgvh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0" Aug 5 22:07:41.591235 containerd[1445]: 
time="2024-08-05T22:07:41.591158844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:07:41.591235 containerd[1445]: time="2024-08-05T22:07:41.591208204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:07:41.591235 containerd[1445]: time="2024-08-05T22:07:41.591221763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:07:41.591235 containerd[1445]: time="2024-08-05T22:07:41.591231363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:07:41.599286 containerd[1445]: time="2024-08-05T22:07:41.598248610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:07:41.599784 containerd[1445]: time="2024-08-05T22:07:41.599525644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:07:41.599784 containerd[1445]: time="2024-08-05T22:07:41.599643683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:07:41.599784 containerd[1445]: time="2024-08-05T22:07:41.599658443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:07:41.634816 systemd[1]: Started cri-containerd-de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25.scope - libcontainer container de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25. 
Aug 5 22:07:41.638018 systemd[1]: Started cri-containerd-b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84.scope - libcontainer container b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84. Aug 5 22:07:41.645987 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:07:41.649836 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:07:41.672241 containerd[1445]: time="2024-08-05T22:07:41.672123976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jmgvh,Uid:55e6e3a9-eb75-4395-8ebe-8ebde1260f05,Namespace:kube-system,Attempt:1,} returns sandbox id \"de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25\"" Aug 5 22:07:41.673014 kubelet[2521]: E0805 22:07:41.672978 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:41.676251 containerd[1445]: time="2024-08-05T22:07:41.676034198Z" level=info msg="CreateContainer within sandbox \"de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:07:41.678391 containerd[1445]: time="2024-08-05T22:07:41.678350067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6s972,Uid:f58b54dd-c480-4c97-9c37-25028bd2a7be,Namespace:kube-system,Attempt:1,} returns sandbox id \"b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84\"" Aug 5 22:07:41.679908 kubelet[2521]: E0805 22:07:41.679881 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:41.684516 containerd[1445]: time="2024-08-05T22:07:41.684441237Z" level=info msg="CreateContainer within 
sandbox \"b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:07:41.714866 containerd[1445]: time="2024-08-05T22:07:41.714816892Z" level=info msg="CreateContainer within sandbox \"de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"68128dcc885f6dcdf3b09520ae505ac7c7094a5d9b55c0d11656d3efd958a236\"" Aug 5 22:07:41.715397 containerd[1445]: time="2024-08-05T22:07:41.715327930Z" level=info msg="StartContainer for \"68128dcc885f6dcdf3b09520ae505ac7c7094a5d9b55c0d11656d3efd958a236\"" Aug 5 22:07:41.715910 containerd[1445]: time="2024-08-05T22:07:41.715879647Z" level=info msg="CreateContainer within sandbox \"b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5ef5ec29b44373e49302eda2db933a0a5c720cbf8c7aebd6f3506deee4090a44\"" Aug 5 22:07:41.717021 containerd[1445]: time="2024-08-05T22:07:41.716992642Z" level=info msg="StartContainer for \"5ef5ec29b44373e49302eda2db933a0a5c720cbf8c7aebd6f3506deee4090a44\"" Aug 5 22:07:41.742836 systemd[1]: Started cri-containerd-68128dcc885f6dcdf3b09520ae505ac7c7094a5d9b55c0d11656d3efd958a236.scope - libcontainer container 68128dcc885f6dcdf3b09520ae505ac7c7094a5d9b55c0d11656d3efd958a236. Aug 5 22:07:41.745547 systemd[1]: Started cri-containerd-5ef5ec29b44373e49302eda2db933a0a5c720cbf8c7aebd6f3506deee4090a44.scope - libcontainer container 5ef5ec29b44373e49302eda2db933a0a5c720cbf8c7aebd6f3506deee4090a44. 
Aug 5 22:07:41.853656 containerd[1445]: time="2024-08-05T22:07:41.853508188Z" level=info msg="StartContainer for \"68128dcc885f6dcdf3b09520ae505ac7c7094a5d9b55c0d11656d3efd958a236\" returns successfully" Aug 5 22:07:41.853656 containerd[1445]: time="2024-08-05T22:07:41.853590988Z" level=info msg="StartContainer for \"5ef5ec29b44373e49302eda2db933a0a5c720cbf8c7aebd6f3506deee4090a44\" returns successfully" Aug 5 22:07:42.086191 containerd[1445]: time="2024-08-05T22:07:42.085659463Z" level=info msg="StopPodSandbox for \"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\"" Aug 5 22:07:42.086191 containerd[1445]: time="2024-08-05T22:07:42.086068061Z" level=info msg="StopPodSandbox for \"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\"" Aug 5 22:07:42.178730 containerd[1445]: 2024-08-05 22:07:42.135 [INFO][4254] k8s.go 608: Cleaning up netns ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" Aug 5 22:07:42.178730 containerd[1445]: 2024-08-05 22:07:42.135 [INFO][4254] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" iface="eth0" netns="/var/run/netns/cni-3473943d-3274-cef3-7eb3-c202a6887ceb" Aug 5 22:07:42.178730 containerd[1445]: 2024-08-05 22:07:42.136 [INFO][4254] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" iface="eth0" netns="/var/run/netns/cni-3473943d-3274-cef3-7eb3-c202a6887ceb" Aug 5 22:07:42.178730 containerd[1445]: 2024-08-05 22:07:42.137 [INFO][4254] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" iface="eth0" netns="/var/run/netns/cni-3473943d-3274-cef3-7eb3-c202a6887ceb" Aug 5 22:07:42.178730 containerd[1445]: 2024-08-05 22:07:42.137 [INFO][4254] k8s.go 615: Releasing IP address(es) ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" Aug 5 22:07:42.178730 containerd[1445]: 2024-08-05 22:07:42.137 [INFO][4254] utils.go 188: Calico CNI releasing IP address ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" Aug 5 22:07:42.178730 containerd[1445]: 2024-08-05 22:07:42.156 [INFO][4265] ipam_plugin.go 411: Releasing address using handleID ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" HandleID="k8s-pod-network.4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" Workload="localhost-k8s-csi--node--driver--slqdf-eth0" Aug 5 22:07:42.178730 containerd[1445]: 2024-08-05 22:07:42.156 [INFO][4265] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:07:42.178730 containerd[1445]: 2024-08-05 22:07:42.157 [INFO][4265] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:07:42.178730 containerd[1445]: 2024-08-05 22:07:42.166 [WARNING][4265] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" HandleID="k8s-pod-network.4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" Workload="localhost-k8s-csi--node--driver--slqdf-eth0" Aug 5 22:07:42.178730 containerd[1445]: 2024-08-05 22:07:42.166 [INFO][4265] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" HandleID="k8s-pod-network.4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" Workload="localhost-k8s-csi--node--driver--slqdf-eth0" Aug 5 22:07:42.178730 containerd[1445]: 2024-08-05 22:07:42.168 [INFO][4265] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:07:42.178730 containerd[1445]: 2024-08-05 22:07:42.172 [INFO][4254] k8s.go 621: Teardown processing complete. ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" Aug 5 22:07:42.179108 containerd[1445]: time="2024-08-05T22:07:42.178936684Z" level=info msg="TearDown network for sandbox \"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\" successfully" Aug 5 22:07:42.179108 containerd[1445]: time="2024-08-05T22:07:42.178966364Z" level=info msg="StopPodSandbox for \"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\" returns successfully" Aug 5 22:07:42.182349 containerd[1445]: time="2024-08-05T22:07:42.181961551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-slqdf,Uid:d42a17fc-6962-44ab-95c8-1eda8d16487a,Namespace:calico-system,Attempt:1,}" Aug 5 22:07:42.183702 containerd[1445]: 2024-08-05 22:07:42.134 [INFO][4245] k8s.go 608: Cleaning up netns ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" Aug 5 22:07:42.183702 containerd[1445]: 2024-08-05 22:07:42.135 [INFO][4245] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" iface="eth0" netns="/var/run/netns/cni-ffb30566-24e5-f0c8-556f-4ebad56e1bcf" Aug 5 22:07:42.183702 containerd[1445]: 2024-08-05 22:07:42.135 [INFO][4245] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" iface="eth0" netns="/var/run/netns/cni-ffb30566-24e5-f0c8-556f-4ebad56e1bcf" Aug 5 22:07:42.183702 containerd[1445]: 2024-08-05 22:07:42.135 [INFO][4245] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" iface="eth0" netns="/var/run/netns/cni-ffb30566-24e5-f0c8-556f-4ebad56e1bcf" Aug 5 22:07:42.183702 containerd[1445]: 2024-08-05 22:07:42.135 [INFO][4245] k8s.go 615: Releasing IP address(es) ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" Aug 5 22:07:42.183702 containerd[1445]: 2024-08-05 22:07:42.135 [INFO][4245] utils.go 188: Calico CNI releasing IP address ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" Aug 5 22:07:42.183702 containerd[1445]: 2024-08-05 22:07:42.161 [INFO][4264] ipam_plugin.go 411: Releasing address using handleID ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" HandleID="k8s-pod-network.5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" Workload="localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0" Aug 5 22:07:42.183702 containerd[1445]: 2024-08-05 22:07:42.162 [INFO][4264] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:07:42.183702 containerd[1445]: 2024-08-05 22:07:42.168 [INFO][4264] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:07:42.183702 containerd[1445]: 2024-08-05 22:07:42.177 [WARNING][4264] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" HandleID="k8s-pod-network.5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" Workload="localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0" Aug 5 22:07:42.183702 containerd[1445]: 2024-08-05 22:07:42.177 [INFO][4264] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" HandleID="k8s-pod-network.5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" Workload="localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0" Aug 5 22:07:42.183702 containerd[1445]: 2024-08-05 22:07:42.178 [INFO][4264] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:07:42.183702 containerd[1445]: 2024-08-05 22:07:42.180 [INFO][4245] k8s.go 621: Teardown processing complete. ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" Aug 5 22:07:42.184011 containerd[1445]: time="2024-08-05T22:07:42.183808182Z" level=info msg="TearDown network for sandbox \"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\" successfully" Aug 5 22:07:42.184011 containerd[1445]: time="2024-08-05T22:07:42.183826502Z" level=info msg="StopPodSandbox for \"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\" returns successfully" Aug 5 22:07:42.184351 containerd[1445]: time="2024-08-05T22:07:42.184310780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dc555fc5c-8mlpm,Uid:fe57e78e-0c52-4bb4-bdcd-07e0068f41ee,Namespace:calico-system,Attempt:1,}" Aug 5 22:07:42.240635 kubelet[2521]: E0805 22:07:42.240596 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:42.245869 kubelet[2521]: E0805 22:07:42.245836 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:42.256952 kubelet[2521]: I0805 22:07:42.256860 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6s972" podStartSLOduration=32.256841295 podStartE2EDuration="32.256841295s" podCreationTimestamp="2024-08-05 22:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:07:42.25572338 +0000 UTC m=+46.241382444" watchObservedRunningTime="2024-08-05 22:07:42.256841295 +0000 UTC m=+46.242500359" Aug 5 22:07:42.286899 kubelet[2521]: I0805 22:07:42.286834 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jmgvh" podStartSLOduration=32.28681196 podStartE2EDuration="32.28681196s" podCreationTimestamp="2024-08-05 22:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:07:42.285927764 +0000 UTC m=+46.271586828" watchObservedRunningTime="2024-08-05 22:07:42.28681196 +0000 UTC m=+46.272471104" Aug 5 22:07:42.352931 systemd-networkd[1374]: caliaaf9e930af4: Link UP Aug 5 22:07:42.355438 systemd-networkd[1374]: caliaaf9e930af4: Gained carrier Aug 5 22:07:42.374390 containerd[1445]: 2024-08-05 22:07:42.236 [INFO][4279] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--slqdf-eth0 csi-node-driver- calico-system d42a17fc-6962-44ab-95c8-1eda8d16487a 836 0 2024-08-05 22:07:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost 
csi-node-driver-slqdf eth0 default [] [] [kns.calico-system ksa.calico-system.default] caliaaf9e930af4 [] []}} ContainerID="876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd" Namespace="calico-system" Pod="csi-node-driver-slqdf" WorkloadEndpoint="localhost-k8s-csi--node--driver--slqdf-" Aug 5 22:07:42.374390 containerd[1445]: 2024-08-05 22:07:42.236 [INFO][4279] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd" Namespace="calico-system" Pod="csi-node-driver-slqdf" WorkloadEndpoint="localhost-k8s-csi--node--driver--slqdf-eth0" Aug 5 22:07:42.374390 containerd[1445]: 2024-08-05 22:07:42.302 [INFO][4307] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd" HandleID="k8s-pod-network.876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd" Workload="localhost-k8s-csi--node--driver--slqdf-eth0" Aug 5 22:07:42.374390 containerd[1445]: 2024-08-05 22:07:42.313 [INFO][4307] ipam_plugin.go 264: Auto assigning IP ContainerID="876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd" HandleID="k8s-pod-network.876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd" Workload="localhost-k8s-csi--node--driver--slqdf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000301c80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-slqdf", "timestamp":"2024-08-05 22:07:42.302235211 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:07:42.374390 containerd[1445]: 2024-08-05 22:07:42.314 [INFO][4307] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Aug 5 22:07:42.374390 containerd[1445]: 2024-08-05 22:07:42.314 [INFO][4307] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:07:42.374390 containerd[1445]: 2024-08-05 22:07:42.314 [INFO][4307] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 5 22:07:42.374390 containerd[1445]: 2024-08-05 22:07:42.316 [INFO][4307] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd" host="localhost"
Aug 5 22:07:42.374390 containerd[1445]: 2024-08-05 22:07:42.322 [INFO][4307] ipam.go 372: Looking up existing affinities for host host="localhost"
Aug 5 22:07:42.374390 containerd[1445]: 2024-08-05 22:07:42.328 [INFO][4307] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 5 22:07:42.374390 containerd[1445]: 2024-08-05 22:07:42.330 [INFO][4307] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 5 22:07:42.374390 containerd[1445]: 2024-08-05 22:07:42.332 [INFO][4307] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 5 22:07:42.374390 containerd[1445]: 2024-08-05 22:07:42.332 [INFO][4307] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd" host="localhost"
Aug 5 22:07:42.374390 containerd[1445]: 2024-08-05 22:07:42.334 [INFO][4307] ipam.go 1685: Creating new handle: k8s-pod-network.876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd
Aug 5 22:07:42.374390 containerd[1445]: 2024-08-05 22:07:42.337 [INFO][4307] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd" host="localhost"
Aug 5 22:07:42.374390 containerd[1445]: 2024-08-05 22:07:42.342 [INFO][4307] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd" host="localhost"
Aug 5 22:07:42.374390 containerd[1445]: 2024-08-05 22:07:42.342 [INFO][4307] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd" host="localhost"
Aug 5 22:07:42.374390 containerd[1445]: 2024-08-05 22:07:42.342 [INFO][4307] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:07:42.374390 containerd[1445]: 2024-08-05 22:07:42.342 [INFO][4307] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd" HandleID="k8s-pod-network.876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd" Workload="localhost-k8s-csi--node--driver--slqdf-eth0"
Aug 5 22:07:42.375006 containerd[1445]: 2024-08-05 22:07:42.346 [INFO][4279] k8s.go 386: Populated endpoint ContainerID="876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd" Namespace="calico-system" Pod="csi-node-driver-slqdf" WorkloadEndpoint="localhost-k8s-csi--node--driver--slqdf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--slqdf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d42a17fc-6962-44ab-95c8-1eda8d16487a", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 7, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-slqdf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliaaf9e930af4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:07:42.375006 containerd[1445]: 2024-08-05 22:07:42.346 [INFO][4279] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd" Namespace="calico-system" Pod="csi-node-driver-slqdf" WorkloadEndpoint="localhost-k8s-csi--node--driver--slqdf-eth0"
Aug 5 22:07:42.375006 containerd[1445]: 2024-08-05 22:07:42.346 [INFO][4279] dataplane_linux.go 68: Setting the host side veth name to caliaaf9e930af4 ContainerID="876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd" Namespace="calico-system" Pod="csi-node-driver-slqdf" WorkloadEndpoint="localhost-k8s-csi--node--driver--slqdf-eth0"
Aug 5 22:07:42.375006 containerd[1445]: 2024-08-05 22:07:42.356 [INFO][4279] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd" Namespace="calico-system" Pod="csi-node-driver-slqdf" WorkloadEndpoint="localhost-k8s-csi--node--driver--slqdf-eth0"
Aug 5 22:07:42.375006 containerd[1445]: 2024-08-05 22:07:42.357 [INFO][4279] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd" Namespace="calico-system" Pod="csi-node-driver-slqdf" WorkloadEndpoint="localhost-k8s-csi--node--driver--slqdf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--slqdf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d42a17fc-6962-44ab-95c8-1eda8d16487a", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 7, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd", Pod:"csi-node-driver-slqdf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliaaf9e930af4", MAC:"be:28:4e:33:cd:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:07:42.375006 containerd[1445]: 2024-08-05 22:07:42.369 [INFO][4279] k8s.go 500: Wrote updated endpoint to datastore ContainerID="876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd" Namespace="calico-system" Pod="csi-node-driver-slqdf" WorkloadEndpoint="localhost-k8s-csi--node--driver--slqdf-eth0"
Aug 5 22:07:42.376493 systemd[1]: run-netns-cni\x2d3473943d\x2d3274\x2dcef3\x2d7eb3\x2dc202a6887ceb.mount: Deactivated successfully.
Aug 5 22:07:42.376580 systemd[1]: run-netns-cni\x2dffb30566\x2d24e5\x2df0c8\x2d556f\x2d4ebad56e1bcf.mount: Deactivated successfully.
Aug 5 22:07:42.392990 systemd-networkd[1374]: cali757d4220a17: Link UP
Aug 5 22:07:42.393291 systemd-networkd[1374]: cali757d4220a17: Gained carrier
Aug 5 22:07:42.398731 containerd[1445]: time="2024-08-05T22:07:42.397820102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:07:42.398731 containerd[1445]: time="2024-08-05T22:07:42.397977542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:07:42.398731 containerd[1445]: time="2024-08-05T22:07:42.398005462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:07:42.398731 containerd[1445]: time="2024-08-05T22:07:42.398058381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:07:42.407688 containerd[1445]: 2024-08-05 22:07:42.275 [INFO][4292] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0 calico-kube-controllers-dc555fc5c- calico-system fe57e78e-0c52-4bb4-bdcd-07e0068f41ee 835 0 2024-08-05 22:07:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:dc555fc5c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-dc555fc5c-8mlpm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali757d4220a17 [] []}} ContainerID="1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992" Namespace="calico-system" Pod="calico-kube-controllers-dc555fc5c-8mlpm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-"
Aug 5 22:07:42.407688 containerd[1445]: 2024-08-05 22:07:42.275 [INFO][4292] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992" Namespace="calico-system" Pod="calico-kube-controllers-dc555fc5c-8mlpm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0"
Aug 5 22:07:42.407688 containerd[1445]: 2024-08-05 22:07:42.322 [INFO][4315] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992" HandleID="k8s-pod-network.1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992" Workload="localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0"
Aug 5 22:07:42.407688 containerd[1445]: 2024-08-05 22:07:42.340 [INFO][4315] ipam_plugin.go 264: Auto assigning IP ContainerID="1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992" HandleID="k8s-pod-network.1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992" Workload="localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028ca50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-dc555fc5c-8mlpm", "timestamp":"2024-08-05 22:07:42.322886519 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 5 22:07:42.407688 containerd[1445]: 2024-08-05 22:07:42.340 [INFO][4315] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:07:42.407688 containerd[1445]: 2024-08-05 22:07:42.342 [INFO][4315] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:07:42.407688 containerd[1445]: 2024-08-05 22:07:42.342 [INFO][4315] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 5 22:07:42.407688 containerd[1445]: 2024-08-05 22:07:42.345 [INFO][4315] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992" host="localhost"
Aug 5 22:07:42.407688 containerd[1445]: 2024-08-05 22:07:42.352 [INFO][4315] ipam.go 372: Looking up existing affinities for host host="localhost"
Aug 5 22:07:42.407688 containerd[1445]: 2024-08-05 22:07:42.363 [INFO][4315] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 5 22:07:42.407688 containerd[1445]: 2024-08-05 22:07:42.365 [INFO][4315] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 5 22:07:42.407688 containerd[1445]: 2024-08-05 22:07:42.371 [INFO][4315] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 5 22:07:42.407688 containerd[1445]: 2024-08-05 22:07:42.371 [INFO][4315] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992" host="localhost"
Aug 5 22:07:42.407688 containerd[1445]: 2024-08-05 22:07:42.375 [INFO][4315] ipam.go 1685: Creating new handle: k8s-pod-network.1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992
Aug 5 22:07:42.407688 containerd[1445]: 2024-08-05 22:07:42.379 [INFO][4315] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992" host="localhost"
Aug 5 22:07:42.407688 containerd[1445]: 2024-08-05 22:07:42.386 [INFO][4315] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992" host="localhost"
Aug 5 22:07:42.407688 containerd[1445]: 2024-08-05 22:07:42.386 [INFO][4315] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992" host="localhost"
Aug 5 22:07:42.407688 containerd[1445]: 2024-08-05 22:07:42.386 [INFO][4315] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:07:42.407688 containerd[1445]: 2024-08-05 22:07:42.386 [INFO][4315] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992" HandleID="k8s-pod-network.1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992" Workload="localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0"
Aug 5 22:07:42.408287 containerd[1445]: 2024-08-05 22:07:42.389 [INFO][4292] k8s.go 386: Populated endpoint ContainerID="1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992" Namespace="calico-system" Pod="calico-kube-controllers-dc555fc5c-8mlpm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0", GenerateName:"calico-kube-controllers-dc555fc5c-", Namespace:"calico-system", SelfLink:"", UID:"fe57e78e-0c52-4bb4-bdcd-07e0068f41ee", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 7, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dc555fc5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-dc555fc5c-8mlpm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali757d4220a17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:07:42.408287 containerd[1445]: 2024-08-05 22:07:42.389 [INFO][4292] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992" Namespace="calico-system" Pod="calico-kube-controllers-dc555fc5c-8mlpm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0"
Aug 5 22:07:42.408287 containerd[1445]: 2024-08-05 22:07:42.389 [INFO][4292] dataplane_linux.go 68: Setting the host side veth name to cali757d4220a17 ContainerID="1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992" Namespace="calico-system" Pod="calico-kube-controllers-dc555fc5c-8mlpm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0"
Aug 5 22:07:42.408287 containerd[1445]: 2024-08-05 22:07:42.393 [INFO][4292] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992" Namespace="calico-system" Pod="calico-kube-controllers-dc555fc5c-8mlpm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0"
Aug 5 22:07:42.408287 containerd[1445]: 2024-08-05 22:07:42.394 [INFO][4292] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992" Namespace="calico-system" Pod="calico-kube-controllers-dc555fc5c-8mlpm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0", GenerateName:"calico-kube-controllers-dc555fc5c-", Namespace:"calico-system", SelfLink:"", UID:"fe57e78e-0c52-4bb4-bdcd-07e0068f41ee", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 7, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dc555fc5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992", Pod:"calico-kube-controllers-dc555fc5c-8mlpm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali757d4220a17", MAC:"92:bf:91:d0:11:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:07:42.408287 containerd[1445]: 2024-08-05 22:07:42.405 [INFO][4292] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992" Namespace="calico-system" Pod="calico-kube-controllers-dc555fc5c-8mlpm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0"
Aug 5 22:07:42.426922 systemd[1]: Started cri-containerd-876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd.scope - libcontainer container 876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd.
Aug 5 22:07:42.431891 containerd[1445]: time="2024-08-05T22:07:42.431451271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:07:42.431891 containerd[1445]: time="2024-08-05T22:07:42.431854430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:07:42.431891 containerd[1445]: time="2024-08-05T22:07:42.431869710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:07:42.432059 containerd[1445]: time="2024-08-05T22:07:42.432020109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:07:42.441875 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 5 22:07:42.450782 systemd[1]: Started cri-containerd-1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992.scope - libcontainer container 1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992.
Aug 5 22:07:42.456851 containerd[1445]: time="2024-08-05T22:07:42.456801638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-slqdf,Uid:d42a17fc-6962-44ab-95c8-1eda8d16487a,Namespace:calico-system,Attempt:1,} returns sandbox id \"876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd\""
Aug 5 22:07:42.458372 containerd[1445]: time="2024-08-05T22:07:42.458348791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\""
Aug 5 22:07:42.463834 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 5 22:07:42.480057 containerd[1445]: time="2024-08-05T22:07:42.479957414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dc555fc5c-8mlpm,Uid:fe57e78e-0c52-4bb4-bdcd-07e0068f41ee,Namespace:calico-system,Attempt:1,} returns sandbox id \"1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992\""
Aug 5 22:07:42.829748 systemd-networkd[1374]: calie51f7a02a2e: Gained IPv6LL
Aug 5 22:07:42.830109 systemd-networkd[1374]: calie2da93bbd69: Gained IPv6LL
Aug 5 22:07:43.249999 kubelet[2521]: E0805 22:07:43.249849 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:07:43.250391 kubelet[2521]: E0805 22:07:43.250184 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:07:43.469854 systemd-networkd[1374]: caliaaf9e930af4: Gained IPv6LL
Aug 5 22:07:43.709714 containerd[1445]: time="2024-08-05T22:07:43.709661536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:43.710535 containerd[1445]: time="2024-08-05T22:07:43.710164374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579"
Aug 5 22:07:43.710948 containerd[1445]: time="2024-08-05T22:07:43.710899371Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:43.712925 containerd[1445]: time="2024-08-05T22:07:43.712897042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:43.713533 containerd[1445]: time="2024-08-05T22:07:43.713498880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.255117529s"
Aug 5 22:07:43.713655 containerd[1445]: time="2024-08-05T22:07:43.713535759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\""
Aug 5 22:07:43.715025 containerd[1445]: time="2024-08-05T22:07:43.714801394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\""
Aug 5 22:07:43.716275 containerd[1445]: time="2024-08-05T22:07:43.716144829Z" level=info msg="CreateContainer within sandbox \"876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Aug 5 22:07:43.747178 containerd[1445]: time="2024-08-05T22:07:43.747120738Z" level=info msg="CreateContainer within sandbox \"876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9e231b29c2a3b17ff522eb038ba9eea4e1e334ca23121d0351ffbc6b5034fbb6\""
Aug 5 22:07:43.748320 containerd[1445]: time="2024-08-05T22:07:43.747796975Z" level=info msg="StartContainer for \"9e231b29c2a3b17ff522eb038ba9eea4e1e334ca23121d0351ffbc6b5034fbb6\""
Aug 5 22:07:43.783781 systemd[1]: Started cri-containerd-9e231b29c2a3b17ff522eb038ba9eea4e1e334ca23121d0351ffbc6b5034fbb6.scope - libcontainer container 9e231b29c2a3b17ff522eb038ba9eea4e1e334ca23121d0351ffbc6b5034fbb6.
Aug 5 22:07:43.811537 containerd[1445]: time="2024-08-05T22:07:43.810129113Z" level=info msg="StartContainer for \"9e231b29c2a3b17ff522eb038ba9eea4e1e334ca23121d0351ffbc6b5034fbb6\" returns successfully"
Aug 5 22:07:44.253887 kubelet[2521]: E0805 22:07:44.253071 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:07:44.253887 kubelet[2521]: E0805 22:07:44.253804 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:07:44.429772 systemd-networkd[1374]: cali757d4220a17: Gained IPv6LL
Aug 5 22:07:45.152108 systemd[1]: Started sshd@12-10.0.0.62:22-10.0.0.1:53846.service - OpenSSH per-connection server daemon (10.0.0.1:53846).
Aug 5 22:07:45.193065 sshd[4493]: Accepted publickey for core from 10.0.0.1 port 53846 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:07:45.195297 sshd[4493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:07:45.200005 systemd-logind[1423]: New session 13 of user core.
Aug 5 22:07:45.209761 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 5 22:07:45.349838 sshd[4493]: pam_unix(sshd:session): session closed for user core
Aug 5 22:07:45.359541 systemd[1]: sshd@12-10.0.0.62:22-10.0.0.1:53846.service: Deactivated successfully.
Aug 5 22:07:45.361505 systemd[1]: session-13.scope: Deactivated successfully.
Aug 5 22:07:45.365514 systemd-logind[1423]: Session 13 logged out. Waiting for processes to exit.
Aug 5 22:07:45.372904 systemd[1]: Started sshd@13-10.0.0.62:22-10.0.0.1:53856.service - OpenSSH per-connection server daemon (10.0.0.1:53856).
Aug 5 22:07:45.373922 systemd-logind[1423]: Removed session 13.
Aug 5 22:07:45.405356 sshd[4507]: Accepted publickey for core from 10.0.0.1 port 53856 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:07:45.406935 sshd[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:07:45.411658 systemd-logind[1423]: New session 14 of user core.
Aug 5 22:07:45.418758 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 5 22:07:45.524977 containerd[1445]: time="2024-08-05T22:07:45.524905951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:45.525913 containerd[1445]: time="2024-08-05T22:07:45.525869067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057"
Aug 5 22:07:45.526213 containerd[1445]: time="2024-08-05T22:07:45.526187026Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:45.528723 containerd[1445]: time="2024-08-05T22:07:45.528578697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:45.530801 containerd[1445]: time="2024-08-05T22:07:45.529315094Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 1.81448142s"
Aug 5 22:07:45.530801 containerd[1445]: time="2024-08-05T22:07:45.529347054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\""
Aug 5 22:07:45.531193 containerd[1445]: time="2024-08-05T22:07:45.530990648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\""
Aug 5 22:07:45.537653 containerd[1445]: time="2024-08-05T22:07:45.536904626Z" level=info msg="CreateContainer within sandbox \"1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Aug 5 22:07:45.553227 containerd[1445]: time="2024-08-05T22:07:45.553171886Z" level=info msg="CreateContainer within sandbox \"1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5c8588e8cab92256a338b9ac4d3b69ae399e70629babda26cb0f61eb98615d18\""
Aug 5 22:07:45.556694 containerd[1445]: time="2024-08-05T22:07:45.555819756Z" level=info msg="StartContainer for \"5c8588e8cab92256a338b9ac4d3b69ae399e70629babda26cb0f61eb98615d18\""
Aug 5 22:07:45.586815 systemd[1]: Started cri-containerd-5c8588e8cab92256a338b9ac4d3b69ae399e70629babda26cb0f61eb98615d18.scope - libcontainer container 5c8588e8cab92256a338b9ac4d3b69ae399e70629babda26cb0f61eb98615d18.
Aug 5 22:07:45.664296 containerd[1445]: time="2024-08-05T22:07:45.664187236Z" level=info msg="StartContainer for \"5c8588e8cab92256a338b9ac4d3b69ae399e70629babda26cb0f61eb98615d18\" returns successfully"
Aug 5 22:07:45.752214 sshd[4507]: pam_unix(sshd:session): session closed for user core
Aug 5 22:07:45.762282 systemd[1]: sshd@13-10.0.0.62:22-10.0.0.1:53856.service: Deactivated successfully.
Aug 5 22:07:45.764201 systemd[1]: session-14.scope: Deactivated successfully.
Aug 5 22:07:45.765052 systemd-logind[1423]: Session 14 logged out. Waiting for processes to exit.
Aug 5 22:07:45.772140 systemd[1]: Started sshd@14-10.0.0.62:22-10.0.0.1:53860.service - OpenSSH per-connection server daemon (10.0.0.1:53860).
Aug 5 22:07:45.773706 systemd-logind[1423]: Removed session 14.
Aug 5 22:07:45.815028 sshd[4558]: Accepted publickey for core from 10.0.0.1 port 53860 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:07:45.816416 sshd[4558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:07:45.820647 systemd-logind[1423]: New session 15 of user core.
Aug 5 22:07:45.829083 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 5 22:07:46.273492 kubelet[2521]: I0805 22:07:46.272913 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-dc555fc5c-8mlpm" podStartSLOduration=25.223559446 podStartE2EDuration="28.272894608s" podCreationTimestamp="2024-08-05 22:07:18 +0000 UTC" firstStartedPulling="2024-08-05 22:07:42.480948329 +0000 UTC m=+46.466607393" lastFinishedPulling="2024-08-05 22:07:45.530283491 +0000 UTC m=+49.515942555" observedRunningTime="2024-08-05 22:07:46.272823689 +0000 UTC m=+50.258482753" watchObservedRunningTime="2024-08-05 22:07:46.272894608 +0000 UTC m=+50.258553672"
Aug 5 22:07:46.663896 containerd[1445]: time="2024-08-05T22:07:46.663837974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:46.665701 containerd[1445]: time="2024-08-05T22:07:46.664522291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567"
Aug 5 22:07:46.665701 containerd[1445]: time="2024-08-05T22:07:46.665327568Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:46.667824 containerd[1445]: time="2024-08-05T22:07:46.667772840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:46.669632 containerd[1445]: time="2024-08-05T22:07:46.668414318Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.13737015s" Aug 5 22:07:46.669632 containerd[1445]: time="2024-08-05T22:07:46.668451638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Aug 5 22:07:46.671383 containerd[1445]: time="2024-08-05T22:07:46.671293388Z" level=info msg="CreateContainer within sandbox \"876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 5 22:07:46.687120 containerd[1445]: time="2024-08-05T22:07:46.687069053Z" level=info msg="CreateContainer within sandbox \"876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a9b1831eb41554d7a8cdbbe27dd4baf83295a2de00675f79bc8ec729980fbd7a\"" Aug 5 22:07:46.687662 containerd[1445]: time="2024-08-05T22:07:46.687628531Z" level=info msg="StartContainer for \"a9b1831eb41554d7a8cdbbe27dd4baf83295a2de00675f79bc8ec729980fbd7a\"" Aug 5 22:07:46.730987 systemd[1]: Started cri-containerd-a9b1831eb41554d7a8cdbbe27dd4baf83295a2de00675f79bc8ec729980fbd7a.scope - libcontainer container a9b1831eb41554d7a8cdbbe27dd4baf83295a2de00675f79bc8ec729980fbd7a. 
Aug 5 22:07:46.764127 containerd[1445]: time="2024-08-05T22:07:46.764072826Z" level=info msg="StartContainer for \"a9b1831eb41554d7a8cdbbe27dd4baf83295a2de00675f79bc8ec729980fbd7a\" returns successfully"
Aug 5 22:07:47.151681 kubelet[2521]: I0805 22:07:47.151015 2521 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug 5 22:07:47.151681 kubelet[2521]: I0805 22:07:47.151044 2521 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug 5 22:07:47.278120 sshd[4558]: pam_unix(sshd:session): session closed for user core
Aug 5 22:07:47.286057 kubelet[2521]: I0805 22:07:47.280043 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-slqdf" podStartSLOduration=25.068810256 podStartE2EDuration="29.280022619s" podCreationTimestamp="2024-08-05 22:07:18 +0000 UTC" firstStartedPulling="2024-08-05 22:07:42.458042512 +0000 UTC m=+46.443701576" lastFinishedPulling="2024-08-05 22:07:46.669254915 +0000 UTC m=+50.654913939" observedRunningTime="2024-08-05 22:07:47.278584103 +0000 UTC m=+51.264243167" watchObservedRunningTime="2024-08-05 22:07:47.280022619 +0000 UTC m=+51.265681643"
Aug 5 22:07:47.298002 systemd[1]: sshd@14-10.0.0.62:22-10.0.0.1:53860.service: Deactivated successfully.
Aug 5 22:07:47.302433 systemd[1]: session-15.scope: Deactivated successfully.
Aug 5 22:07:47.303980 systemd-logind[1423]: Session 15 logged out. Waiting for processes to exit.
Aug 5 22:07:47.316998 systemd[1]: Started sshd@15-10.0.0.62:22-10.0.0.1:53872.service - OpenSSH per-connection server daemon (10.0.0.1:53872).
Aug 5 22:07:47.319866 systemd-logind[1423]: Removed session 15.
Aug 5 22:07:47.360194 sshd[4644]: Accepted publickey for core from 10.0.0.1 port 53872 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:07:47.361591 sshd[4644]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:07:47.365657 systemd-logind[1423]: New session 16 of user core.
Aug 5 22:07:47.377790 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 5 22:07:47.598547 sshd[4644]: pam_unix(sshd:session): session closed for user core
Aug 5 22:07:47.609522 systemd[1]: sshd@15-10.0.0.62:22-10.0.0.1:53872.service: Deactivated successfully.
Aug 5 22:07:47.611475 systemd[1]: session-16.scope: Deactivated successfully.
Aug 5 22:07:47.614874 systemd-logind[1423]: Session 16 logged out. Waiting for processes to exit.
Aug 5 22:07:47.623883 systemd[1]: Started sshd@16-10.0.0.62:22-10.0.0.1:53880.service - OpenSSH per-connection server daemon (10.0.0.1:53880).
Aug 5 22:07:47.625303 systemd-logind[1423]: Removed session 16.
Aug 5 22:07:47.652900 sshd[4663]: Accepted publickey for core from 10.0.0.1 port 53880 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:07:47.654088 sshd[4663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:07:47.657672 systemd-logind[1423]: New session 17 of user core.
Aug 5 22:07:47.667773 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 5 22:07:47.775543 sshd[4663]: pam_unix(sshd:session): session closed for user core
Aug 5 22:07:47.778231 systemd-logind[1423]: Session 17 logged out. Waiting for processes to exit.
Aug 5 22:07:47.778910 systemd[1]: sshd@16-10.0.0.62:22-10.0.0.1:53880.service: Deactivated successfully.
Aug 5 22:07:47.781371 systemd[1]: session-17.scope: Deactivated successfully.
Aug 5 22:07:47.783605 systemd-logind[1423]: Removed session 17.
Aug 5 22:07:51.842737 kubelet[2521]: E0805 22:07:51.842644 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:07:52.796381 systemd[1]: Started sshd@17-10.0.0.62:22-10.0.0.1:34778.service - OpenSSH per-connection server daemon (10.0.0.1:34778).
Aug 5 22:07:52.830639 sshd[4704]: Accepted publickey for core from 10.0.0.1 port 34778 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:07:52.831520 sshd[4704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:07:52.837679 systemd-logind[1423]: New session 18 of user core.
Aug 5 22:07:52.844782 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 5 22:07:52.955458 sshd[4704]: pam_unix(sshd:session): session closed for user core
Aug 5 22:07:52.958842 systemd-logind[1423]: Session 18 logged out. Waiting for processes to exit.
Aug 5 22:07:52.959008 systemd[1]: sshd@17-10.0.0.62:22-10.0.0.1:34778.service: Deactivated successfully.
Aug 5 22:07:52.960823 systemd[1]: session-18.scope: Deactivated successfully.
Aug 5 22:07:52.963373 systemd-logind[1423]: Removed session 18.
Aug 5 22:07:56.068277 containerd[1445]: time="2024-08-05T22:07:56.068238901Z" level=info msg="StopPodSandbox for \"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\""
Aug 5 22:07:56.142147 containerd[1445]: 2024-08-05 22:07:56.105 [WARNING][4747] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--6s972-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f58b54dd-c480-4c97-9c37-25028bd2a7be", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84", Pod:"coredns-7db6d8ff4d-6s972", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2da93bbd69", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:07:56.142147 containerd[1445]: 2024-08-05 22:07:56.105 [INFO][4747] k8s.go 608: Cleaning up netns 
ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" Aug 5 22:07:56.142147 containerd[1445]: 2024-08-05 22:07:56.105 [INFO][4747] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" iface="eth0" netns="" Aug 5 22:07:56.142147 containerd[1445]: 2024-08-05 22:07:56.105 [INFO][4747] k8s.go 615: Releasing IP address(es) ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" Aug 5 22:07:56.142147 containerd[1445]: 2024-08-05 22:07:56.105 [INFO][4747] utils.go 188: Calico CNI releasing IP address ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" Aug 5 22:07:56.142147 containerd[1445]: 2024-08-05 22:07:56.126 [INFO][4757] ipam_plugin.go 411: Releasing address using handleID ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" HandleID="k8s-pod-network.6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" Workload="localhost-k8s-coredns--7db6d8ff4d--6s972-eth0" Aug 5 22:07:56.142147 containerd[1445]: 2024-08-05 22:07:56.126 [INFO][4757] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:07:56.142147 containerd[1445]: 2024-08-05 22:07:56.126 [INFO][4757] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:07:56.142147 containerd[1445]: 2024-08-05 22:07:56.136 [WARNING][4757] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" HandleID="k8s-pod-network.6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" Workload="localhost-k8s-coredns--7db6d8ff4d--6s972-eth0" Aug 5 22:07:56.142147 containerd[1445]: 2024-08-05 22:07:56.136 [INFO][4757] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" HandleID="k8s-pod-network.6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" Workload="localhost-k8s-coredns--7db6d8ff4d--6s972-eth0" Aug 5 22:07:56.142147 containerd[1445]: 2024-08-05 22:07:56.138 [INFO][4757] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:07:56.142147 containerd[1445]: 2024-08-05 22:07:56.140 [INFO][4747] k8s.go 621: Teardown processing complete. ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" Aug 5 22:07:56.142147 containerd[1445]: time="2024-08-05T22:07:56.142064486Z" level=info msg="TearDown network for sandbox \"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\" successfully" Aug 5 22:07:56.142147 containerd[1445]: time="2024-08-05T22:07:56.142089606Z" level=info msg="StopPodSandbox for \"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\" returns successfully" Aug 5 22:07:56.143091 containerd[1445]: time="2024-08-05T22:07:56.143062525Z" level=info msg="RemovePodSandbox for \"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\"" Aug 5 22:07:56.143153 containerd[1445]: time="2024-08-05T22:07:56.143099605Z" level=info msg="Forcibly stopping sandbox \"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\"" Aug 5 22:07:56.211871 containerd[1445]: 2024-08-05 22:07:56.177 [WARNING][4780] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--6s972-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f58b54dd-c480-4c97-9c37-25028bd2a7be", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b614d8846caebe63becdeb517408677c859c5b1e0107b1801690faac783ccb84", Pod:"coredns-7db6d8ff4d-6s972", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2da93bbd69", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:07:56.211871 containerd[1445]: 2024-08-05 22:07:56.177 [INFO][4780] k8s.go 608: Cleaning up netns 
ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" Aug 5 22:07:56.211871 containerd[1445]: 2024-08-05 22:07:56.177 [INFO][4780] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" iface="eth0" netns="" Aug 5 22:07:56.211871 containerd[1445]: 2024-08-05 22:07:56.177 [INFO][4780] k8s.go 615: Releasing IP address(es) ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" Aug 5 22:07:56.211871 containerd[1445]: 2024-08-05 22:07:56.177 [INFO][4780] utils.go 188: Calico CNI releasing IP address ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" Aug 5 22:07:56.211871 containerd[1445]: 2024-08-05 22:07:56.198 [INFO][4787] ipam_plugin.go 411: Releasing address using handleID ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" HandleID="k8s-pod-network.6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" Workload="localhost-k8s-coredns--7db6d8ff4d--6s972-eth0" Aug 5 22:07:56.211871 containerd[1445]: 2024-08-05 22:07:56.198 [INFO][4787] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:07:56.211871 containerd[1445]: 2024-08-05 22:07:56.198 [INFO][4787] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:07:56.211871 containerd[1445]: 2024-08-05 22:07:56.207 [WARNING][4787] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" HandleID="k8s-pod-network.6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" Workload="localhost-k8s-coredns--7db6d8ff4d--6s972-eth0" Aug 5 22:07:56.211871 containerd[1445]: 2024-08-05 22:07:56.207 [INFO][4787] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" HandleID="k8s-pod-network.6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" Workload="localhost-k8s-coredns--7db6d8ff4d--6s972-eth0" Aug 5 22:07:56.211871 containerd[1445]: 2024-08-05 22:07:56.208 [INFO][4787] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:07:56.211871 containerd[1445]: 2024-08-05 22:07:56.210 [INFO][4780] k8s.go 621: Teardown processing complete. ContainerID="6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec" Aug 5 22:07:56.212377 containerd[1445]: time="2024-08-05T22:07:56.211902959Z" level=info msg="TearDown network for sandbox \"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\" successfully" Aug 5 22:07:56.223350 containerd[1445]: time="2024-08-05T22:07:56.223260659Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:07:56.223440 containerd[1445]: time="2024-08-05T22:07:56.223377139Z" level=info msg="RemovePodSandbox \"6532ded5c55c398a7f85ffc8b0f5d2eecd5e5f064e38d963df70d1ca499d88ec\" returns successfully" Aug 5 22:07:56.223883 containerd[1445]: time="2024-08-05T22:07:56.223859338Z" level=info msg="StopPodSandbox for \"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\"" Aug 5 22:07:56.299060 containerd[1445]: 2024-08-05 22:07:56.266 [WARNING][4810] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0", GenerateName:"calico-kube-controllers-dc555fc5c-", Namespace:"calico-system", SelfLink:"", UID:"fe57e78e-0c52-4bb4-bdcd-07e0068f41ee", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dc555fc5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992", Pod:"calico-kube-controllers-dc555fc5c-8mlpm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali757d4220a17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:07:56.299060 containerd[1445]: 2024-08-05 22:07:56.266 [INFO][4810] k8s.go 608: Cleaning up netns ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" Aug 5 22:07:56.299060 containerd[1445]: 2024-08-05 22:07:56.266 [INFO][4810] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" iface="eth0" netns="" Aug 5 22:07:56.299060 containerd[1445]: 2024-08-05 22:07:56.266 [INFO][4810] k8s.go 615: Releasing IP address(es) ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" Aug 5 22:07:56.299060 containerd[1445]: 2024-08-05 22:07:56.266 [INFO][4810] utils.go 188: Calico CNI releasing IP address ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" Aug 5 22:07:56.299060 containerd[1445]: 2024-08-05 22:07:56.283 [INFO][4817] ipam_plugin.go 411: Releasing address using handleID ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" HandleID="k8s-pod-network.5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" Workload="localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0" Aug 5 22:07:56.299060 containerd[1445]: 2024-08-05 22:07:56.284 [INFO][4817] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:07:56.299060 containerd[1445]: 2024-08-05 22:07:56.284 [INFO][4817] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:07:56.299060 containerd[1445]: 2024-08-05 22:07:56.293 [WARNING][4817] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" HandleID="k8s-pod-network.5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" Workload="localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0" Aug 5 22:07:56.299060 containerd[1445]: 2024-08-05 22:07:56.293 [INFO][4817] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" HandleID="k8s-pod-network.5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" Workload="localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0" Aug 5 22:07:56.299060 containerd[1445]: 2024-08-05 22:07:56.295 [INFO][4817] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:07:56.299060 containerd[1445]: 2024-08-05 22:07:56.297 [INFO][4810] k8s.go 621: Teardown processing complete. ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" Aug 5 22:07:56.299060 containerd[1445]: time="2024-08-05T22:07:56.298976121Z" level=info msg="TearDown network for sandbox \"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\" successfully" Aug 5 22:07:56.299060 containerd[1445]: time="2024-08-05T22:07:56.298999841Z" level=info msg="StopPodSandbox for \"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\" returns successfully" Aug 5 22:07:56.299530 containerd[1445]: time="2024-08-05T22:07:56.299499520Z" level=info msg="RemovePodSandbox for \"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\"" Aug 5 22:07:56.299649 containerd[1445]: time="2024-08-05T22:07:56.299530120Z" level=info msg="Forcibly stopping sandbox \"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\"" Aug 5 22:07:56.368590 containerd[1445]: 2024-08-05 22:07:56.336 [WARNING][4839] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0", GenerateName:"calico-kube-controllers-dc555fc5c-", Namespace:"calico-system", SelfLink:"", UID:"fe57e78e-0c52-4bb4-bdcd-07e0068f41ee", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dc555fc5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f8a3f5f08ccc4eaf29df83fce3efb0aef5dfaf1026d7fb1ff48eefeb8b48992", Pod:"calico-kube-controllers-dc555fc5c-8mlpm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali757d4220a17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:07:56.368590 containerd[1445]: 2024-08-05 22:07:56.336 [INFO][4839] k8s.go 608: Cleaning up netns ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" Aug 5 22:07:56.368590 containerd[1445]: 2024-08-05 22:07:56.336 [INFO][4839] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" iface="eth0" netns="" Aug 5 22:07:56.368590 containerd[1445]: 2024-08-05 22:07:56.336 [INFO][4839] k8s.go 615: Releasing IP address(es) ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" Aug 5 22:07:56.368590 containerd[1445]: 2024-08-05 22:07:56.336 [INFO][4839] utils.go 188: Calico CNI releasing IP address ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" Aug 5 22:07:56.368590 containerd[1445]: 2024-08-05 22:07:56.355 [INFO][4846] ipam_plugin.go 411: Releasing address using handleID ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" HandleID="k8s-pod-network.5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" Workload="localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0" Aug 5 22:07:56.368590 containerd[1445]: 2024-08-05 22:07:56.355 [INFO][4846] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:07:56.368590 containerd[1445]: 2024-08-05 22:07:56.355 [INFO][4846] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:07:56.368590 containerd[1445]: 2024-08-05 22:07:56.363 [WARNING][4846] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" HandleID="k8s-pod-network.5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" Workload="localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0" Aug 5 22:07:56.368590 containerd[1445]: 2024-08-05 22:07:56.363 [INFO][4846] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" HandleID="k8s-pod-network.5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" Workload="localhost-k8s-calico--kube--controllers--dc555fc5c--8mlpm-eth0" Aug 5 22:07:56.368590 containerd[1445]: 2024-08-05 22:07:56.365 [INFO][4846] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:07:56.368590 containerd[1445]: 2024-08-05 22:07:56.367 [INFO][4839] k8s.go 621: Teardown processing complete. ContainerID="5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de" Aug 5 22:07:56.368590 containerd[1445]: time="2024-08-05T22:07:56.368569915Z" level=info msg="TearDown network for sandbox \"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\" successfully" Aug 5 22:07:56.372256 containerd[1445]: time="2024-08-05T22:07:56.372060108Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:07:56.372256 containerd[1445]: time="2024-08-05T22:07:56.372180668Z" level=info msg="RemovePodSandbox \"5cf5d6c6264b4ce571379f7aa9358baf235b56314baf7e4d023d170f7dfac9de\" returns successfully" Aug 5 22:07:56.372707 containerd[1445]: time="2024-08-05T22:07:56.372647427Z" level=info msg="StopPodSandbox for \"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\"" Aug 5 22:07:56.438725 containerd[1445]: 2024-08-05 22:07:56.405 [WARNING][4869] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--slqdf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d42a17fc-6962-44ab-95c8-1eda8d16487a", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd", Pod:"csi-node-driver-slqdf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"caliaaf9e930af4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:07:56.438725 containerd[1445]: 2024-08-05 22:07:56.406 [INFO][4869] k8s.go 608: Cleaning up netns ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" Aug 5 22:07:56.438725 containerd[1445]: 2024-08-05 22:07:56.406 [INFO][4869] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" iface="eth0" netns="" Aug 5 22:07:56.438725 containerd[1445]: 2024-08-05 22:07:56.406 [INFO][4869] k8s.go 615: Releasing IP address(es) ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" Aug 5 22:07:56.438725 containerd[1445]: 2024-08-05 22:07:56.406 [INFO][4869] utils.go 188: Calico CNI releasing IP address ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" Aug 5 22:07:56.438725 containerd[1445]: 2024-08-05 22:07:56.424 [INFO][4877] ipam_plugin.go 411: Releasing address using handleID ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" HandleID="k8s-pod-network.4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" Workload="localhost-k8s-csi--node--driver--slqdf-eth0" Aug 5 22:07:56.438725 containerd[1445]: 2024-08-05 22:07:56.424 [INFO][4877] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:07:56.438725 containerd[1445]: 2024-08-05 22:07:56.424 [INFO][4877] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:07:56.438725 containerd[1445]: 2024-08-05 22:07:56.433 [WARNING][4877] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" HandleID="k8s-pod-network.4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" Workload="localhost-k8s-csi--node--driver--slqdf-eth0" Aug 5 22:07:56.438725 containerd[1445]: 2024-08-05 22:07:56.433 [INFO][4877] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" HandleID="k8s-pod-network.4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" Workload="localhost-k8s-csi--node--driver--slqdf-eth0" Aug 5 22:07:56.438725 containerd[1445]: 2024-08-05 22:07:56.435 [INFO][4877] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:07:56.438725 containerd[1445]: 2024-08-05 22:07:56.437 [INFO][4869] k8s.go 621: Teardown processing complete. ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" Aug 5 22:07:56.439119 containerd[1445]: time="2024-08-05T22:07:56.438769467Z" level=info msg="TearDown network for sandbox \"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\" successfully" Aug 5 22:07:56.439119 containerd[1445]: time="2024-08-05T22:07:56.438792827Z" level=info msg="StopPodSandbox for \"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\" returns successfully" Aug 5 22:07:56.439612 containerd[1445]: time="2024-08-05T22:07:56.439294426Z" level=info msg="RemovePodSandbox for \"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\"" Aug 5 22:07:56.439612 containerd[1445]: time="2024-08-05T22:07:56.439328426Z" level=info msg="Forcibly stopping sandbox \"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\"" Aug 5 22:07:56.528442 containerd[1445]: 2024-08-05 22:07:56.481 [WARNING][4900] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--slqdf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d42a17fc-6962-44ab-95c8-1eda8d16487a", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"876e9c5b424380deeb57f7a7fd1ded7c491f2c5b3a031b9ddb92f2830599f3cd", Pod:"csi-node-driver-slqdf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliaaf9e930af4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:07:56.528442 containerd[1445]: 2024-08-05 22:07:56.481 [INFO][4900] k8s.go 608: Cleaning up netns ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" Aug 5 22:07:56.528442 containerd[1445]: 2024-08-05 22:07:56.481 [INFO][4900] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" iface="eth0" netns="" Aug 5 22:07:56.528442 containerd[1445]: 2024-08-05 22:07:56.481 [INFO][4900] k8s.go 615: Releasing IP address(es) ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" Aug 5 22:07:56.528442 containerd[1445]: 2024-08-05 22:07:56.481 [INFO][4900] utils.go 188: Calico CNI releasing IP address ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" Aug 5 22:07:56.528442 containerd[1445]: 2024-08-05 22:07:56.512 [INFO][4907] ipam_plugin.go 411: Releasing address using handleID ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" HandleID="k8s-pod-network.4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" Workload="localhost-k8s-csi--node--driver--slqdf-eth0" Aug 5 22:07:56.528442 containerd[1445]: 2024-08-05 22:07:56.512 [INFO][4907] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:07:56.528442 containerd[1445]: 2024-08-05 22:07:56.513 [INFO][4907] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:07:56.528442 containerd[1445]: 2024-08-05 22:07:56.522 [WARNING][4907] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" HandleID="k8s-pod-network.4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" Workload="localhost-k8s-csi--node--driver--slqdf-eth0" Aug 5 22:07:56.528442 containerd[1445]: 2024-08-05 22:07:56.522 [INFO][4907] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" HandleID="k8s-pod-network.4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" Workload="localhost-k8s-csi--node--driver--slqdf-eth0" Aug 5 22:07:56.528442 containerd[1445]: 2024-08-05 22:07:56.524 [INFO][4907] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:07:56.528442 containerd[1445]: 2024-08-05 22:07:56.526 [INFO][4900] k8s.go 621: Teardown processing complete. ContainerID="4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58" Aug 5 22:07:56.528442 containerd[1445]: time="2024-08-05T22:07:56.528267944Z" level=info msg="TearDown network for sandbox \"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\" successfully" Aug 5 22:07:56.531520 containerd[1445]: time="2024-08-05T22:07:56.531381859Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:07:56.531520 containerd[1445]: time="2024-08-05T22:07:56.531440779Z" level=info msg="RemovePodSandbox \"4a8508ab2a1dd4a1209d697adbc38ac28515ce4a3aa001169c1be40162de2d58\" returns successfully" Aug 5 22:07:56.531950 containerd[1445]: time="2024-08-05T22:07:56.531924538Z" level=info msg="StopPodSandbox for \"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\"" Aug 5 22:07:56.611607 containerd[1445]: 2024-08-05 22:07:56.569 [WARNING][4930] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"55e6e3a9-eb75-4395-8ebe-8ebde1260f05", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25", Pod:"coredns-7db6d8ff4d-jmgvh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie51f7a02a2e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:07:56.611607 containerd[1445]: 2024-08-05 22:07:56.569 [INFO][4930] k8s.go 608: Cleaning up netns 
ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" Aug 5 22:07:56.611607 containerd[1445]: 2024-08-05 22:07:56.569 [INFO][4930] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" iface="eth0" netns="" Aug 5 22:07:56.611607 containerd[1445]: 2024-08-05 22:07:56.570 [INFO][4930] k8s.go 615: Releasing IP address(es) ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" Aug 5 22:07:56.611607 containerd[1445]: 2024-08-05 22:07:56.570 [INFO][4930] utils.go 188: Calico CNI releasing IP address ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" Aug 5 22:07:56.611607 containerd[1445]: 2024-08-05 22:07:56.594 [INFO][4937] ipam_plugin.go 411: Releasing address using handleID ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" HandleID="k8s-pod-network.9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" Workload="localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0" Aug 5 22:07:56.611607 containerd[1445]: 2024-08-05 22:07:56.595 [INFO][4937] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:07:56.611607 containerd[1445]: 2024-08-05 22:07:56.595 [INFO][4937] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:07:56.611607 containerd[1445]: 2024-08-05 22:07:56.604 [WARNING][4937] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" HandleID="k8s-pod-network.9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" Workload="localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0" Aug 5 22:07:56.611607 containerd[1445]: 2024-08-05 22:07:56.604 [INFO][4937] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" HandleID="k8s-pod-network.9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" Workload="localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0" Aug 5 22:07:56.611607 containerd[1445]: 2024-08-05 22:07:56.606 [INFO][4937] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:07:56.611607 containerd[1445]: 2024-08-05 22:07:56.608 [INFO][4930] k8s.go 621: Teardown processing complete. ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" Aug 5 22:07:56.612345 containerd[1445]: time="2024-08-05T22:07:56.611772233Z" level=info msg="TearDown network for sandbox \"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\" successfully" Aug 5 22:07:56.612345 containerd[1445]: time="2024-08-05T22:07:56.611800593Z" level=info msg="StopPodSandbox for \"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\" returns successfully" Aug 5 22:07:56.612590 containerd[1445]: time="2024-08-05T22:07:56.612456111Z" level=info msg="RemovePodSandbox for \"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\"" Aug 5 22:07:56.612590 containerd[1445]: time="2024-08-05T22:07:56.612487711Z" level=info msg="Forcibly stopping sandbox \"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\"" Aug 5 22:07:56.712384 containerd[1445]: 2024-08-05 22:07:56.663 [WARNING][4960] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"55e6e3a9-eb75-4395-8ebe-8ebde1260f05", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de64f5088ab44110d84d2947d4b59dde2f017e45bb009f0d46c6684ed6b03a25", Pod:"coredns-7db6d8ff4d-jmgvh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie51f7a02a2e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:07:56.712384 containerd[1445]: 2024-08-05 22:07:56.664 [INFO][4960] k8s.go 608: Cleaning up netns 
ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" Aug 5 22:07:56.712384 containerd[1445]: 2024-08-05 22:07:56.664 [INFO][4960] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" iface="eth0" netns="" Aug 5 22:07:56.712384 containerd[1445]: 2024-08-05 22:07:56.664 [INFO][4960] k8s.go 615: Releasing IP address(es) ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" Aug 5 22:07:56.712384 containerd[1445]: 2024-08-05 22:07:56.664 [INFO][4960] utils.go 188: Calico CNI releasing IP address ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" Aug 5 22:07:56.712384 containerd[1445]: 2024-08-05 22:07:56.695 [INFO][4968] ipam_plugin.go 411: Releasing address using handleID ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" HandleID="k8s-pod-network.9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" Workload="localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0" Aug 5 22:07:56.712384 containerd[1445]: 2024-08-05 22:07:56.695 [INFO][4968] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:07:56.712384 containerd[1445]: 2024-08-05 22:07:56.696 [INFO][4968] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:07:56.712384 containerd[1445]: 2024-08-05 22:07:56.705 [WARNING][4968] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" HandleID="k8s-pod-network.9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" Workload="localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0" Aug 5 22:07:56.712384 containerd[1445]: 2024-08-05 22:07:56.706 [INFO][4968] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" HandleID="k8s-pod-network.9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" Workload="localhost-k8s-coredns--7db6d8ff4d--jmgvh-eth0" Aug 5 22:07:56.712384 containerd[1445]: 2024-08-05 22:07:56.707 [INFO][4968] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:07:56.712384 containerd[1445]: 2024-08-05 22:07:56.709 [INFO][4960] k8s.go 621: Teardown processing complete. ContainerID="9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f" Aug 5 22:07:56.712384 containerd[1445]: time="2024-08-05T22:07:56.712361290Z" level=info msg="TearDown network for sandbox \"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\" successfully" Aug 5 22:07:56.715716 containerd[1445]: time="2024-08-05T22:07:56.715668564Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:07:56.715785 containerd[1445]: time="2024-08-05T22:07:56.715740244Z" level=info msg="RemovePodSandbox \"9dfca11cc3a37517a11ec2a9f3bc7e05a04a80ec4632fdcaeb18ef16ea3fc56f\" returns successfully" Aug 5 22:07:57.182060 kubelet[2521]: I0805 22:07:57.182014 2521 topology_manager.go:215] "Topology Admit Handler" podUID="70a763f8-be69-4b2c-9c87-317310306edb" podNamespace="calico-apiserver" podName="calico-apiserver-66bdd6bcd6-ch9p8" Aug 5 22:07:57.195728 systemd[1]: Created slice kubepods-besteffort-pod70a763f8_be69_4b2c_9c87_317310306edb.slice - libcontainer container kubepods-besteffort-pod70a763f8_be69_4b2c_9c87_317310306edb.slice. Aug 5 22:07:57.273783 kubelet[2521]: I0805 22:07:57.273730 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/70a763f8-be69-4b2c-9c87-317310306edb-calico-apiserver-certs\") pod \"calico-apiserver-66bdd6bcd6-ch9p8\" (UID: \"70a763f8-be69-4b2c-9c87-317310306edb\") " pod="calico-apiserver/calico-apiserver-66bdd6bcd6-ch9p8" Aug 5 22:07:57.273783 kubelet[2521]: I0805 22:07:57.273777 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkg2q\" (UniqueName: \"kubernetes.io/projected/70a763f8-be69-4b2c-9c87-317310306edb-kube-api-access-bkg2q\") pod \"calico-apiserver-66bdd6bcd6-ch9p8\" (UID: \"70a763f8-be69-4b2c-9c87-317310306edb\") " pod="calico-apiserver/calico-apiserver-66bdd6bcd6-ch9p8" Aug 5 22:07:57.375833 kubelet[2521]: E0805 22:07:57.375736 2521 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 5 22:07:57.375833 kubelet[2521]: E0805 22:07:57.375828 2521 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70a763f8-be69-4b2c-9c87-317310306edb-calico-apiserver-certs podName:70a763f8-be69-4b2c-9c87-317310306edb nodeName:}" failed. 
No retries permitted until 2024-08-05 22:07:57.875809487 +0000 UTC m=+61.861468551 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/70a763f8-be69-4b2c-9c87-317310306edb-calico-apiserver-certs") pod "calico-apiserver-66bdd6bcd6-ch9p8" (UID: "70a763f8-be69-4b2c-9c87-317310306edb") : secret "calico-apiserver-certs" not found Aug 5 22:07:57.878528 kubelet[2521]: E0805 22:07:57.878481 2521 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 5 22:07:57.878528 kubelet[2521]: E0805 22:07:57.878555 2521 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70a763f8-be69-4b2c-9c87-317310306edb-calico-apiserver-certs podName:70a763f8-be69-4b2c-9c87-317310306edb nodeName:}" failed. No retries permitted until 2024-08-05 22:07:58.87853931 +0000 UTC m=+62.864198334 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/70a763f8-be69-4b2c-9c87-317310306edb-calico-apiserver-certs") pod "calico-apiserver-66bdd6bcd6-ch9p8" (UID: "70a763f8-be69-4b2c-9c87-317310306edb") : secret "calico-apiserver-certs" not found Aug 5 22:07:57.979170 systemd[1]: Started sshd@18-10.0.0.62:22-10.0.0.1:34782.service - OpenSSH per-connection server daemon (10.0.0.1:34782). Aug 5 22:07:58.020659 sshd[5002]: Accepted publickey for core from 10.0.0.1 port 34782 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:07:58.021655 sshd[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:07:58.028553 systemd-logind[1423]: New session 19 of user core. Aug 5 22:07:58.037787 systemd[1]: Started session-19.scope - Session 19 of User core. 
Aug 5 22:07:58.185516 sshd[5002]: pam_unix(sshd:session): session closed for user core Aug 5 22:07:58.188545 systemd[1]: sshd@18-10.0.0.62:22-10.0.0.1:34782.service: Deactivated successfully. Aug 5 22:07:58.192368 systemd[1]: session-19.scope: Deactivated successfully. Aug 5 22:07:58.194500 systemd-logind[1423]: Session 19 logged out. Waiting for processes to exit. Aug 5 22:07:58.195431 systemd-logind[1423]: Removed session 19. Aug 5 22:07:58.999264 containerd[1445]: time="2024-08-05T22:07:58.999151167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66bdd6bcd6-ch9p8,Uid:70a763f8-be69-4b2c-9c87-317310306edb,Namespace:calico-apiserver,Attempt:0,}" Aug 5 22:07:59.130850 systemd-networkd[1374]: califafe872cbba: Link UP Aug 5 22:07:59.131333 systemd-networkd[1374]: califafe872cbba: Gained carrier Aug 5 22:07:59.142388 containerd[1445]: 2024-08-05 22:07:59.050 [INFO][5016] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--66bdd6bcd6--ch9p8-eth0 calico-apiserver-66bdd6bcd6- calico-apiserver 70a763f8-be69-4b2c-9c87-317310306edb 1034 0 2024-08-05 22:07:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66bdd6bcd6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-66bdd6bcd6-ch9p8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califafe872cbba [] []}} ContainerID="48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da" Namespace="calico-apiserver" Pod="calico-apiserver-66bdd6bcd6-ch9p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--66bdd6bcd6--ch9p8-" Aug 5 22:07:59.142388 containerd[1445]: 2024-08-05 22:07:59.050 [INFO][5016] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da" Namespace="calico-apiserver" Pod="calico-apiserver-66bdd6bcd6-ch9p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--66bdd6bcd6--ch9p8-eth0" Aug 5 22:07:59.142388 containerd[1445]: 2024-08-05 22:07:59.080 [INFO][5029] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da" HandleID="k8s-pod-network.48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da" Workload="localhost-k8s-calico--apiserver--66bdd6bcd6--ch9p8-eth0" Aug 5 22:07:59.142388 containerd[1445]: 2024-08-05 22:07:59.091 [INFO][5029] ipam_plugin.go 264: Auto assigning IP ContainerID="48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da" HandleID="k8s-pod-network.48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da" Workload="localhost-k8s-calico--apiserver--66bdd6bcd6--ch9p8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005c4610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-66bdd6bcd6-ch9p8", "timestamp":"2024-08-05 22:07:59.080524981 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:07:59.142388 containerd[1445]: 2024-08-05 22:07:59.091 [INFO][5029] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:07:59.142388 containerd[1445]: 2024-08-05 22:07:59.091 [INFO][5029] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:07:59.142388 containerd[1445]: 2024-08-05 22:07:59.091 [INFO][5029] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:07:59.142388 containerd[1445]: 2024-08-05 22:07:59.094 [INFO][5029] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da" host="localhost" Aug 5 22:07:59.142388 containerd[1445]: 2024-08-05 22:07:59.102 [INFO][5029] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:07:59.142388 containerd[1445]: 2024-08-05 22:07:59.109 [INFO][5029] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:07:59.142388 containerd[1445]: 2024-08-05 22:07:59.112 [INFO][5029] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:07:59.142388 containerd[1445]: 2024-08-05 22:07:59.114 [INFO][5029] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 22:07:59.142388 containerd[1445]: 2024-08-05 22:07:59.114 [INFO][5029] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da" host="localhost" Aug 5 22:07:59.142388 containerd[1445]: 2024-08-05 22:07:59.116 [INFO][5029] ipam.go 1685: Creating new handle: k8s-pod-network.48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da Aug 5 22:07:59.142388 containerd[1445]: 2024-08-05 22:07:59.118 [INFO][5029] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da" host="localhost" Aug 5 22:07:59.142388 containerd[1445]: 2024-08-05 22:07:59.125 [INFO][5029] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da" host="localhost" Aug 5 
22:07:59.142388 containerd[1445]: 2024-08-05 22:07:59.125 [INFO][5029] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da" host="localhost" Aug 5 22:07:59.142388 containerd[1445]: 2024-08-05 22:07:59.125 [INFO][5029] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:07:59.142388 containerd[1445]: 2024-08-05 22:07:59.125 [INFO][5029] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da" HandleID="k8s-pod-network.48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da" Workload="localhost-k8s-calico--apiserver--66bdd6bcd6--ch9p8-eth0" Aug 5 22:07:59.142903 containerd[1445]: 2024-08-05 22:07:59.129 [INFO][5016] k8s.go 386: Populated endpoint ContainerID="48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da" Namespace="calico-apiserver" Pod="calico-apiserver-66bdd6bcd6-ch9p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--66bdd6bcd6--ch9p8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66bdd6bcd6--ch9p8-eth0", GenerateName:"calico-apiserver-66bdd6bcd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"70a763f8-be69-4b2c-9c87-317310306edb", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 7, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66bdd6bcd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-66bdd6bcd6-ch9p8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califafe872cbba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:07:59.142903 containerd[1445]: 2024-08-05 22:07:59.129 [INFO][5016] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da" Namespace="calico-apiserver" Pod="calico-apiserver-66bdd6bcd6-ch9p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--66bdd6bcd6--ch9p8-eth0" Aug 5 22:07:59.142903 containerd[1445]: 2024-08-05 22:07:59.129 [INFO][5016] dataplane_linux.go 68: Setting the host side veth name to califafe872cbba ContainerID="48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da" Namespace="calico-apiserver" Pod="calico-apiserver-66bdd6bcd6-ch9p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--66bdd6bcd6--ch9p8-eth0" Aug 5 22:07:59.142903 containerd[1445]: 2024-08-05 22:07:59.131 [INFO][5016] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da" Namespace="calico-apiserver" Pod="calico-apiserver-66bdd6bcd6-ch9p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--66bdd6bcd6--ch9p8-eth0" Aug 5 22:07:59.142903 containerd[1445]: 2024-08-05 22:07:59.131 [INFO][5016] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da" Namespace="calico-apiserver" Pod="calico-apiserver-66bdd6bcd6-ch9p8" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--66bdd6bcd6--ch9p8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66bdd6bcd6--ch9p8-eth0", GenerateName:"calico-apiserver-66bdd6bcd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"70a763f8-be69-4b2c-9c87-317310306edb", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 7, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66bdd6bcd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da", Pod:"calico-apiserver-66bdd6bcd6-ch9p8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califafe872cbba", MAC:"f2:a9:59:5f:15:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:07:59.142903 containerd[1445]: 2024-08-05 22:07:59.138 [INFO][5016] k8s.go 500: Wrote updated endpoint to datastore ContainerID="48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da" Namespace="calico-apiserver" Pod="calico-apiserver-66bdd6bcd6-ch9p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--66bdd6bcd6--ch9p8-eth0"
Aug 5 22:07:59.163994 containerd[1445]: time="2024-08-05T22:07:59.163881257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:07:59.163994 containerd[1445]: time="2024-08-05T22:07:59.163952458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:07:59.163994 containerd[1445]: time="2024-08-05T22:07:59.163971938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:07:59.163994 containerd[1445]: time="2024-08-05T22:07:59.163987578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:07:59.191800 systemd[1]: Started cri-containerd-48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da.scope - libcontainer container 48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da.
Aug 5 22:07:59.205281 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 5 22:07:59.227911 containerd[1445]: time="2024-08-05T22:07:59.227865134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66bdd6bcd6-ch9p8,Uid:70a763f8-be69-4b2c-9c87-317310306edb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da\""
Aug 5 22:07:59.231188 containerd[1445]: time="2024-08-05T22:07:59.230876312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Aug 5 22:08:00.493763 systemd-networkd[1374]: califafe872cbba: Gained IPv6LL
Aug 5 22:08:00.926243 containerd[1445]: time="2024-08-05T22:08:00.926179094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:08:00.926811 containerd[1445]: time="2024-08-05T22:08:00.926761817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527"
Aug 5 22:08:00.927513 containerd[1445]: time="2024-08-05T22:08:00.927480262Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:08:00.929512 containerd[1445]: time="2024-08-05T22:08:00.929476874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:08:00.930670 containerd[1445]: time="2024-08-05T22:08:00.930527840Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 1.699617688s"
Aug 5 22:08:00.930670 containerd[1445]: time="2024-08-05T22:08:00.930563000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\""
Aug 5 22:08:00.936452 containerd[1445]: time="2024-08-05T22:08:00.936418596Z" level=info msg="CreateContainer within sandbox \"48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Aug 5 22:08:00.959587 containerd[1445]: time="2024-08-05T22:08:00.959530615Z" level=info msg="CreateContainer within sandbox \"48369508a80eaa8f7ede09c9db89636520111dfb288bed1b7538022924ad23da\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2d63288feb33b6e5861c90cc8c90ce6bc3fc07c55808ebba1e41c0a3dab08ca6\""
Aug 5 22:08:00.960420 containerd[1445]: time="2024-08-05T22:08:00.960392500Z" level=info msg="StartContainer for \"2d63288feb33b6e5861c90cc8c90ce6bc3fc07c55808ebba1e41c0a3dab08ca6\""
Aug 5 22:08:00.987793 systemd[1]: Started cri-containerd-2d63288feb33b6e5861c90cc8c90ce6bc3fc07c55808ebba1e41c0a3dab08ca6.scope - libcontainer container 2d63288feb33b6e5861c90cc8c90ce6bc3fc07c55808ebba1e41c0a3dab08ca6.
Aug 5 22:08:01.019201 containerd[1445]: time="2024-08-05T22:08:01.019156931Z" level=info msg="StartContainer for \"2d63288feb33b6e5861c90cc8c90ce6bc3fc07c55808ebba1e41c0a3dab08ca6\" returns successfully"
Aug 5 22:08:01.316760 kubelet[2521]: I0805 22:08:01.316646 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66bdd6bcd6-ch9p8" podStartSLOduration=2.612643239 podStartE2EDuration="4.316500872s" podCreationTimestamp="2024-08-05 22:07:57 +0000 UTC" firstStartedPulling="2024-08-05 22:07:59.229657985 +0000 UTC m=+63.215317009" lastFinishedPulling="2024-08-05 22:08:00.933515618 +0000 UTC m=+64.919174642" observedRunningTime="2024-08-05 22:08:01.314800942 +0000 UTC m=+65.300460006" watchObservedRunningTime="2024-08-05 22:08:01.316500872 +0000 UTC m=+65.302159896"
Aug 5 22:08:03.200144 systemd[1]: Started sshd@19-10.0.0.62:22-10.0.0.1:53438.service - OpenSSH per-connection server daemon (10.0.0.1:53438).
Aug 5 22:08:03.252899 sshd[5146]: Accepted publickey for core from 10.0.0.1 port 53438 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:08:03.259497 sshd[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:08:03.263611 systemd-logind[1423]: New session 20 of user core.
Aug 5 22:08:03.269838 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 5 22:08:03.404628 sshd[5146]: pam_unix(sshd:session): session closed for user core
Aug 5 22:08:03.407051 systemd[1]: session-20.scope: Deactivated successfully.
Aug 5 22:08:03.408491 systemd-logind[1423]: Session 20 logged out. Waiting for processes to exit.
Aug 5 22:08:03.408589 systemd[1]: sshd@19-10.0.0.62:22-10.0.0.1:53438.service: Deactivated successfully.
Aug 5 22:08:03.410962 systemd-logind[1423]: Removed session 20.