Aug 5 21:51:01.942232 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 5 21:51:01.942254 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Aug 5 20:24:20 -00 2024
Aug 5 21:51:01.942264 kernel: KASLR enabled
Aug 5 21:51:01.942270 kernel: efi: EFI v2.7 by EDK II
Aug 5 21:51:01.942275 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb8fd018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Aug 5 21:51:01.942281 kernel: random: crng init done
Aug 5 21:51:01.942298 kernel: ACPI: Early table checksum verification disabled
Aug 5 21:51:01.942304 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Aug 5 21:51:01.942310 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 5 21:51:01.942318 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:51:01.942324 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:51:01.942330 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:51:01.942336 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:51:01.942342 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:51:01.942349 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:51:01.942357 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:51:01.942363 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:51:01.942370 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:51:01.942376 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 5 21:51:01.942382 kernel: NUMA: Failed to initialise from firmware
Aug 5 21:51:01.942389 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 5 21:51:01.942395 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Aug 5 21:51:01.942401 kernel: Zone ranges:
Aug 5 21:51:01.942408 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Aug 5 21:51:01.942414 kernel: DMA32 empty
Aug 5 21:51:01.942422 kernel: Normal empty
Aug 5 21:51:01.942428 kernel: Movable zone start for each node
Aug 5 21:51:01.942435 kernel: Early memory node ranges
Aug 5 21:51:01.942441 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Aug 5 21:51:01.942448 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Aug 5 21:51:01.942454 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Aug 5 21:51:01.942460 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Aug 5 21:51:01.942467 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Aug 5 21:51:01.942473 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Aug 5 21:51:01.942480 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Aug 5 21:51:01.942486 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 5 21:51:01.942493 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 5 21:51:01.942500 kernel: psci: probing for conduit method from ACPI.
Aug 5 21:51:01.942507 kernel: psci: PSCIv1.1 detected in firmware.
Aug 5 21:51:01.942514 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 5 21:51:01.942523 kernel: psci: Trusted OS migration not required
Aug 5 21:51:01.942529 kernel: psci: SMC Calling Convention v1.1
Aug 5 21:51:01.942536 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 5 21:51:01.942544 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Aug 5 21:51:01.942551 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Aug 5 21:51:01.942558 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 5 21:51:01.942564 kernel: Detected PIPT I-cache on CPU0
Aug 5 21:51:01.942571 kernel: CPU features: detected: GIC system register CPU interface
Aug 5 21:51:01.942578 kernel: CPU features: detected: Hardware dirty bit management
Aug 5 21:51:01.942584 kernel: CPU features: detected: Spectre-v4
Aug 5 21:51:01.942591 kernel: CPU features: detected: Spectre-BHB
Aug 5 21:51:01.942598 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 5 21:51:01.942605 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 5 21:51:01.942613 kernel: CPU features: detected: ARM erratum 1418040
Aug 5 21:51:01.942619 kernel: alternatives: applying boot alternatives
Aug 5 21:51:01.942627 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=bb6c4f94d40caa6d83ad7b7b3f8907e11ce677871c150228b9a5377ddab3341e
Aug 5 21:51:01.942634 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 5 21:51:01.942641 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 5 21:51:01.942648 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 5 21:51:01.942654 kernel: Fallback order for Node 0: 0
Aug 5 21:51:01.942661 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Aug 5 21:51:01.942668 kernel: Policy zone: DMA
Aug 5 21:51:01.942674 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 5 21:51:01.942681 kernel: software IO TLB: area num 4.
Aug 5 21:51:01.942689 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Aug 5 21:51:01.942696 kernel: Memory: 2386852K/2572288K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 185436K reserved, 0K cma-reserved)
Aug 5 21:51:01.942703 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 5 21:51:01.942710 kernel: trace event string verifier disabled
Aug 5 21:51:01.942716 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 5 21:51:01.942723 kernel: rcu: RCU event tracing is enabled.
Aug 5 21:51:01.942730 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 5 21:51:01.942737 kernel: Trampoline variant of Tasks RCU enabled.
Aug 5 21:51:01.942764 kernel: Tracing variant of Tasks RCU enabled.
Aug 5 21:51:01.942771 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 5 21:51:01.942778 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 5 21:51:01.942785 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 5 21:51:01.942794 kernel: GICv3: 256 SPIs implemented
Aug 5 21:51:01.942800 kernel: GICv3: 0 Extended SPIs implemented
Aug 5 21:51:01.942807 kernel: Root IRQ handler: gic_handle_irq
Aug 5 21:51:01.942814 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 5 21:51:01.942821 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 5 21:51:01.942827 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 5 21:51:01.942834 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Aug 5 21:51:01.942841 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Aug 5 21:51:01.942848 kernel: GICv3: using LPI property table @0x00000000400f0000
Aug 5 21:51:01.942855 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Aug 5 21:51:01.942862 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 5 21:51:01.942870 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 21:51:01.942877 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 5 21:51:01.942883 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 5 21:51:01.942890 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 5 21:51:01.942897 kernel: arm-pv: using stolen time PV
Aug 5 21:51:01.942904 kernel: Console: colour dummy device 80x25
Aug 5 21:51:01.942911 kernel: ACPI: Core revision 20230628
Aug 5 21:51:01.942918 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 5 21:51:01.942925 kernel: pid_max: default: 32768 minimum: 301
Aug 5 21:51:01.942932 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Aug 5 21:51:01.942940 kernel: SELinux: Initializing.
Aug 5 21:51:01.942947 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 21:51:01.942954 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 21:51:01.942961 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 21:51:01.942968 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 21:51:01.942975 kernel: rcu: Hierarchical SRCU implementation.
Aug 5 21:51:01.942982 kernel: rcu: Max phase no-delay instances is 400.
Aug 5 21:51:01.942989 kernel: Platform MSI: ITS@0x8080000 domain created
Aug 5 21:51:01.942995 kernel: PCI/MSI: ITS@0x8080000 domain created
Aug 5 21:51:01.943004 kernel: Remapping and enabling EFI services.
Aug 5 21:51:01.943011 kernel: smp: Bringing up secondary CPUs ...
Aug 5 21:51:01.943017 kernel: Detected PIPT I-cache on CPU1
Aug 5 21:51:01.943024 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 5 21:51:01.943031 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Aug 5 21:51:01.943038 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 21:51:01.943045 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 5 21:51:01.943052 kernel: Detected PIPT I-cache on CPU2
Aug 5 21:51:01.943058 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 5 21:51:01.943065 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Aug 5 21:51:01.943074 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 21:51:01.943080 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 5 21:51:01.943093 kernel: Detected PIPT I-cache on CPU3
Aug 5 21:51:01.943101 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 5 21:51:01.943108 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Aug 5 21:51:01.943116 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 21:51:01.943123 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 5 21:51:01.943130 kernel: smp: Brought up 1 node, 4 CPUs
Aug 5 21:51:01.943137 kernel: SMP: Total of 4 processors activated.
Aug 5 21:51:01.943146 kernel: CPU features: detected: 32-bit EL0 Support
Aug 5 21:51:01.943153 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 5 21:51:01.943160 kernel: CPU features: detected: Common not Private translations
Aug 5 21:51:01.943167 kernel: CPU features: detected: CRC32 instructions
Aug 5 21:51:01.943175 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 5 21:51:01.943182 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 5 21:51:01.943189 kernel: CPU features: detected: LSE atomic instructions
Aug 5 21:51:01.943197 kernel: CPU features: detected: Privileged Access Never
Aug 5 21:51:01.943205 kernel: CPU features: detected: RAS Extension Support
Aug 5 21:51:01.943213 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 5 21:51:01.943220 kernel: CPU: All CPU(s) started at EL1
Aug 5 21:51:01.943227 kernel: alternatives: applying system-wide alternatives
Aug 5 21:51:01.943234 kernel: devtmpfs: initialized
Aug 5 21:51:01.943242 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 5 21:51:01.943249 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 5 21:51:01.943257 kernel: pinctrl core: initialized pinctrl subsystem
Aug 5 21:51:01.943264 kernel: SMBIOS 3.0.0 present.
Aug 5 21:51:01.943272 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Aug 5 21:51:01.943280 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 5 21:51:01.943293 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 5 21:51:01.943302 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 5 21:51:01.943309 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 5 21:51:01.943316 kernel: audit: initializing netlink subsys (disabled)
Aug 5 21:51:01.943324 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Aug 5 21:51:01.943331 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 5 21:51:01.943339 kernel: cpuidle: using governor menu
Aug 5 21:51:01.943348 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 5 21:51:01.943356 kernel: ASID allocator initialised with 32768 entries
Aug 5 21:51:01.943363 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 5 21:51:01.943370 kernel: Serial: AMBA PL011 UART driver
Aug 5 21:51:01.943378 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 5 21:51:01.943385 kernel: Modules: 0 pages in range for non-PLT usage
Aug 5 21:51:01.943393 kernel: Modules: 509120 pages in range for PLT usage
Aug 5 21:51:01.943401 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 5 21:51:01.943408 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 5 21:51:01.943417 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 5 21:51:01.943424 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 5 21:51:01.943432 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 5 21:51:01.943439 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 5 21:51:01.943446 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 5 21:51:01.943454 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 5 21:51:01.943461 kernel: ACPI: Added _OSI(Module Device)
Aug 5 21:51:01.943468 kernel: ACPI: Added _OSI(Processor Device)
Aug 5 21:51:01.943476 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Aug 5 21:51:01.943484 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 5 21:51:01.943492 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 5 21:51:01.943499 kernel: ACPI: Interpreter enabled
Aug 5 21:51:01.943506 kernel: ACPI: Using GIC for interrupt routing
Aug 5 21:51:01.943513 kernel: ACPI: MCFG table detected, 1 entries
Aug 5 21:51:01.943521 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 5 21:51:01.943528 kernel: printk: console [ttyAMA0] enabled
Aug 5 21:51:01.943535 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 5 21:51:01.943670 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 5 21:51:01.943771 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 5 21:51:01.943842 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 5 21:51:01.943907 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 5 21:51:01.943970 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 5 21:51:01.943980 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 5 21:51:01.943988 kernel: PCI host bridge to bus 0000:00
Aug 5 21:51:01.944059 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 5 21:51:01.944123 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 5 21:51:01.944182 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 5 21:51:01.944245 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 5 21:51:01.944335 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Aug 5 21:51:01.944415 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Aug 5 21:51:01.944486 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Aug 5 21:51:01.944567 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Aug 5 21:51:01.944647 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 5 21:51:01.944723 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 5 21:51:01.944821 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Aug 5 21:51:01.944904 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Aug 5 21:51:01.944967 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 5 21:51:01.945027 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 5 21:51:01.945091 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 5 21:51:01.945101 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 5 21:51:01.945109 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 5 21:51:01.945117 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 5 21:51:01.945124 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 5 21:51:01.945132 kernel: iommu: Default domain type: Translated
Aug 5 21:51:01.945139 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 5 21:51:01.945146 kernel: efivars: Registered efivars operations
Aug 5 21:51:01.945154 kernel: vgaarb: loaded
Aug 5 21:51:01.945163 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 5 21:51:01.945170 kernel: VFS: Disk quotas dquot_6.6.0
Aug 5 21:51:01.945178 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 5 21:51:01.945185 kernel: pnp: PnP ACPI init
Aug 5 21:51:01.945260 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 5 21:51:01.945271 kernel: pnp: PnP ACPI: found 1 devices
Aug 5 21:51:01.945279 kernel: NET: Registered PF_INET protocol family
Aug 5 21:51:01.945294 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 5 21:51:01.945304 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 5 21:51:01.945311 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 5 21:51:01.945319 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 5 21:51:01.945326 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 5 21:51:01.945334 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 5 21:51:01.945341 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 21:51:01.945348 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 21:51:01.945356 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 5 21:51:01.945363 kernel: PCI: CLS 0 bytes, default 64
Aug 5 21:51:01.945372 kernel: kvm [1]: HYP mode not available
Aug 5 21:51:01.945379 kernel: Initialise system trusted keyrings
Aug 5 21:51:01.945386 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 5 21:51:01.945394 kernel: Key type asymmetric registered
Aug 5 21:51:01.945401 kernel: Asymmetric key parser 'x509' registered
Aug 5 21:51:01.945408 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 5 21:51:01.945415 kernel: io scheduler mq-deadline registered
Aug 5 21:51:01.945422 kernel: io scheduler kyber registered
Aug 5 21:51:01.945429 kernel: io scheduler bfq registered
Aug 5 21:51:01.945438 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 5 21:51:01.945445 kernel: ACPI: button: Power Button [PWRB]
Aug 5 21:51:01.945453 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 5 21:51:01.945523 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 5 21:51:01.945534 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 5 21:51:01.945541 kernel: thunder_xcv, ver 1.0
Aug 5 21:51:01.945548 kernel: thunder_bgx, ver 1.0
Aug 5 21:51:01.945556 kernel: nicpf, ver 1.0
Aug 5 21:51:01.945563 kernel: nicvf, ver 1.0
Aug 5 21:51:01.945639 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 5 21:51:01.945704 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-08-05T21:51:01 UTC (1722894661)
Aug 5 21:51:01.945714 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 5 21:51:01.945721 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Aug 5 21:51:01.945729 kernel: watchdog: Delayed init of the lockup detector failed: -19
Aug 5 21:51:01.945736 kernel: watchdog: Hard watchdog permanently disabled
Aug 5 21:51:01.945762 kernel: NET: Registered PF_INET6 protocol family
Aug 5 21:51:01.945770 kernel: Segment Routing with IPv6
Aug 5 21:51:01.945780 kernel: In-situ OAM (IOAM) with IPv6
Aug 5 21:51:01.945787 kernel: NET: Registered PF_PACKET protocol family
Aug 5 21:51:01.945795 kernel: Key type dns_resolver registered
Aug 5 21:51:01.945802 kernel: registered taskstats version 1
Aug 5 21:51:01.945809 kernel: Loading compiled-in X.509 certificates
Aug 5 21:51:01.945816 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: 7b6de7a842f23ac7c1bb6bedfb9546933daaea09'
Aug 5 21:51:01.945824 kernel: Key type .fscrypt registered
Aug 5 21:51:01.945831 kernel: Key type fscrypt-provisioning registered
Aug 5 21:51:01.945838 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 5 21:51:01.945847 kernel: ima: Allocated hash algorithm: sha1
Aug 5 21:51:01.945854 kernel: ima: No architecture policies found
Aug 5 21:51:01.945861 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 5 21:51:01.945869 kernel: clk: Disabling unused clocks
Aug 5 21:51:01.945876 kernel: Freeing unused kernel memory: 39040K
Aug 5 21:51:01.945883 kernel: Run /init as init process
Aug 5 21:51:01.945890 kernel: with arguments:
Aug 5 21:51:01.945897 kernel: /init
Aug 5 21:51:01.945904 kernel: with environment:
Aug 5 21:51:01.945913 kernel: HOME=/
Aug 5 21:51:01.945921 kernel: TERM=linux
Aug 5 21:51:01.945928 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 5 21:51:01.945937 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 21:51:01.945947 systemd[1]: Detected virtualization kvm.
Aug 5 21:51:01.945955 systemd[1]: Detected architecture arm64.
Aug 5 21:51:01.945962 systemd[1]: Running in initrd.
Aug 5 21:51:01.945972 systemd[1]: No hostname configured, using default hostname.
Aug 5 21:51:01.945980 systemd[1]: Hostname set to .
Aug 5 21:51:01.945988 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 21:51:01.945996 systemd[1]: Queued start job for default target initrd.target.
Aug 5 21:51:01.946004 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 21:51:01.946012 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 21:51:01.946021 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 5 21:51:01.946029 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 21:51:01.946039 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 5 21:51:01.946047 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 5 21:51:01.946057 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 5 21:51:01.946065 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 5 21:51:01.946073 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 21:51:01.946081 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 21:51:01.946089 systemd[1]: Reached target paths.target - Path Units.
Aug 5 21:51:01.946098 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 21:51:01.946106 systemd[1]: Reached target swap.target - Swaps.
Aug 5 21:51:01.946114 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 21:51:01.946122 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 21:51:01.946130 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 21:51:01.946138 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 5 21:51:01.946145 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 5 21:51:01.946153 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 21:51:01.946161 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 21:51:01.946170 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 21:51:01.946178 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 21:51:01.946187 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 5 21:51:01.946195 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 21:51:01.946202 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 5 21:51:01.946210 systemd[1]: Starting systemd-fsck-usr.service...
Aug 5 21:51:01.946218 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 21:51:01.946226 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 21:51:01.946235 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 21:51:01.946243 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 5 21:51:01.946251 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 21:51:01.946259 systemd[1]: Finished systemd-fsck-usr.service.
Aug 5 21:51:01.946268 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 21:51:01.946277 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:51:01.946311 systemd-journald[238]: Collecting audit messages is disabled.
Aug 5 21:51:01.946331 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 21:51:01.946339 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 5 21:51:01.946350 systemd-journald[238]: Journal started
Aug 5 21:51:01.946368 systemd-journald[238]: Runtime Journal (/run/log/journal/26cf5753a167486e9776d4e98a933c49) is 5.9M, max 47.3M, 41.4M free.
Aug 5 21:51:01.932759 systemd-modules-load[239]: Inserted module 'overlay'
Aug 5 21:51:01.948221 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 21:51:01.948242 kernel: Bridge firewalling registered
Aug 5 21:51:01.949511 systemd-modules-load[239]: Inserted module 'br_netfilter'
Aug 5 21:51:01.950509 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 21:51:01.958917 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 21:51:01.960577 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 21:51:01.962442 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 21:51:01.965705 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 21:51:01.973540 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 21:51:01.975089 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 21:51:01.977780 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 21:51:01.979070 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 21:51:01.988875 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 5 21:51:01.991111 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 21:51:01.998805 dracut-cmdline[275]: dracut-dracut-053
Aug 5 21:51:02.001279 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=bb6c4f94d40caa6d83ad7b7b3f8907e11ce677871c150228b9a5377ddab3341e
Aug 5 21:51:02.017818 systemd-resolved[277]: Positive Trust Anchors:
Aug 5 21:51:02.017835 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 21:51:02.017866 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 21:51:02.022351 systemd-resolved[277]: Defaulting to hostname 'linux'.
Aug 5 21:51:02.023292 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 21:51:02.026816 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 21:51:02.066773 kernel: SCSI subsystem initialized
Aug 5 21:51:02.071760 kernel: Loading iSCSI transport class v2.0-870.
Aug 5 21:51:02.078768 kernel: iscsi: registered transport (tcp)
Aug 5 21:51:02.092010 kernel: iscsi: registered transport (qla4xxx)
Aug 5 21:51:02.092069 kernel: QLogic iSCSI HBA Driver
Aug 5 21:51:02.136597 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 5 21:51:02.147901 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 5 21:51:02.165585 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 5 21:51:02.165638 kernel: device-mapper: uevent: version 1.0.3
Aug 5 21:51:02.165650 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 5 21:51:02.215777 kernel: raid6: neonx8 gen() 15770 MB/s
Aug 5 21:51:02.232782 kernel: raid6: neonx4 gen() 15588 MB/s
Aug 5 21:51:02.249762 kernel: raid6: neonx2 gen() 13209 MB/s
Aug 5 21:51:02.266766 kernel: raid6: neonx1 gen() 10437 MB/s
Aug 5 21:51:02.283761 kernel: raid6: int64x8 gen() 6936 MB/s
Aug 5 21:51:02.300763 kernel: raid6: int64x4 gen() 7337 MB/s
Aug 5 21:51:02.317761 kernel: raid6: int64x2 gen() 6120 MB/s
Aug 5 21:51:02.334903 kernel: raid6: int64x1 gen() 5050 MB/s
Aug 5 21:51:02.334918 kernel: raid6: using algorithm neonx8 gen() 15770 MB/s
Aug 5 21:51:02.352820 kernel: raid6: .... xor() 11912 MB/s, rmw enabled
Aug 5 21:51:02.352840 kernel: raid6: using neon recovery algorithm
Aug 5 21:51:02.360769 kernel: xor: measuring software checksum speed
Aug 5 21:51:02.361757 kernel: 8regs : 19854 MB/sec
Aug 5 21:51:02.363158 kernel: 32regs : 19654 MB/sec
Aug 5 21:51:02.363175 kernel: arm64_neon : 27234 MB/sec
Aug 5 21:51:02.363185 kernel: xor: using function: arm64_neon (27234 MB/sec)
Aug 5 21:51:02.417770 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 5 21:51:02.434058 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 21:51:02.452429 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 21:51:02.479982 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Aug 5 21:51:02.483301 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 21:51:02.500946 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 5 21:51:02.513773 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
Aug 5 21:51:02.541995 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 21:51:02.549937 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 21:51:02.592274 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 21:51:02.606938 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 5 21:51:02.621519 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 5 21:51:02.623209 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 21:51:02.625969 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 21:51:02.627051 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 21:51:02.635920 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 5 21:51:02.644119 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 21:51:02.650772 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Aug 5 21:51:02.656075 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 5 21:51:02.656178 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 5 21:51:02.656190 kernel: GPT:9289727 != 19775487
Aug 5 21:51:02.656199 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 5 21:51:02.656209 kernel: GPT:9289727 != 19775487
Aug 5 21:51:02.656218 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 5 21:51:02.656228 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 21:51:02.651888 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 21:51:02.652046 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 21:51:02.653565 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 21:51:02.654753 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 21:51:02.654891 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:51:02.659514 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 21:51:02.673117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 21:51:02.684773 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (512)
Aug 5 21:51:02.687773 kernel: BTRFS: device fsid 8a9ab799-ab52-4671-9234-72d7c6e57b99 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (518)
Aug 5 21:51:02.688734 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:51:02.694600 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 5 21:51:02.702790 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 5 21:51:02.710836 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 5 21:51:02.715107 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 5 21:51:02.716385 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 5 21:51:02.729909 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 5 21:51:02.731794 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 21:51:02.739066 disk-uuid[550]: Primary Header is updated.
Aug 5 21:51:02.739066 disk-uuid[550]: Secondary Entries is updated.
Aug 5 21:51:02.739066 disk-uuid[550]: Secondary Header is updated.
Aug 5 21:51:02.744777 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 21:51:02.761242 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 21:51:03.763388 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 21:51:03.763477 disk-uuid[551]: The operation has completed successfully.
Aug 5 21:51:03.784414 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 5 21:51:03.784509 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 5 21:51:03.808895 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 5 21:51:03.811938 sh[574]: Success
Aug 5 21:51:03.825331 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Aug 5 21:51:03.866246 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 5 21:51:03.868068 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 5 21:51:03.868971 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 5 21:51:03.880653 kernel: BTRFS info (device dm-0): first mount of filesystem 8a9ab799-ab52-4671-9234-72d7c6e57b99
Aug 5 21:51:03.880705 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 5 21:51:03.880716 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 5 21:51:03.881511 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 5 21:51:03.882155 kernel: BTRFS info (device dm-0): using free space tree
Aug 5 21:51:03.885472 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 5 21:51:03.886809 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 5 21:51:03.893893 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 5 21:51:03.895427 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 5 21:51:03.904825 kernel: BTRFS info (device vda6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:51:03.904865 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 5 21:51:03.904876 kernel: BTRFS info (device vda6): using free space tree
Aug 5 21:51:03.908022 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 21:51:03.916256 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 5 21:51:03.918026 kernel: BTRFS info (device vda6): last unmount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:51:03.924196 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 5 21:51:03.930955 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 5 21:51:03.986506 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 21:51:04.004729 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 21:51:04.034712 systemd-networkd[760]: lo: Link UP
Aug 5 21:51:04.034724 systemd-networkd[760]: lo: Gained carrier
Aug 5 21:51:04.035429 systemd-networkd[760]: Enumeration completed
Aug 5 21:51:04.035947 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 21:51:04.035950 systemd-networkd[760]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 21:51:04.037239 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 21:51:04.038505 systemd[1]: Reached target network.target - Network.
Aug 5 21:51:04.038865 systemd-networkd[760]: eth0: Link UP
Aug 5 21:51:04.038869 systemd-networkd[760]: eth0: Gained carrier
Aug 5 21:51:04.038877 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 21:51:04.054794 systemd-networkd[760]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 5 21:51:04.061875 ignition[669]: Ignition 2.19.0
Aug 5 21:51:04.061887 ignition[669]: Stage: fetch-offline
Aug 5 21:51:04.061923 ignition[669]: no configs at "/usr/lib/ignition/base.d"
Aug 5 21:51:04.061932 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:51:04.062025 ignition[669]: parsed url from cmdline: ""
Aug 5 21:51:04.062028 ignition[669]: no config URL provided
Aug 5 21:51:04.062033 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
Aug 5 21:51:04.062040 ignition[669]: no config at "/usr/lib/ignition/user.ign"
Aug 5 21:51:04.062064 ignition[669]: op(1): [started] loading QEMU firmware config module
Aug 5 21:51:04.062070 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 5 21:51:04.073168 ignition[669]: op(1): [finished] loading QEMU firmware config module
Aug 5 21:51:04.108997 ignition[669]: parsing config with SHA512: 0e9025c4e8e2e6688cad0a57d2d385d792c01072950757202b1ff917be6fac5ea41cad383472749524c5ae7249f5c3298e13cde704902e30b03217c1741bf793
Aug 5 21:51:04.113242 unknown[669]: fetched base config from "system"
Aug 5 21:51:04.113253 unknown[669]: fetched user config from "qemu"
Aug 5 21:51:04.113784 ignition[669]: fetch-offline: fetch-offline passed
Aug 5 21:51:04.115568 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 21:51:04.113843 ignition[669]: Ignition finished successfully
Aug 5 21:51:04.117520 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 5 21:51:04.121892 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 5 21:51:04.133022 ignition[771]: Ignition 2.19.0
Aug 5 21:51:04.133032 ignition[771]: Stage: kargs
Aug 5 21:51:04.133207 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Aug 5 21:51:04.133216 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:51:04.134128 ignition[771]: kargs: kargs passed
Aug 5 21:51:04.136685 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 5 21:51:04.134180 ignition[771]: Ignition finished successfully
Aug 5 21:51:04.149912 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 5 21:51:04.160143 ignition[780]: Ignition 2.19.0
Aug 5 21:51:04.160153 ignition[780]: Stage: disks
Aug 5 21:51:04.160328 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Aug 5 21:51:04.160338 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:51:04.162921 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 5 21:51:04.161396 ignition[780]: disks: disks passed
Aug 5 21:51:04.164379 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 5 21:51:04.161450 ignition[780]: Ignition finished successfully
Aug 5 21:51:04.165886 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 5 21:51:04.167394 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 21:51:04.169185 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 21:51:04.170661 systemd[1]: Reached target basic.target - Basic System.
Aug 5 21:51:04.188949 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 5 21:51:04.203482 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 5 21:51:04.207389 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 5 21:51:04.210397 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 5 21:51:04.260763 kernel: EXT4-fs (vda9): mounted filesystem ec701988-3dff-4e7d-a2a2-79d78965de5d r/w with ordered data mode. Quota mode: none.
Aug 5 21:51:04.260921 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 5 21:51:04.262212 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 5 21:51:04.276849 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 21:51:04.278648 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 5 21:51:04.280699 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 5 21:51:04.280778 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 5 21:51:04.280805 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 21:51:04.286763 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 5 21:51:04.289037 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 5 21:51:04.294229 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799)
Aug 5 21:51:04.294259 kernel: BTRFS info (device vda6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:51:04.294269 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 5 21:51:04.295963 kernel: BTRFS info (device vda6): using free space tree
Aug 5 21:51:04.300768 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 21:51:04.311012 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 21:51:04.356242 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Aug 5 21:51:04.360900 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Aug 5 21:51:04.364678 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Aug 5 21:51:04.367774 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 5 21:51:04.451189 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 5 21:51:04.462846 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 5 21:51:04.464498 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 5 21:51:04.473778 kernel: BTRFS info (device vda6): last unmount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:51:04.493083 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 5 21:51:04.496869 ignition[913]: INFO : Ignition 2.19.0
Aug 5 21:51:04.496869 ignition[913]: INFO : Stage: mount
Aug 5 21:51:04.499284 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 21:51:04.499284 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:51:04.499284 ignition[913]: INFO : mount: mount passed
Aug 5 21:51:04.499284 ignition[913]: INFO : Ignition finished successfully
Aug 5 21:51:04.500166 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 5 21:51:04.511870 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 5 21:51:04.878732 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 5 21:51:04.895968 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 21:51:04.903799 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (925)
Aug 5 21:51:04.903840 kernel: BTRFS info (device vda6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:51:04.903851 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 5 21:51:04.905763 kernel: BTRFS info (device vda6): using free space tree
Aug 5 21:51:04.907771 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 21:51:04.909212 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 21:51:04.937797 ignition[943]: INFO : Ignition 2.19.0
Aug 5 21:51:04.937797 ignition[943]: INFO : Stage: files
Aug 5 21:51:04.939338 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 21:51:04.939338 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:51:04.939338 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Aug 5 21:51:04.946196 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 5 21:51:04.946196 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 5 21:51:04.946196 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 5 21:51:04.946196 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 5 21:51:04.946196 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 5 21:51:04.946196 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 5 21:51:04.946196 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Aug 5 21:51:04.944750 unknown[943]: wrote ssh authorized keys file for user: core
Aug 5 21:51:04.988127 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 5 21:51:05.024901 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 5 21:51:05.024901 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 5 21:51:05.029202 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Aug 5 21:51:05.295602 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 5 21:51:05.377617 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 5 21:51:05.379492 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 5 21:51:05.379492 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 5 21:51:05.379492 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 21:51:05.379492 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 21:51:05.379492 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 21:51:05.379492 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 21:51:05.379492 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 21:51:05.379492 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 21:51:05.379492 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 21:51:05.379492 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 21:51:05.379492 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Aug 5 21:51:05.379492 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Aug 5 21:51:05.379492 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Aug 5 21:51:05.379492 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1
Aug 5 21:51:05.481895 systemd-networkd[760]: eth0: Gained IPv6LL
Aug 5 21:51:05.611547 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 5 21:51:05.861789 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Aug 5 21:51:05.861789 ignition[943]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Aug 5 21:51:05.865229 ignition[943]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 21:51:05.866909 ignition[943]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 21:51:05.866909 ignition[943]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Aug 5 21:51:05.866909 ignition[943]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Aug 5 21:51:05.866909 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 5 21:51:05.866909 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 5 21:51:05.866909 ignition[943]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Aug 5 21:51:05.866909 ignition[943]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Aug 5 21:51:05.902545 ignition[943]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 5 21:51:05.906343 ignition[943]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 5 21:51:05.907823 ignition[943]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 5 21:51:05.907823 ignition[943]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Aug 5 21:51:05.907823 ignition[943]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Aug 5 21:51:05.907823 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 21:51:05.907823 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 21:51:05.907823 ignition[943]: INFO : files: files passed
Aug 5 21:51:05.907823 ignition[943]: INFO : Ignition finished successfully
Aug 5 21:51:05.909339 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 5 21:51:05.919968 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 5 21:51:05.922327 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 5 21:51:05.926642 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 5 21:51:05.926790 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 5 21:51:05.931763 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 5 21:51:05.935089 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 21:51:05.935089 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 21:51:05.937989 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 21:51:05.940682 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 21:51:05.942076 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 5 21:51:05.949988 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 5 21:51:05.987009 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 5 21:51:05.987129 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 5 21:51:05.989188 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 5 21:51:05.990896 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 5 21:51:05.992599 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 5 21:51:05.993420 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 5 21:51:06.009196 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 21:51:06.014893 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 5 21:51:06.023998 systemd[1]: Stopped target network.target - Network. Aug 5 21:51:06.024935 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 5 21:51:06.026674 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 21:51:06.028645 systemd[1]: Stopped target timers.target - Timer Units. Aug 5 21:51:06.030353 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 5 21:51:06.030476 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 21:51:06.032731 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 5 21:51:06.034768 systemd[1]: Stopped target basic.target - Basic System. Aug 5 21:51:06.036347 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 5 21:51:06.037912 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 21:51:06.039694 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 5 21:51:06.041583 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 5 21:51:06.043308 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 21:51:06.045096 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 5 21:51:06.046996 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 5 21:51:06.048639 systemd[1]: Stopped target swap.target - Swaps. Aug 5 21:51:06.050091 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 5 21:51:06.050220 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 5 21:51:06.052460 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 5 21:51:06.054335 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 21:51:06.056521 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 5 21:51:06.059827 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 21:51:06.061033 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 5 21:51:06.061162 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 5 21:51:06.063867 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 5 21:51:06.063991 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 21:51:06.065965 systemd[1]: Stopped target paths.target - Path Units. Aug 5 21:51:06.067537 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 5 21:51:06.070837 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 21:51:06.072057 systemd[1]: Stopped target slices.target - Slice Units. Aug 5 21:51:06.074069 systemd[1]: Stopped target sockets.target - Socket Units. Aug 5 21:51:06.075573 systemd[1]: iscsid.socket: Deactivated successfully. Aug 5 21:51:06.075665 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 21:51:06.077142 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 5 21:51:06.077217 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 21:51:06.078637 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Aug 5 21:51:06.078767 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 21:51:06.080452 systemd[1]: ignition-files.service: Deactivated successfully. Aug 5 21:51:06.080545 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 5 21:51:06.094916 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 5 21:51:06.096484 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 5 21:51:06.097516 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 5 21:51:06.099251 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 5 21:51:06.100188 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 5 21:51:06.100328 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 21:51:06.101507 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 5 21:51:06.108960 ignition[997]: INFO : Ignition 2.19.0 Aug 5 21:51:06.108960 ignition[997]: INFO : Stage: umount Aug 5 21:51:06.108960 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 21:51:06.108960 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 5 21:51:06.108960 ignition[997]: INFO : umount: umount passed Aug 5 21:51:06.101612 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 21:51:06.119622 ignition[997]: INFO : Ignition finished successfully Aug 5 21:51:06.108805 systemd-networkd[760]: eth0: DHCPv6 lease lost Aug 5 21:51:06.112131 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 5 21:51:06.112653 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 5 21:51:06.112735 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 5 21:51:06.114194 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 5 21:51:06.114287 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 5 21:51:06.117192 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 5 21:51:06.117288 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 5 21:51:06.119526 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 5 21:51:06.119636 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 5 21:51:06.124161 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 5 21:51:06.124199 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 5 21:51:06.128501 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 5 21:51:06.128564 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 5 21:51:06.130197 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 5 21:51:06.130246 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 5 21:51:06.131828 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 5 21:51:06.131869 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 5 21:51:06.133436 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 5 21:51:06.133480 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 5 21:51:06.142918 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 5 21:51:06.143759 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 5 21:51:06.143819 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Aug 5 21:51:06.145687 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 5 21:51:06.145731 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 5 21:51:06.147601 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 5 21:51:06.147645 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 5 21:51:06.149285 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 5 21:51:06.149326 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 21:51:06.151133 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 21:51:06.162393 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 5 21:51:06.162524 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 21:51:06.163888 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 5 21:51:06.163963 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 5 21:51:06.165785 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 5 21:51:06.165853 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 5 21:51:06.166881 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 5 21:51:06.166912 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 21:51:06.168759 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 5 21:51:06.168805 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 5 21:51:06.171800 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 5 21:51:06.171841 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 5 21:51:06.173571 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 21:51:06.173615 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 21:51:06.181868 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 5 21:51:06.183034 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 5 21:51:06.183088 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 21:51:06.185185 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 5 21:51:06.185229 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 21:51:06.187166 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 5 21:51:06.187208 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 21:51:06.189070 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 21:51:06.189117 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:51:06.191660 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 5 21:51:06.191765 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 5 21:51:06.276804 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 5 21:51:06.276938 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 5 21:51:06.279049 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 5 21:51:06.280176 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 5 21:51:06.280240 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Aug 5 21:51:06.293881 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 5 21:51:06.299840 systemd[1]: Switching root. Aug 5 21:51:06.318895 systemd-journald[238]: Journal stopped Aug 5 21:51:07.138248 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Aug 5 21:51:07.138308 kernel: SELinux: policy capability network_peer_controls=1 Aug 5 21:51:07.138322 kernel: SELinux: policy capability open_perms=1 Aug 5 21:51:07.138332 kernel: SELinux: policy capability extended_socket_class=1 Aug 5 21:51:07.138343 kernel: SELinux: policy capability always_check_network=0 Aug 5 21:51:07.138360 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 5 21:51:07.138370 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 5 21:51:07.138380 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 5 21:51:07.138390 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 5 21:51:07.138402 kernel: audit: type=1403 audit(1722894666.535:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 5 21:51:07.138414 systemd[1]: Successfully loaded SELinux policy in 34.304ms. Aug 5 21:51:07.138431 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.704ms. Aug 5 21:51:07.138444 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 21:51:07.138455 systemd[1]: Detected virtualization kvm. Aug 5 21:51:07.138468 systemd[1]: Detected architecture arm64. Aug 5 21:51:07.138478 systemd[1]: Detected first boot. Aug 5 21:51:07.138489 systemd[1]: Initializing machine ID from VM UUID. Aug 5 21:51:07.138500 zram_generator::config[1040]: No configuration found. Aug 5 21:51:07.138512 systemd[1]: Populated /etc with preset unit settings. Aug 5 21:51:07.138522 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 5 21:51:07.138533 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 5 21:51:07.138543 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 5 21:51:07.138557 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 5 21:51:07.138568 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 5 21:51:07.138585 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 5 21:51:07.138596 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 5 21:51:07.138607 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 5 21:51:07.138619 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 5 21:51:07.138629 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 5 21:51:07.138640 systemd[1]: Created slice user.slice - User and Session Slice. Aug 5 21:51:07.138652 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 21:51:07.138665 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 21:51:07.138676 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 5 21:51:07.138688 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
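The "systemd 255 running in system mode (+PAM +AUDIT ... default-hierarchy=unified)" line above encodes compile-time features as +NAME (built in) and -NAME (omitted) tokens. A small illustrative Python sketch, not a systemd tool, that splits such a banner into enabled and disabled sets; the example string below is an abbreviated copy of the log line:

import re

def parse_systemd_features(banner):
    """Split systemd's '(+PAM +AUDIT -APPARMOR ...)' banner into feature sets."""
    inside = re.search(r'\((.*)\)', banner).group(1)
    enabled, disabled, settings = set(), set(), {}
    for token in inside.split():
        if token.startswith('+'):
            enabled.add(token[1:])
        elif token.startswith('-'):
            disabled.add(token[1:])
        elif '=' in token:                     # e.g. default-hierarchy=unified
            key, value = token.split('=', 1)
            settings[key] = value
    return enabled, disabled, settings

banner = ("systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR "
          "+IMA +SECCOMP +TPM2 -BPF_FRAMEWORK default-hierarchy=unified)")
enabled, disabled, settings = parse_systemd_features(banner)
print(sorted(disabled))   # ['APPARMOR', 'BPF_FRAMEWORK']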
Aug 5 21:51:07.138699 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 5 21:51:07.138710 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 5 21:51:07.138721 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Aug 5 21:51:07.138732 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 21:51:07.138754 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 5 21:51:07.138765 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 5 21:51:07.138779 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 5 21:51:07.138790 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 5 21:51:07.138801 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 21:51:07.138812 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 21:51:07.138823 systemd[1]: Reached target slices.target - Slice Units. Aug 5 21:51:07.138834 systemd[1]: Reached target swap.target - Swaps. Aug 5 21:51:07.138845 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 5 21:51:07.138855 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 5 21:51:07.138868 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 5 21:51:07.138879 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 21:51:07.138890 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 21:51:07.138902 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 5 21:51:07.138913 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 5 21:51:07.138924 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 5 21:51:07.138934 systemd[1]: Mounting media.mount - External Media Directory... Aug 5 21:51:07.138945 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 5 21:51:07.138956 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 5 21:51:07.138969 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 5 21:51:07.138984 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 5 21:51:07.138995 systemd[1]: Reached target machines.target - Containers. Aug 5 21:51:07.139005 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 5 21:51:07.139016 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 21:51:07.139027 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 21:51:07.139037 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 5 21:51:07.139048 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 21:51:07.139061 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 5 21:51:07.139072 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 21:51:07.139083 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Aug 5 21:51:07.139093 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 21:51:07.139105 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 5 21:51:07.139116 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 5 21:51:07.139127 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 5 21:51:07.139143 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 5 21:51:07.139155 kernel: fuse: init (API version 7.39) Aug 5 21:51:07.139166 systemd[1]: Stopped systemd-fsck-usr.service. Aug 5 21:51:07.139176 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 21:51:07.139187 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 21:51:07.139199 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 5 21:51:07.139209 kernel: loop: module loaded Aug 5 21:51:07.139220 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 5 21:51:07.139231 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 21:51:07.139241 kernel: ACPI: bus type drm_connector registered Aug 5 21:51:07.139251 systemd[1]: verity-setup.service: Deactivated successfully. Aug 5 21:51:07.139271 systemd[1]: Stopped verity-setup.service. Aug 5 21:51:07.139304 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 5 21:51:07.139316 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 5 21:51:07.139329 systemd[1]: Mounted media.mount - External Media Directory. Aug 5 21:51:07.139340 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 5 21:51:07.139368 systemd-journald[1106]: Collecting audit messages is disabled. Aug 5 21:51:07.139402 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 5 21:51:07.139422 systemd-journald[1106]: Journal started Aug 5 21:51:07.139447 systemd-journald[1106]: Runtime Journal (/run/log/journal/26cf5753a167486e9776d4e98a933c49) is 5.9M, max 47.3M, 41.4M free. Aug 5 21:51:06.936309 systemd[1]: Queued start job for default target multi-user.target. Aug 5 21:51:06.949516 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 5 21:51:06.949887 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 5 21:51:07.141047 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 5 21:51:07.144779 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 21:51:07.146772 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 5 21:51:07.148105 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 21:51:07.149541 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 5 21:51:07.149678 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 5 21:51:07.151117 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 21:51:07.151250 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 21:51:07.152527 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 21:51:07.152655 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 5 21:51:07.155106 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Aug 5 21:51:07.155246 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 21:51:07.156618 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 5 21:51:07.156911 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 5 21:51:07.158119 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 21:51:07.158263 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 21:51:07.159531 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 21:51:07.160948 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 5 21:51:07.162360 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 5 21:51:07.174455 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 5 21:51:07.181864 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 5 21:51:07.183835 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 5 21:51:07.184869 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 5 21:51:07.184911 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 21:51:07.186730 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 5 21:51:07.188836 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 5 21:51:07.190839 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 5 21:51:07.191881 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 21:51:07.193279 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 5 21:51:07.195195 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 5 21:51:07.196329 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 5 21:51:07.197184 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 5 21:51:07.198224 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 21:51:07.202932 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 21:51:07.207903 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 5 21:51:07.210556 systemd-journald[1106]: Time spent on flushing to /var/log/journal/26cf5753a167486e9776d4e98a933c49 is 18.578ms for 858 entries. Aug 5 21:51:07.210556 systemd-journald[1106]: System Journal (/var/log/journal/26cf5753a167486e9776d4e98a933c49) is 8.0M, max 195.6M, 187.6M free. Aug 5 21:51:07.243180 systemd-journald[1106]: Received client request to flush runtime journal. Aug 5 21:51:07.243224 kernel: loop0: detected capacity change from 0 to 193208 Aug 5 21:51:07.243238 kernel: block loop0: the capability attribute has been deprecated. Aug 5 21:51:07.243409 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 5 21:51:07.211860 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 5 21:51:07.218098 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
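The journal flush statistics above (18.578 ms for 858 entries) work out to roughly 20 microseconds per entry; the arithmetic, in plain Python, uses only numbers quoted in the log:

flush_ms = 18.578   # "Time spent on flushing ... is 18.578ms"
entries  = 858      # "... for 858 entries"
print(f"~{flush_ms * 1000 / entries:.1f} us per journal entry")   # ~21.7 us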
Aug 5 21:51:07.222196 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 5 21:51:07.225062 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 5 21:51:07.226782 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 5 21:51:07.228292 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 5 21:51:07.237530 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 21:51:07.242113 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 5 21:51:07.248930 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 5 21:51:07.251939 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 5 21:51:07.256221 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 5 21:51:07.262795 systemd-tmpfiles[1151]: ACLs are not supported, ignoring. Aug 5 21:51:07.262812 systemd-tmpfiles[1151]: ACLs are not supported, ignoring. Aug 5 21:51:07.267180 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 21:51:07.271949 kernel: loop1: detected capacity change from 0 to 59688 Aug 5 21:51:07.274168 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 5 21:51:07.280944 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 5 21:51:07.282918 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 5 21:51:07.283685 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 5 21:51:07.307562 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 5 21:51:07.314963 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 5 21:51:07.317758 kernel: loop2: detected capacity change from 0 to 113712 Aug 5 21:51:07.332488 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Aug 5 21:51:07.332504 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Aug 5 21:51:07.336932 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 21:51:07.358862 kernel: loop3: detected capacity change from 0 to 193208 Aug 5 21:51:07.366606 kernel: loop4: detected capacity change from 0 to 59688 Aug 5 21:51:07.371119 kernel: loop5: detected capacity change from 0 to 113712 Aug 5 21:51:07.374448 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 5 21:51:07.374850 (sd-merge)[1179]: Merged extensions into '/usr'. Aug 5 21:51:07.378193 systemd[1]: Reloading requested from client PID 1150 ('systemd-sysext') (unit systemd-sysext.service)... Aug 5 21:51:07.378211 systemd[1]: Reloading... Aug 5 21:51:07.419177 zram_generator::config[1201]: No configuration found. Aug 5 21:51:07.507134 ldconfig[1145]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 5 21:51:07.531695 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 21:51:07.571360 systemd[1]: Reloading finished in 192 ms. Aug 5 21:51:07.599172 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
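Around the sysext merge above, the kernel reports six loop devices but only three distinct sizes (193208, 59688 and 113712; the kernel prints this capacity in 512-byte sectors). An illustrative Python sketch, again assuming the journal text as plain lines, that groups the loop devices by reported capacity and makes that repetition visible:

import re
from collections import defaultdict

LOOP_RE = re.compile(r'kernel: (loop\d+): detected capacity change from 0 to (\d+)')

def loops_by_capacity(journal_lines):
    """Group 'loopN: detected capacity change' messages by the reported size."""
    by_size = defaultdict(list)
    for line in journal_lines:
        m = LOOP_RE.search(line)
        if m:
            by_size[int(m.group(2))].append(m.group(1))
    return dict(by_size)

# Fed the log above, this yields
# {193208: ['loop0', 'loop3'], 59688: ['loop1', 'loop4'], 113712: ['loop2', 'loop5']},
# i.e. the same three image sizes appear twice (loop0-2 and loop3-5).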
Aug 5 21:51:07.600731 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 5 21:51:07.613040 systemd[1]: Starting ensure-sysext.service... Aug 5 21:51:07.615146 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 5 21:51:07.626382 systemd[1]: Reloading requested from client PID 1237 ('systemctl') (unit ensure-sysext.service)... Aug 5 21:51:07.626396 systemd[1]: Reloading... Aug 5 21:51:07.662776 zram_generator::config[1263]: No configuration found. Aug 5 21:51:07.668440 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 5 21:51:07.668698 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 5 21:51:07.669409 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 5 21:51:07.669628 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Aug 5 21:51:07.669679 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Aug 5 21:51:07.672070 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot. Aug 5 21:51:07.672082 systemd-tmpfiles[1238]: Skipping /boot Aug 5 21:51:07.679466 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot. Aug 5 21:51:07.679482 systemd-tmpfiles[1238]: Skipping /boot Aug 5 21:51:07.748848 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 21:51:07.787992 systemd[1]: Reloading finished in 161 ms. Aug 5 21:51:07.805423 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 5 21:51:07.815162 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 21:51:07.822618 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 21:51:07.825456 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 5 21:51:07.827733 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 5 21:51:07.831015 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 21:51:07.838828 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 21:51:07.840992 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 5 21:51:07.844821 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 21:51:07.846419 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 21:51:07.849258 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 21:51:07.865201 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 21:51:07.866285 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 21:51:07.870548 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 5 21:51:07.872504 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 5 21:51:07.874161 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Aug 5 21:51:07.874313 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 21:51:07.876005 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 21:51:07.876528 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 21:51:07.878370 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 21:51:07.878492 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 21:51:07.880551 systemd-udevd[1305]: Using default interface naming scheme 'v255'. Aug 5 21:51:07.886152 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 21:51:07.892977 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 21:51:07.895705 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 21:51:07.901249 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 21:51:07.902306 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 21:51:07.904341 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 5 21:51:07.906380 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 5 21:51:07.908276 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 5 21:51:07.911313 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 21:51:07.911466 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 21:51:07.913283 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 21:51:07.913416 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 21:51:07.915232 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 21:51:07.915386 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 21:51:07.916939 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 21:51:07.924820 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 21:51:07.933080 augenrules[1346]: No rules Aug 5 21:51:07.936136 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 21:51:07.942326 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 5 21:51:07.945110 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 21:51:07.948815 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 21:51:07.949852 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 21:51:07.954233 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 21:51:07.955251 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 5 21:51:07.956050 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 5 21:51:07.959519 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 21:51:07.961190 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Aug 5 21:51:07.962612 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 21:51:07.962769 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 21:51:07.964528 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 21:51:07.964649 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 5 21:51:07.966938 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 21:51:07.967068 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 21:51:07.969798 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 21:51:07.969930 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 21:51:07.978795 systemd[1]: Finished ensure-sysext.service. Aug 5 21:51:07.982795 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1348) Aug 5 21:51:07.996321 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Aug 5 21:51:08.005760 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1336) Aug 5 21:51:08.006430 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 5 21:51:08.006493 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 21:51:08.017960 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 5 21:51:08.018515 systemd-resolved[1303]: Positive Trust Anchors: Aug 5 21:51:08.018534 systemd-resolved[1303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 21:51:08.018565 systemd-resolved[1303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 21:51:08.027783 systemd-resolved[1303]: Defaulting to hostname 'linux'. Aug 5 21:51:08.031541 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 21:51:08.032930 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 21:51:08.041435 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 5 21:51:08.049909 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 5 21:51:08.054681 systemd-networkd[1370]: lo: Link UP Aug 5 21:51:08.054694 systemd-networkd[1370]: lo: Gained carrier Aug 5 21:51:08.055696 systemd-networkd[1370]: Enumeration completed Aug 5 21:51:08.055812 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 21:51:08.057347 systemd[1]: Reached target network.target - Network. Aug 5 21:51:08.059424 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 5 21:51:08.063475 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Aug 5 21:51:08.063487 systemd-networkd[1370]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 21:51:08.064187 systemd-networkd[1370]: eth0: Link UP Aug 5 21:51:08.064195 systemd-networkd[1370]: eth0: Gained carrier Aug 5 21:51:08.064209 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 21:51:08.082820 systemd-networkd[1370]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 5 21:51:08.083820 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 5 21:51:08.084006 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Aug 5 21:51:08.085686 systemd-timesyncd[1384]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 5 21:51:08.085859 systemd-timesyncd[1384]: Initial clock synchronization to Mon 2024-08-05 21:51:07.979907 UTC. Aug 5 21:51:08.087630 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 5 21:51:08.089245 systemd[1]: Reached target time-set.target - System Time Set. Aug 5 21:51:08.118025 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 21:51:08.130138 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 5 21:51:08.132987 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 5 21:51:08.155957 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 5 21:51:08.160293 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:51:08.188275 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 5 21:51:08.189682 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 21:51:08.190772 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 21:51:08.191793 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 5 21:51:08.192889 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 5 21:51:08.194176 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 5 21:51:08.195314 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 5 21:51:08.196458 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 5 21:51:08.197725 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 5 21:51:08.197773 systemd[1]: Reached target paths.target - Path Units. Aug 5 21:51:08.198544 systemd[1]: Reached target timers.target - Timer Units. Aug 5 21:51:08.199973 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 5 21:51:08.202506 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 5 21:51:08.213763 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 5 21:51:08.215924 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 5 21:51:08.217399 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 5 21:51:08.218531 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 21:51:08.219449 systemd[1]: Reached target basic.target - Basic System. 
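The DHCPv4 lease logged above, 10.0.0.97/16 with gateway 10.0.0.1, can be sanity-checked with nothing beyond Python's standard ipaddress module; this is an illustrative aside, not something the boot process runs:

import ipaddress

# Values copied verbatim from the systemd-networkd message above.
lease   = ipaddress.ip_interface("10.0.0.97/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(lease.network)                 # 10.0.0.0/16
print(lease.network.num_addresses)   # 65536
print(gateway in lease.network)      # True: the gateway is on-link, as expected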
Aug 5 21:51:08.220344 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 5 21:51:08.220377 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 5 21:51:08.221399 systemd[1]: Starting containerd.service - containerd container runtime... Aug 5 21:51:08.223471 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 5 21:51:08.225900 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 5 21:51:08.227903 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 5 21:51:08.230071 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 5 21:51:08.230992 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 5 21:51:08.232078 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 5 21:51:08.236887 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 5 21:51:08.239537 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 5 21:51:08.245763 jq[1411]: false Aug 5 21:51:08.244945 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 5 21:51:08.251754 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 5 21:51:08.253371 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 5 21:51:08.254034 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 5 21:51:08.255116 systemd[1]: Starting update-engine.service - Update Engine... Aug 5 21:51:08.256712 dbus-daemon[1410]: [system] SELinux support is enabled Aug 5 21:51:08.259001 extend-filesystems[1412]: Found loop3 Aug 5 21:51:08.259901 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 5 21:51:08.260102 extend-filesystems[1412]: Found loop4 Aug 5 21:51:08.263504 extend-filesystems[1412]: Found loop5 Aug 5 21:51:08.263504 extend-filesystems[1412]: Found vda Aug 5 21:51:08.263504 extend-filesystems[1412]: Found vda1 Aug 5 21:51:08.263504 extend-filesystems[1412]: Found vda2 Aug 5 21:51:08.263504 extend-filesystems[1412]: Found vda3 Aug 5 21:51:08.263504 extend-filesystems[1412]: Found usr Aug 5 21:51:08.263504 extend-filesystems[1412]: Found vda4 Aug 5 21:51:08.263504 extend-filesystems[1412]: Found vda6 Aug 5 21:51:08.263504 extend-filesystems[1412]: Found vda7 Aug 5 21:51:08.263504 extend-filesystems[1412]: Found vda9 Aug 5 21:51:08.263504 extend-filesystems[1412]: Checking size of /dev/vda9 Aug 5 21:51:08.261485 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 5 21:51:08.280028 jq[1424]: true Aug 5 21:51:08.264319 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 5 21:51:08.278501 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 5 21:51:08.279364 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 5 21:51:08.279652 systemd[1]: motdgen.service: Deactivated successfully. Aug 5 21:51:08.279834 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Aug 5 21:51:08.282232 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 5 21:51:08.282678 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 5 21:51:08.282967 extend-filesystems[1412]: Resized partition /dev/vda9 Aug 5 21:51:08.297851 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 5 21:51:08.297880 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1366) Aug 5 21:51:08.297938 extend-filesystems[1435]: resize2fs 1.47.0 (5-Feb-2023) Aug 5 21:51:08.299959 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 5 21:51:08.299984 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 5 21:51:08.303011 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 5 21:51:08.303033 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 5 21:51:08.308691 systemd-logind[1419]: Watching system buttons on /dev/input/event0 (Power Button) Aug 5 21:51:08.309330 systemd-logind[1419]: New seat seat0. Aug 5 21:51:08.314226 systemd[1]: Started systemd-logind.service - User Login Management. Aug 5 21:51:08.316312 tar[1434]: linux-arm64/helm Aug 5 21:51:08.318396 (ntainerd)[1437]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 5 21:51:08.326696 jq[1436]: true Aug 5 21:51:08.331049 update_engine[1423]: I0805 21:51:08.329903 1423 main.cc:92] Flatcar Update Engine starting Aug 5 21:51:08.384437 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 5 21:51:08.338730 systemd[1]: Started update-engine.service - Update Engine. Aug 5 21:51:08.384576 update_engine[1423]: I0805 21:51:08.333057 1423 update_check_scheduler.cc:74] Next update check in 3m59s Aug 5 21:51:08.341923 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 5 21:51:08.385312 extend-filesystems[1435]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 5 21:51:08.385312 extend-filesystems[1435]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 5 21:51:08.385312 extend-filesystems[1435]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 5 21:51:08.394874 extend-filesystems[1412]: Resized filesystem in /dev/vda9 Aug 5 21:51:08.397604 bash[1464]: Updated "/home/core/.ssh/authorized_keys" Aug 5 21:51:08.386149 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 5 21:51:08.387804 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 5 21:51:08.392698 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 5 21:51:08.394710 locksmithd[1450]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 5 21:51:08.397315 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
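The on-line resize recorded above grows /dev/vda9 from 553472 to 1864699 blocks of 4 KiB each. Restating those figures in bytes (plain Python, nothing assumed beyond the numbers in the log):

BLOCK = 4096                               # "... is now 1864699 (4k) blocks long"
old_blocks, new_blocks = 553472, 1864699   # from the EXT4-fs resize message

old_bytes, new_bytes = old_blocks * BLOCK, new_blocks * BLOCK
print(f"{old_bytes / 2**30:.2f} GiB -> {new_bytes / 2**30:.2f} GiB "
      f"(+{(new_bytes - old_bytes) / 2**30:.2f} GiB)")
# 2.11 GiB -> 7.11 GiB (+5.00 GiB)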
Aug 5 21:51:08.535163 containerd[1437]: time="2024-08-05T21:51:08.535073280Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Aug 5 21:51:08.560364 containerd[1437]: time="2024-08-05T21:51:08.560249560Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 5 21:51:08.560868 containerd[1437]: time="2024-08-05T21:51:08.560602920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 5 21:51:08.562009 containerd[1437]: time="2024-08-05T21:51:08.561967680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 5 21:51:08.562280 containerd[1437]: time="2024-08-05T21:51:08.562141200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 5 21:51:08.563080 containerd[1437]: time="2024-08-05T21:51:08.562572520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 21:51:08.563080 containerd[1437]: time="2024-08-05T21:51:08.562597440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 5 21:51:08.563080 containerd[1437]: time="2024-08-05T21:51:08.562693280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 5 21:51:08.563080 containerd[1437]: time="2024-08-05T21:51:08.562757640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 21:51:08.563080 containerd[1437]: time="2024-08-05T21:51:08.562770640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 5 21:51:08.563080 containerd[1437]: time="2024-08-05T21:51:08.562832640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 5 21:51:08.563080 containerd[1437]: time="2024-08-05T21:51:08.563023280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 5 21:51:08.563080 containerd[1437]: time="2024-08-05T21:51:08.563039880Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 5 21:51:08.563080 containerd[1437]: time="2024-08-05T21:51:08.563049400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 5 21:51:08.563588 containerd[1437]: time="2024-08-05T21:51:08.563505840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 21:51:08.563678 containerd[1437]: time="2024-08-05T21:51:08.563662880Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Aug 5 21:51:08.563929 containerd[1437]: time="2024-08-05T21:51:08.563856880Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 5 21:51:08.564000 containerd[1437]: time="2024-08-05T21:51:08.563986320Z" level=info msg="metadata content store policy set" policy=shared Aug 5 21:51:08.568348 containerd[1437]: time="2024-08-05T21:51:08.568321440Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 5 21:51:08.568487 containerd[1437]: time="2024-08-05T21:51:08.568430160Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 5 21:51:08.568487 containerd[1437]: time="2024-08-05T21:51:08.568447920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 5 21:51:08.568712 containerd[1437]: time="2024-08-05T21:51:08.568620600Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 5 21:51:08.569509 containerd[1437]: time="2024-08-05T21:51:08.568655480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 5 21:51:08.569509 containerd[1437]: time="2024-08-05T21:51:08.568843520Z" level=info msg="NRI interface is disabled by configuration." Aug 5 21:51:08.569509 containerd[1437]: time="2024-08-05T21:51:08.568863040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 5 21:51:08.569509 containerd[1437]: time="2024-08-05T21:51:08.568990720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 5 21:51:08.569509 containerd[1437]: time="2024-08-05T21:51:08.569007160Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 5 21:51:08.569509 containerd[1437]: time="2024-08-05T21:51:08.569020760Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 5 21:51:08.569509 containerd[1437]: time="2024-08-05T21:51:08.569033320Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 5 21:51:08.569509 containerd[1437]: time="2024-08-05T21:51:08.569047040Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 5 21:51:08.569509 containerd[1437]: time="2024-08-05T21:51:08.569136200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 5 21:51:08.569509 containerd[1437]: time="2024-08-05T21:51:08.569148440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 5 21:51:08.569509 containerd[1437]: time="2024-08-05T21:51:08.569160920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 5 21:51:08.569509 containerd[1437]: time="2024-08-05T21:51:08.569175080Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 5 21:51:08.569509 containerd[1437]: time="2024-08-05T21:51:08.569188040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Aug 5 21:51:08.569509 containerd[1437]: time="2024-08-05T21:51:08.569200120Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 5 21:51:08.569812 containerd[1437]: time="2024-08-05T21:51:08.569211920Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 5 21:51:08.569812 containerd[1437]: time="2024-08-05T21:51:08.569325600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 5 21:51:08.569812 containerd[1437]: time="2024-08-05T21:51:08.569637160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 5 21:51:08.569812 containerd[1437]: time="2024-08-05T21:51:08.569679400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 5 21:51:08.569812 containerd[1437]: time="2024-08-05T21:51:08.569693640Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 5 21:51:08.569812 containerd[1437]: time="2024-08-05T21:51:08.569716720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 5 21:51:08.569935 containerd[1437]: time="2024-08-05T21:51:08.569849240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 5 21:51:08.569935 containerd[1437]: time="2024-08-05T21:51:08.569865560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 5 21:51:08.569935 containerd[1437]: time="2024-08-05T21:51:08.569878320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 5 21:51:08.569935 containerd[1437]: time="2024-08-05T21:51:08.569890440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 5 21:51:08.569935 containerd[1437]: time="2024-08-05T21:51:08.569903880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 5 21:51:08.569935 containerd[1437]: time="2024-08-05T21:51:08.569916480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 5 21:51:08.569935 containerd[1437]: time="2024-08-05T21:51:08.569928680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 5 21:51:08.570054 containerd[1437]: time="2024-08-05T21:51:08.569940000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 5 21:51:08.570054 containerd[1437]: time="2024-08-05T21:51:08.569953600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 5 21:51:08.570215 containerd[1437]: time="2024-08-05T21:51:08.570097400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 5 21:51:08.570215 containerd[1437]: time="2024-08-05T21:51:08.570120240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 5 21:51:08.570215 containerd[1437]: time="2024-08-05T21:51:08.570133000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Aug 5 21:51:08.570215 containerd[1437]: time="2024-08-05T21:51:08.570145960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 5 21:51:08.570215 containerd[1437]: time="2024-08-05T21:51:08.570158280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 5 21:51:08.570215 containerd[1437]: time="2024-08-05T21:51:08.570174000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 5 21:51:08.570215 containerd[1437]: time="2024-08-05T21:51:08.570185480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 5 21:51:08.570215 containerd[1437]: time="2024-08-05T21:51:08.570196040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 5 21:51:08.570631 containerd[1437]: time="2024-08-05T21:51:08.570565200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 5 21:51:08.570631 containerd[1437]: time="2024-08-05T21:51:08.570625520Z" 
level=info msg="Connect containerd service" Aug 5 21:51:08.570862 containerd[1437]: time="2024-08-05T21:51:08.570651400Z" level=info msg="using legacy CRI server" Aug 5 21:51:08.570862 containerd[1437]: time="2024-08-05T21:51:08.570658120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 5 21:51:08.570862 containerd[1437]: time="2024-08-05T21:51:08.570812880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 5 21:51:08.571441 containerd[1437]: time="2024-08-05T21:51:08.571393080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 21:51:08.571491 containerd[1437]: time="2024-08-05T21:51:08.571449640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 5 21:51:08.571491 containerd[1437]: time="2024-08-05T21:51:08.571468760Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 5 21:51:08.571491 containerd[1437]: time="2024-08-05T21:51:08.571479760Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 5 21:51:08.571561 containerd[1437]: time="2024-08-05T21:51:08.571491720Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 5 21:51:08.571902 containerd[1437]: time="2024-08-05T21:51:08.571618000Z" level=info msg="Start subscribing containerd event" Aug 5 21:51:08.571902 containerd[1437]: time="2024-08-05T21:51:08.571798840Z" level=info msg="Start recovering state" Aug 5 21:51:08.571902 containerd[1437]: time="2024-08-05T21:51:08.571863440Z" level=info msg="Start event monitor" Aug 5 21:51:08.571902 containerd[1437]: time="2024-08-05T21:51:08.571874480Z" level=info msg="Start snapshots syncer" Aug 5 21:51:08.573377 containerd[1437]: time="2024-08-05T21:51:08.571884120Z" level=info msg="Start cni network conf syncer for default" Aug 5 21:51:08.573377 containerd[1437]: time="2024-08-05T21:51:08.572137560Z" level=info msg="Start streaming server" Aug 5 21:51:08.573377 containerd[1437]: time="2024-08-05T21:51:08.572008480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 5 21:51:08.573377 containerd[1437]: time="2024-08-05T21:51:08.572218720Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 5 21:51:08.573377 containerd[1437]: time="2024-08-05T21:51:08.572276840Z" level=info msg="containerd successfully booted in 0.039768s" Aug 5 21:51:08.572375 systemd[1]: Started containerd.service - containerd container runtime. Aug 5 21:51:08.692037 tar[1434]: linux-arm64/LICENSE Aug 5 21:51:08.692037 tar[1434]: linux-arm64/README.md Aug 5 21:51:08.695674 sshd_keygen[1426]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 5 21:51:08.702866 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 5 21:51:08.715081 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 5 21:51:08.727070 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 5 21:51:08.732612 systemd[1]: issuegen.service: Deactivated successfully. 
Aug 5 21:51:08.732819 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 5 21:51:08.735806 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 5 21:51:08.747013 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 5 21:51:08.760302 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 5 21:51:08.762435 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Aug 5 21:51:08.763639 systemd[1]: Reached target getty.target - Login Prompts. Aug 5 21:51:10.088931 systemd-networkd[1370]: eth0: Gained IPv6LL Aug 5 21:51:10.091275 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 5 21:51:10.092970 systemd[1]: Reached target network-online.target - Network is Online. Aug 5 21:51:10.108000 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 5 21:51:10.110436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:51:10.112432 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 5 21:51:10.126984 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 5 21:51:10.127217 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 5 21:51:10.129348 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 5 21:51:10.140096 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 5 21:51:10.613832 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:51:10.615381 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 5 21:51:10.616479 systemd[1]: Startup finished in 575ms (kernel) + 4.828s (initrd) + 4.115s (userspace) = 9.518s. Aug 5 21:51:10.620361 (kubelet)[1524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 21:51:11.119677 kubelet[1524]: E0805 21:51:11.119587 1524 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 21:51:11.122410 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 21:51:11.122572 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 21:51:14.930442 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 5 21:51:14.931526 systemd[1]: Started sshd@0-10.0.0.97:22-10.0.0.1:55542.service - OpenSSH per-connection server daemon (10.0.0.1:55542). Aug 5 21:51:14.998100 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 55542 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:51:15.000111 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:51:15.013930 systemd-logind[1419]: New session 1 of user core. Aug 5 21:51:15.014960 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 5 21:51:15.024041 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 5 21:51:15.034264 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 5 21:51:15.036570 systemd[1]: Starting user@500.service - User Manager for UID 500... 
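
The first kubelet start above exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; that file is typically written later (for example by kubeadm during node bootstrap). A minimal stdlib sketch of the same check, nothing beyond what the error message states:

```go
// Sketch: reproduce the precondition behind the kubelet failure above.
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.Stat(path); errors.Is(err, fs.ErrNotExist) {
		fmt.Printf("%s is missing; kubelet.service will keep exiting with status 1\n", path)
		return
	} else if err != nil {
		fmt.Println("stat failed:", err)
		return
	}
	fmt.Println("kubelet config present:", path)
}
```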
Aug 5 21:51:15.046558 (systemd)[1541]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:51:15.134301 systemd[1541]: Queued start job for default target default.target. Aug 5 21:51:15.144060 systemd[1541]: Created slice app.slice - User Application Slice. Aug 5 21:51:15.144223 systemd[1541]: Reached target paths.target - Paths. Aug 5 21:51:15.144289 systemd[1541]: Reached target timers.target - Timers. Aug 5 21:51:15.145603 systemd[1541]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 5 21:51:15.156318 systemd[1541]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 5 21:51:15.156446 systemd[1541]: Reached target sockets.target - Sockets. Aug 5 21:51:15.156460 systemd[1541]: Reached target basic.target - Basic System. Aug 5 21:51:15.156499 systemd[1541]: Reached target default.target - Main User Target. Aug 5 21:51:15.156524 systemd[1541]: Startup finished in 104ms. Aug 5 21:51:15.156810 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 5 21:51:15.158170 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 5 21:51:15.214132 systemd[1]: Started sshd@1-10.0.0.97:22-10.0.0.1:55548.service - OpenSSH per-connection server daemon (10.0.0.1:55548). Aug 5 21:51:15.266517 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 55548 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:51:15.267903 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:51:15.271951 systemd-logind[1419]: New session 2 of user core. Aug 5 21:51:15.282934 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 5 21:51:15.337656 sshd[1552]: pam_unix(sshd:session): session closed for user core Aug 5 21:51:15.351526 systemd[1]: sshd@1-10.0.0.97:22-10.0.0.1:55548.service: Deactivated successfully. Aug 5 21:51:15.353460 systemd[1]: session-2.scope: Deactivated successfully. Aug 5 21:51:15.355435 systemd-logind[1419]: Session 2 logged out. Waiting for processes to exit. Aug 5 21:51:15.374617 systemd[1]: Started sshd@2-10.0.0.97:22-10.0.0.1:55550.service - OpenSSH per-connection server daemon (10.0.0.1:55550). Aug 5 21:51:15.375652 systemd-logind[1419]: Removed session 2. Aug 5 21:51:15.405397 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 55550 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:51:15.406763 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:51:15.410510 systemd-logind[1419]: New session 3 of user core. Aug 5 21:51:15.423968 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 5 21:51:15.472931 sshd[1559]: pam_unix(sshd:session): session closed for user core Aug 5 21:51:15.485217 systemd[1]: sshd@2-10.0.0.97:22-10.0.0.1:55550.service: Deactivated successfully. Aug 5 21:51:15.487971 systemd[1]: session-3.scope: Deactivated successfully. Aug 5 21:51:15.489254 systemd-logind[1419]: Session 3 logged out. Waiting for processes to exit. Aug 5 21:51:15.491040 systemd[1]: Started sshd@3-10.0.0.97:22-10.0.0.1:55562.service - OpenSSH per-connection server daemon (10.0.0.1:55562). Aug 5 21:51:15.492203 systemd-logind[1419]: Removed session 3. Aug 5 21:51:15.530351 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 55562 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:51:15.531723 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:51:15.535507 systemd-logind[1419]: New session 4 of user core. 
Aug 5 21:51:15.544945 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 5 21:51:15.599080 sshd[1566]: pam_unix(sshd:session): session closed for user core Aug 5 21:51:15.612283 systemd[1]: sshd@3-10.0.0.97:22-10.0.0.1:55562.service: Deactivated successfully. Aug 5 21:51:15.613683 systemd[1]: session-4.scope: Deactivated successfully. Aug 5 21:51:15.615819 systemd-logind[1419]: Session 4 logged out. Waiting for processes to exit. Aug 5 21:51:15.621119 systemd[1]: Started sshd@4-10.0.0.97:22-10.0.0.1:55578.service - OpenSSH per-connection server daemon (10.0.0.1:55578). Aug 5 21:51:15.621962 systemd-logind[1419]: Removed session 4. Aug 5 21:51:15.659945 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 55578 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:51:15.660312 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:51:15.664284 systemd-logind[1419]: New session 5 of user core. Aug 5 21:51:15.674906 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 5 21:51:15.737342 sudo[1576]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 5 21:51:15.737583 sudo[1576]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 21:51:15.751528 sudo[1576]: pam_unix(sudo:session): session closed for user root Aug 5 21:51:15.753266 sshd[1573]: pam_unix(sshd:session): session closed for user core Aug 5 21:51:15.771233 systemd[1]: sshd@4-10.0.0.97:22-10.0.0.1:55578.service: Deactivated successfully. Aug 5 21:51:15.772589 systemd[1]: session-5.scope: Deactivated successfully. Aug 5 21:51:15.775549 systemd-logind[1419]: Session 5 logged out. Waiting for processes to exit. Aug 5 21:51:15.792073 systemd[1]: Started sshd@5-10.0.0.97:22-10.0.0.1:55588.service - OpenSSH per-connection server daemon (10.0.0.1:55588). Aug 5 21:51:15.796624 systemd-logind[1419]: Removed session 5. Aug 5 21:51:15.828252 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 55588 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:51:15.829569 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:51:15.833583 systemd-logind[1419]: New session 6 of user core. Aug 5 21:51:15.848900 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 5 21:51:15.899667 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 5 21:51:15.899937 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 21:51:15.906119 sudo[1585]: pam_unix(sudo:session): session closed for user root Aug 5 21:51:15.910420 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 5 21:51:15.910653 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 21:51:15.930442 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 5 21:51:15.931543 auditctl[1588]: No rules Aug 5 21:51:15.931897 systemd[1]: audit-rules.service: Deactivated successfully. Aug 5 21:51:15.932062 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 5 21:51:15.934124 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 21:51:15.962402 augenrules[1606]: No rules Aug 5 21:51:15.963680 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
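
After the audit-rules restart above, both auditctl and augenrules report "No rules". A small sketch (requires root, assumes auditctl is on PATH) that inspects the resulting kernel audit rule set:

```go
// Sketch: list the loaded audit rules after the restart seen in the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("auditctl", "-l").CombinedOutput()
	if err != nil {
		fmt.Println("auditctl failed:", err)
	}
	fmt.Print(string(out)) // prints "No rules" when the set is empty
}
```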
Aug 5 21:51:15.965043 sudo[1584]: pam_unix(sudo:session): session closed for user root Aug 5 21:51:15.966497 sshd[1581]: pam_unix(sshd:session): session closed for user core Aug 5 21:51:15.978959 systemd[1]: sshd@5-10.0.0.97:22-10.0.0.1:55588.service: Deactivated successfully. Aug 5 21:51:15.980246 systemd[1]: session-6.scope: Deactivated successfully. Aug 5 21:51:15.984175 systemd-logind[1419]: Session 6 logged out. Waiting for processes to exit. Aug 5 21:51:15.985195 systemd[1]: Started sshd@6-10.0.0.97:22-10.0.0.1:55594.service - OpenSSH per-connection server daemon (10.0.0.1:55594). Aug 5 21:51:15.986102 systemd-logind[1419]: Removed session 6. Aug 5 21:51:16.020977 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 55594 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:51:16.022186 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:51:16.026138 systemd-logind[1419]: New session 7 of user core. Aug 5 21:51:16.034897 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 5 21:51:16.086366 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 5 21:51:16.087009 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 21:51:16.185991 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 5 21:51:16.186067 (dockerd)[1628]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 5 21:51:16.430042 dockerd[1628]: time="2024-08-05T21:51:16.429909109Z" level=info msg="Starting up" Aug 5 21:51:16.517868 dockerd[1628]: time="2024-08-05T21:51:16.517831472Z" level=info msg="Loading containers: start." Aug 5 21:51:16.605767 kernel: Initializing XFRM netlink socket Aug 5 21:51:16.665448 systemd-networkd[1370]: docker0: Link UP Aug 5 21:51:16.680216 dockerd[1628]: time="2024-08-05T21:51:16.680096344Z" level=info msg="Loading containers: done." Aug 5 21:51:16.732996 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1482538255-merged.mount: Deactivated successfully. Aug 5 21:51:16.733982 dockerd[1628]: time="2024-08-05T21:51:16.733907494Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 5 21:51:16.734142 dockerd[1628]: time="2024-08-05T21:51:16.734111908Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Aug 5 21:51:16.734242 dockerd[1628]: time="2024-08-05T21:51:16.734224878Z" level=info msg="Daemon has completed initialization" Aug 5 21:51:16.761954 dockerd[1628]: time="2024-08-05T21:51:16.761889315Z" level=info msg="API listen on /run/docker.sock" Aug 5 21:51:16.762806 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 5 21:51:17.342082 containerd[1437]: time="2024-08-05T21:51:17.342023733Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\"" Aug 5 21:51:17.962882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2361293973.mount: Deactivated successfully. 
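
Once dockerd logs "API listen on /run/docker.sock", the daemon can be probed over that unix socket. A stdlib-only sketch (assuming the Engine API's GET /_ping endpoint, which returns "OK"):

```go
// Sketch: ping the Docker daemon started above through its unix socket.
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	httpc := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
			},
		},
	}
	// The host part is ignored for unix sockets; "localhost" is a placeholder.
	resp, err := httpc.Get("http://localhost/_ping")
	if err != nil {
		fmt.Println("docker daemon not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET /_ping -> %s %q\n", resp.Status, string(body)) // expect 200 "OK"
}
```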
Aug 5 21:51:20.374132 containerd[1437]: time="2024-08-05T21:51:20.374079762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:51:20.374765 containerd[1437]: time="2024-08-05T21:51:20.374650461Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.12: active requests=0, bytes read=31601518" Aug 5 21:51:20.376778 containerd[1437]: time="2024-08-05T21:51:20.375524475Z" level=info msg="ImageCreate event name:\"sha256:57305d93b5cb5db7c2dd71c2936b30c6c300a568c571d915f30e2677e4472260\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:51:20.379215 containerd[1437]: time="2024-08-05T21:51:20.379177491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:51:20.380214 containerd[1437]: time="2024-08-05T21:51:20.380185257Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.12\" with image id \"sha256:57305d93b5cb5db7c2dd71c2936b30c6c300a568c571d915f30e2677e4472260\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\", size \"31598316\" in 3.038116554s" Aug 5 21:51:20.380249 containerd[1437]: time="2024-08-05T21:51:20.380224218Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\" returns image reference \"sha256:57305d93b5cb5db7c2dd71c2936b30c6c300a568c571d915f30e2677e4472260\"" Aug 5 21:51:20.398395 containerd[1437]: time="2024-08-05T21:51:20.398360052Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\"" Aug 5 21:51:21.372855 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 5 21:51:21.381940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:51:21.469672 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:51:21.473918 (kubelet)[1839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 21:51:21.516221 kubelet[1839]: E0805 21:51:21.516161 1839 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 21:51:21.520386 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 21:51:21.520538 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
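
The PullImage/ImageCreate entries above are driven through containerd's CRI plugin, but the same pull can be reproduced directly against the socket. A hypothetical client sketch, assuming the github.com/containerd/containerd v1.x Go client and the "k8s.io" namespace the kubelet uses:

```go
// Sketch: pull one of the images listed above via the containerd Go client.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.28.12", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, _ := img.Size(ctx)
	fmt.Printf("pulled %s (%s, %d bytes)\n", img.Name(), img.Target().Digest, size)
}
```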
Aug 5 21:51:22.751394 containerd[1437]: time="2024-08-05T21:51:22.751336707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:51:22.751936 containerd[1437]: time="2024-08-05T21:51:22.751895482Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.12: active requests=0, bytes read=29018272" Aug 5 21:51:22.752754 containerd[1437]: time="2024-08-05T21:51:22.752691105Z" level=info msg="ImageCreate event name:\"sha256:fc5c912cb9569e3e61d6507db0c88360a3e23d7e0cfc589aefe633e02aed582a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:51:22.756153 containerd[1437]: time="2024-08-05T21:51:22.756092766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:51:22.756851 containerd[1437]: time="2024-08-05T21:51:22.756815320Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.12\" with image id \"sha256:fc5c912cb9569e3e61d6507db0c88360a3e23d7e0cfc589aefe633e02aed582a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\", size \"30505537\" in 2.358420607s" Aug 5 21:51:22.756917 containerd[1437]: time="2024-08-05T21:51:22.756854069Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\" returns image reference \"sha256:fc5c912cb9569e3e61d6507db0c88360a3e23d7e0cfc589aefe633e02aed582a\"" Aug 5 21:51:22.777007 containerd[1437]: time="2024-08-05T21:51:22.776939830Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\"" Aug 5 21:51:23.972287 containerd[1437]: time="2024-08-05T21:51:23.971808810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:51:23.972287 containerd[1437]: time="2024-08-05T21:51:23.972233703Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.12: active requests=0, bytes read=15534522" Aug 5 21:51:23.973215 containerd[1437]: time="2024-08-05T21:51:23.973154024Z" level=info msg="ImageCreate event name:\"sha256:662db3bc8add7dd68943303fde6906c5c4b372a71ed52107b4272181f3041869\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:51:23.976034 containerd[1437]: time="2024-08-05T21:51:23.975993547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:51:23.978275 containerd[1437]: time="2024-08-05T21:51:23.977635395Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.12\" with image id \"sha256:662db3bc8add7dd68943303fde6906c5c4b372a71ed52107b4272181f3041869\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\", size \"17021805\" in 1.200652862s" Aug 5 21:51:23.978275 containerd[1437]: time="2024-08-05T21:51:23.977677589Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\" returns image reference \"sha256:662db3bc8add7dd68943303fde6906c5c4b372a71ed52107b4272181f3041869\"" Aug 5 21:51:23.996978 
containerd[1437]: time="2024-08-05T21:51:23.996930122Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\"" Aug 5 21:51:24.945490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount594213194.mount: Deactivated successfully. Aug 5 21:51:26.321968 containerd[1437]: time="2024-08-05T21:51:26.321918066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:51:26.322852 containerd[1437]: time="2024-08-05T21:51:26.322624101Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.12: active requests=0, bytes read=24977921" Aug 5 21:51:26.323766 containerd[1437]: time="2024-08-05T21:51:26.323709138Z" level=info msg="ImageCreate event name:\"sha256:d3c27a9ad523d0e17d8e5f3f587a49f9c4b611f30f1851fe0bc1240e53a2084b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:51:26.325759 containerd[1437]: time="2024-08-05T21:51:26.325693066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:51:26.326445 containerd[1437]: time="2024-08-05T21:51:26.326408209Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.12\" with image id \"sha256:d3c27a9ad523d0e17d8e5f3f587a49f9c4b611f30f1851fe0bc1240e53a2084b\", repo tag \"registry.k8s.io/kube-proxy:v1.28.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\", size \"24976938\" in 2.329432573s" Aug 5 21:51:26.326445 containerd[1437]: time="2024-08-05T21:51:26.326444959Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\" returns image reference \"sha256:d3c27a9ad523d0e17d8e5f3f587a49f9c4b611f30f1851fe0bc1240e53a2084b\"" Aug 5 21:51:26.345787 containerd[1437]: time="2024-08-05T21:51:26.345746218Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Aug 5 21:51:26.747982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4285598863.mount: Deactivated successfully. 
Aug 5 21:51:26.752518 containerd[1437]: time="2024-08-05T21:51:26.752478971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:51:26.753020 containerd[1437]: time="2024-08-05T21:51:26.752985359Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Aug 5 21:51:26.754026 containerd[1437]: time="2024-08-05T21:51:26.753797768Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:51:26.756154 containerd[1437]: time="2024-08-05T21:51:26.756116000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:51:26.757331 containerd[1437]: time="2024-08-05T21:51:26.757303057Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 411.516495ms" Aug 5 21:51:26.757527 containerd[1437]: time="2024-08-05T21:51:26.757435436Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Aug 5 21:51:26.775762 containerd[1437]: time="2024-08-05T21:51:26.775686770Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Aug 5 21:51:27.259099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1826067639.mount: Deactivated successfully. 
Aug 5 21:51:29.928604 containerd[1437]: time="2024-08-05T21:51:29.928555204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:51:29.929674 containerd[1437]: time="2024-08-05T21:51:29.929608600Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Aug 5 21:51:29.930241 containerd[1437]: time="2024-08-05T21:51:29.930205054Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:51:29.933212 containerd[1437]: time="2024-08-05T21:51:29.933150518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:51:29.934472 containerd[1437]: time="2024-08-05T21:51:29.934353497Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.158632489s" Aug 5 21:51:29.934472 containerd[1437]: time="2024-08-05T21:51:29.934387426Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Aug 5 21:51:29.953352 containerd[1437]: time="2024-08-05T21:51:29.953316022Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Aug 5 21:51:30.530715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount565539313.mount: Deactivated successfully. 
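
By this point the control-plane images (apiserver, controller-manager, scheduler, proxy, pause, etcd, and shortly coredns) have been pulled. A sketch, under the same client assumptions as above, that lists containerd's image store to verify them:

```go
// Sketch: enumerate images in containerd's "k8s.io" namespace.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	imgs, err := client.ImageService().List(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, im := range imgs {
		fmt.Printf("%s\t%s\n", im.Name, im.Target.Digest)
	}
}
```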
Aug 5 21:51:30.895016 containerd[1437]: time="2024-08-05T21:51:30.894902040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:51:30.898823 containerd[1437]: time="2024-08-05T21:51:30.898782733Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558464" Aug 5 21:51:30.900444 containerd[1437]: time="2024-08-05T21:51:30.900394202Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:51:30.902574 containerd[1437]: time="2024-08-05T21:51:30.902540563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:51:30.903541 containerd[1437]: time="2024-08-05T21:51:30.903493001Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 949.98899ms" Aug 5 21:51:30.903541 containerd[1437]: time="2024-08-05T21:51:30.903531010Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Aug 5 21:51:31.566817 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 5 21:51:31.575955 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:51:31.666446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:51:31.670595 (kubelet)[2033]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 21:51:31.713414 kubelet[2033]: E0805 21:51:31.713316 2033 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 21:51:31.716390 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 21:51:31.716513 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 21:51:34.818081 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:51:34.833020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:51:34.847511 systemd[1]: Reloading requested from client PID 2050 ('systemctl') (unit session-7.scope)... Aug 5 21:51:34.847528 systemd[1]: Reloading... Aug 5 21:51:34.916777 zram_generator::config[2092]: No configuration found. Aug 5 21:51:35.012083 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 21:51:35.068619 systemd[1]: Reloading finished in 220 ms. 
Aug 5 21:51:35.109476 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 5 21:51:35.109546 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 5 21:51:35.109816 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:51:35.112498 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:51:35.208521 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:51:35.212942 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 21:51:35.255615 kubelet[2133]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 21:51:35.255615 kubelet[2133]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 21:51:35.255615 kubelet[2133]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 21:51:35.255989 kubelet[2133]: I0805 21:51:35.255648 2133 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 21:51:36.555889 kubelet[2133]: I0805 21:51:36.555847 2133 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Aug 5 21:51:36.555889 kubelet[2133]: I0805 21:51:36.555879 2133 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 21:51:36.556219 kubelet[2133]: I0805 21:51:36.556103 2133 server.go:895] "Client rotation is on, will bootstrap in background" Aug 5 21:51:36.612103 kubelet[2133]: I0805 21:51:36.612069 2133 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 21:51:36.618685 kubelet[2133]: E0805 21:51:36.618616 2133 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.97:6443: connect: connection refused Aug 5 21:51:36.626663 kubelet[2133]: W0805 21:51:36.626621 2133 machine.go:65] Cannot read vendor id correctly, set empty. Aug 5 21:51:36.627436 kubelet[2133]: I0805 21:51:36.627404 2133 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 5 21:51:36.627643 kubelet[2133]: I0805 21:51:36.627619 2133 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 21:51:36.627851 kubelet[2133]: I0805 21:51:36.627834 2133 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 21:51:36.627936 kubelet[2133]: I0805 21:51:36.627864 2133 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 21:51:36.627936 kubelet[2133]: I0805 21:51:36.627875 2133 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 21:51:36.628118 kubelet[2133]: I0805 21:51:36.628101 2133 state_mem.go:36] "Initialized new in-memory state store" Aug 5 21:51:36.631034 kubelet[2133]: I0805 21:51:36.631011 2133 kubelet.go:393] "Attempting to sync node with API server" Aug 5 21:51:36.631066 kubelet[2133]: I0805 21:51:36.631040 2133 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 21:51:36.631292 kubelet[2133]: I0805 21:51:36.631132 2133 kubelet.go:309] "Adding apiserver pod source" Aug 5 21:51:36.631292 kubelet[2133]: I0805 21:51:36.631146 2133 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 21:51:36.632264 kubelet[2133]: W0805 21:51:36.632060 2133 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Aug 5 21:51:36.632264 kubelet[2133]: E0805 21:51:36.632120 2133 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Aug 5 21:51:36.632616 kubelet[2133]: I0805 21:51:36.632594 2133 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 5 21:51:36.633870 kubelet[2133]: W0805 21:51:36.633816 2133 reflector.go:535] 
vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Aug 5 21:51:36.633941 kubelet[2133]: E0805 21:51:36.633876 2133 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Aug 5 21:51:36.634637 kubelet[2133]: W0805 21:51:36.634599 2133 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 5 21:51:36.635570 kubelet[2133]: I0805 21:51:36.635270 2133 server.go:1232] "Started kubelet" Aug 5 21:51:36.636275 kubelet[2133]: I0805 21:51:36.636242 2133 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 21:51:36.639187 kubelet[2133]: I0805 21:51:36.638057 2133 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 21:51:36.639187 kubelet[2133]: I0805 21:51:36.638159 2133 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 5 21:51:36.639187 kubelet[2133]: I0805 21:51:36.638222 2133 reconciler_new.go:29] "Reconciler: start to sync state" Aug 5 21:51:36.639187 kubelet[2133]: I0805 21:51:36.638250 2133 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 21:51:36.639187 kubelet[2133]: W0805 21:51:36.638509 2133 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Aug 5 21:51:36.639187 kubelet[2133]: E0805 21:51:36.638552 2133 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Aug 5 21:51:36.639187 kubelet[2133]: I0805 21:51:36.638716 2133 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Aug 5 21:51:36.639187 kubelet[2133]: I0805 21:51:36.638929 2133 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 21:51:36.639187 kubelet[2133]: I0805 21:51:36.639066 2133 server.go:462] "Adding debug handlers to kubelet server" Aug 5 21:51:36.639607 kubelet[2133]: E0805 21:51:36.639539 2133 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="200ms" Aug 5 21:51:36.641347 kubelet[2133]: E0805 21:51:36.640555 2133 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Aug 5 21:51:36.641347 kubelet[2133]: E0805 21:51:36.640592 2133 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 21:51:36.643808 kubelet[2133]: E0805 21:51:36.643695 2133 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17e8f3992a590766", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.August, 5, 21, 51, 36, 635238246, time.Local), LastTimestamp:time.Date(2024, time.August, 5, 21, 51, 36, 635238246, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.97:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.97:6443: connect: connection refused'(may retry after sleeping) Aug 5 21:51:36.656977 kubelet[2133]: I0805 21:51:36.656947 2133 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 21:51:36.658229 kubelet[2133]: I0805 21:51:36.658209 2133 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 5 21:51:36.658335 kubelet[2133]: I0805 21:51:36.658324 2133 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 21:51:36.658422 kubelet[2133]: I0805 21:51:36.658412 2133 kubelet.go:2303] "Starting kubelet main sync loop" Aug 5 21:51:36.658535 kubelet[2133]: E0805 21:51:36.658521 2133 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 21:51:36.659079 kubelet[2133]: W0805 21:51:36.659053 2133 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Aug 5 21:51:36.659213 kubelet[2133]: E0805 21:51:36.659198 2133 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Aug 5 21:51:36.665659 kubelet[2133]: I0805 21:51:36.665626 2133 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 21:51:36.665659 kubelet[2133]: I0805 21:51:36.665645 2133 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 21:51:36.665659 kubelet[2133]: I0805 21:51:36.665663 2133 state_mem.go:36] "Initialized new in-memory state store" Aug 5 21:51:36.667830 kubelet[2133]: I0805 21:51:36.667797 2133 policy_none.go:49] "None policy: Start" Aug 5 21:51:36.668421 kubelet[2133]: I0805 21:51:36.668393 2133 memory_manager.go:169] "Starting memorymanager" policy="None" Aug 5 21:51:36.668421 kubelet[2133]: 
I0805 21:51:36.668419 2133 state_mem.go:35] "Initializing new in-memory state store" Aug 5 21:51:36.674819 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 5 21:51:36.700092 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 5 21:51:36.702634 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 5 21:51:36.712569 kubelet[2133]: I0805 21:51:36.712383 2133 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 21:51:36.712672 kubelet[2133]: I0805 21:51:36.712660 2133 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 21:51:36.713222 kubelet[2133]: E0805 21:51:36.713198 2133 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 5 21:51:36.739850 kubelet[2133]: I0805 21:51:36.739812 2133 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Aug 5 21:51:36.741981 kubelet[2133]: E0805 21:51:36.741961 2133 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Aug 5 21:51:36.759240 kubelet[2133]: I0805 21:51:36.759192 2133 topology_manager.go:215] "Topology Admit Handler" podUID="4c5fc40a88d7a6c786b49127e3a872db" podNamespace="kube-system" podName="kube-apiserver-localhost" Aug 5 21:51:36.760246 kubelet[2133]: I0805 21:51:36.760220 2133 topology_manager.go:215] "Topology Admit Handler" podUID="09d96cdeded1d5a51a9712d8a1a0b54a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Aug 5 21:51:36.760850 kubelet[2133]: I0805 21:51:36.760829 2133 topology_manager.go:215] "Topology Admit Handler" podUID="0cc03c154af91f38c5530287ae9cc549" podNamespace="kube-system" podName="kube-scheduler-localhost" Aug 5 21:51:36.766965 systemd[1]: Created slice kubepods-burstable-pod4c5fc40a88d7a6c786b49127e3a872db.slice - libcontainer container kubepods-burstable-pod4c5fc40a88d7a6c786b49127e3a872db.slice. Aug 5 21:51:36.778911 systemd[1]: Created slice kubepods-burstable-pod09d96cdeded1d5a51a9712d8a1a0b54a.slice - libcontainer container kubepods-burstable-pod09d96cdeded1d5a51a9712d8a1a0b54a.slice. Aug 5 21:51:36.783421 systemd[1]: Created slice kubepods-burstable-pod0cc03c154af91f38c5530287ae9cc549.slice - libcontainer container kubepods-burstable-pod0cc03c154af91f38c5530287ae9cc549.slice. 
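
Every reflector and lease error above is the same symptom: nothing is listening on the advertised API server endpoint yet, because the kube-apiserver static pod whose sandbox is created below has not started serving. A stdlib sketch that reproduces the identical "connection refused":

```go
// Sketch: probe the API server endpoint the kubelet is retrying against.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.0.0.97:6443", 2*time.Second)
	if err != nil {
		// Matches the log: dial tcp 10.0.0.97:6443: connect: connection refused
		fmt.Println("API server not up yet:", err)
		return
	}
	conn.Close()
	fmt.Println("API server port is accepting connections")
}
```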
Aug 5 21:51:36.840013 kubelet[2133]: I0805 21:51:36.839916 2133 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c5fc40a88d7a6c786b49127e3a872db-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c5fc40a88d7a6c786b49127e3a872db\") " pod="kube-system/kube-apiserver-localhost" Aug 5 21:51:36.840202 kubelet[2133]: I0805 21:51:36.840168 2133 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c5fc40a88d7a6c786b49127e3a872db-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4c5fc40a88d7a6c786b49127e3a872db\") " pod="kube-system/kube-apiserver-localhost" Aug 5 21:51:36.840410 kubelet[2133]: I0805 21:51:36.840298 2133 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:51:36.840410 kubelet[2133]: I0805 21:51:36.840336 2133 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:51:36.840410 kubelet[2133]: I0805 21:51:36.840358 2133 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0cc03c154af91f38c5530287ae9cc549-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0cc03c154af91f38c5530287ae9cc549\") " pod="kube-system/kube-scheduler-localhost" Aug 5 21:51:36.840410 kubelet[2133]: I0805 21:51:36.840378 2133 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c5fc40a88d7a6c786b49127e3a872db-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c5fc40a88d7a6c786b49127e3a872db\") " pod="kube-system/kube-apiserver-localhost" Aug 5 21:51:36.840410 kubelet[2133]: I0805 21:51:36.840395 2133 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:51:36.840535 kubelet[2133]: E0805 21:51:36.840089 2133 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="400ms" Aug 5 21:51:36.840652 kubelet[2133]: I0805 21:51:36.840589 2133 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:51:36.840732 kubelet[2133]: I0805 
21:51:36.840699 2133 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:51:36.943565 kubelet[2133]: I0805 21:51:36.943508 2133 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Aug 5 21:51:36.944861 kubelet[2133]: E0805 21:51:36.944824 2133 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Aug 5 21:51:37.080009 kubelet[2133]: E0805 21:51:37.079970 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:37.080900 containerd[1437]: time="2024-08-05T21:51:37.080851576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4c5fc40a88d7a6c786b49127e3a872db,Namespace:kube-system,Attempt:0,}" Aug 5 21:51:37.081930 kubelet[2133]: E0805 21:51:37.081905 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:37.082435 containerd[1437]: time="2024-08-05T21:51:37.082281567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:09d96cdeded1d5a51a9712d8a1a0b54a,Namespace:kube-system,Attempt:0,}" Aug 5 21:51:37.085930 kubelet[2133]: E0805 21:51:37.085908 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:37.086396 containerd[1437]: time="2024-08-05T21:51:37.086363444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0cc03c154af91f38c5530287ae9cc549,Namespace:kube-system,Attempt:0,}" Aug 5 21:51:37.241704 kubelet[2133]: E0805 21:51:37.241598 2133 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="800ms" Aug 5 21:51:37.346140 kubelet[2133]: I0805 21:51:37.346098 2133 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Aug 5 21:51:37.346453 kubelet[2133]: E0805 21:51:37.346432 2133 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Aug 5 21:51:37.506524 kubelet[2133]: W0805 21:51:37.506380 2133 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Aug 5 21:51:37.506524 kubelet[2133]: E0805 21:51:37.506445 2133 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Aug 5 21:51:37.553468 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3973932180.mount: Deactivated successfully. Aug 5 21:51:37.556147 containerd[1437]: time="2024-08-05T21:51:37.556107325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:51:37.557282 containerd[1437]: time="2024-08-05T21:51:37.557258643Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:51:37.558315 containerd[1437]: time="2024-08-05T21:51:37.558285960Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Aug 5 21:51:37.558783 containerd[1437]: time="2024-08-05T21:51:37.558651605Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 21:51:37.559333 kubelet[2133]: W0805 21:51:37.559225 2133 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Aug 5 21:51:37.559591 kubelet[2133]: E0805 21:51:37.559345 2133 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Aug 5 21:51:37.559797 containerd[1437]: time="2024-08-05T21:51:37.559729826Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:51:37.560596 containerd[1437]: time="2024-08-05T21:51:37.560567643Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:51:37.560981 containerd[1437]: time="2024-08-05T21:51:37.560943845Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 21:51:37.563495 containerd[1437]: time="2024-08-05T21:51:37.563448458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:51:37.565319 containerd[1437]: time="2024-08-05T21:51:37.565134128Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 484.16015ms" Aug 5 21:51:37.567851 containerd[1437]: time="2024-08-05T21:51:37.567816485Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 481.374226ms" Aug 
5 21:51:37.568471 containerd[1437]: time="2024-08-05T21:51:37.568410098Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 486.052036ms" Aug 5 21:51:37.707837 containerd[1437]: time="2024-08-05T21:51:37.707670169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:51:37.707837 containerd[1437]: time="2024-08-05T21:51:37.707733269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:51:37.707837 containerd[1437]: time="2024-08-05T21:51:37.707765619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:51:37.707837 containerd[1437]: time="2024-08-05T21:51:37.707781934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:51:37.708884 containerd[1437]: time="2024-08-05T21:51:37.708576964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:51:37.708884 containerd[1437]: time="2024-08-05T21:51:37.708629308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:51:37.708884 containerd[1437]: time="2024-08-05T21:51:37.708646342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:51:37.708884 containerd[1437]: time="2024-08-05T21:51:37.708655779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:51:37.710873 containerd[1437]: time="2024-08-05T21:51:37.710625920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:51:37.710873 containerd[1437]: time="2024-08-05T21:51:37.710675105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:51:37.710873 containerd[1437]: time="2024-08-05T21:51:37.710784151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:51:37.710873 containerd[1437]: time="2024-08-05T21:51:37.710802185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:51:37.735003 systemd[1]: Started cri-containerd-7da1b7c11694438adad3f44cea62f586310dafca6d8e564eccdbab7f91c20705.scope - libcontainer container 7da1b7c11694438adad3f44cea62f586310dafca6d8e564eccdbab7f91c20705. Aug 5 21:51:37.736450 systemd[1]: Started cri-containerd-bfeb6db6f147ef2b5fdc51e396fc503fa6d56970fb9ad6603c11dcce6c750b60.scope - libcontainer container bfeb6db6f147ef2b5fdc51e396fc503fa6d56970fb9ad6603c11dcce6c750b60. Aug 5 21:51:37.737978 systemd[1]: Started cri-containerd-c943a6a0e0fd300f55ee011fc1ee6f6bcdeb614f6e08fa139a71d117b28107ba.scope - libcontainer container c943a6a0e0fd300f55ee011fc1ee6f6bcdeb614f6e08fa139a71d117b28107ba. 
Aug 5 21:51:37.771323 containerd[1437]: time="2024-08-05T21:51:37.770658812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4c5fc40a88d7a6c786b49127e3a872db,Namespace:kube-system,Attempt:0,} returns sandbox id \"7da1b7c11694438adad3f44cea62f586310dafca6d8e564eccdbab7f91c20705\"" Aug 5 21:51:37.772386 kubelet[2133]: E0805 21:51:37.772360 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:37.773436 containerd[1437]: time="2024-08-05T21:51:37.773309619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:09d96cdeded1d5a51a9712d8a1a0b54a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c943a6a0e0fd300f55ee011fc1ee6f6bcdeb614f6e08fa139a71d117b28107ba\"" Aug 5 21:51:37.774650 kubelet[2133]: E0805 21:51:37.774531 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:37.777918 containerd[1437]: time="2024-08-05T21:51:37.776299799Z" level=info msg="CreateContainer within sandbox \"7da1b7c11694438adad3f44cea62f586310dafca6d8e564eccdbab7f91c20705\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 5 21:51:37.778274 containerd[1437]: time="2024-08-05T21:51:37.778252905Z" level=info msg="CreateContainer within sandbox \"c943a6a0e0fd300f55ee011fc1ee6f6bcdeb614f6e08fa139a71d117b28107ba\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 5 21:51:37.782773 containerd[1437]: time="2024-08-05T21:51:37.782723140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0cc03c154af91f38c5530287ae9cc549,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfeb6db6f147ef2b5fdc51e396fc503fa6d56970fb9ad6603c11dcce6c750b60\"" Aug 5 21:51:37.783397 kubelet[2133]: E0805 21:51:37.783383 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:37.785334 containerd[1437]: time="2024-08-05T21:51:37.785309727Z" level=info msg="CreateContainer within sandbox \"bfeb6db6f147ef2b5fdc51e396fc503fa6d56970fb9ad6603c11dcce6c750b60\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 5 21:51:37.791433 containerd[1437]: time="2024-08-05T21:51:37.791394855Z" level=info msg="CreateContainer within sandbox \"c943a6a0e0fd300f55ee011fc1ee6f6bcdeb614f6e08fa139a71d117b28107ba\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a3fc2d919876c501046e29681c80a4e9d214ed9c3b425508f284a1a9a80db457\"" Aug 5 21:51:37.792178 containerd[1437]: time="2024-08-05T21:51:37.792149058Z" level=info msg="StartContainer for \"a3fc2d919876c501046e29681c80a4e9d214ed9c3b425508f284a1a9a80db457\"" Aug 5 21:51:37.793418 containerd[1437]: time="2024-08-05T21:51:37.793339764Z" level=info msg="CreateContainer within sandbox \"7da1b7c11694438adad3f44cea62f586310dafca6d8e564eccdbab7f91c20705\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d8053149778553eda3de5c9916137e9c5b9b388d1634df28366eb68e3fdddb9d\"" Aug 5 21:51:37.793795 containerd[1437]: time="2024-08-05T21:51:37.793716445Z" level=info msg="StartContainer for \"d8053149778553eda3de5c9916137e9c5b9b388d1634df28366eb68e3fdddb9d\"" Aug 5 21:51:37.799947 
containerd[1437]: time="2024-08-05T21:51:37.799907699Z" level=info msg="CreateContainer within sandbox \"bfeb6db6f147ef2b5fdc51e396fc503fa6d56970fb9ad6603c11dcce6c750b60\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d1aa50f5f6c147e2fc908b54fa4ab0015478807111cf6329b2c89caa7314ce3b\"" Aug 5 21:51:37.800758 containerd[1437]: time="2024-08-05T21:51:37.800700970Z" level=info msg="StartContainer for \"d1aa50f5f6c147e2fc908b54fa4ab0015478807111cf6329b2c89caa7314ce3b\"" Aug 5 21:51:37.815904 systemd[1]: Started cri-containerd-a3fc2d919876c501046e29681c80a4e9d214ed9c3b425508f284a1a9a80db457.scope - libcontainer container a3fc2d919876c501046e29681c80a4e9d214ed9c3b425508f284a1a9a80db457. Aug 5 21:51:37.828928 systemd[1]: Started cri-containerd-d8053149778553eda3de5c9916137e9c5b9b388d1634df28366eb68e3fdddb9d.scope - libcontainer container d8053149778553eda3de5c9916137e9c5b9b388d1634df28366eb68e3fdddb9d. Aug 5 21:51:37.832726 systemd[1]: Started cri-containerd-d1aa50f5f6c147e2fc908b54fa4ab0015478807111cf6329b2c89caa7314ce3b.scope - libcontainer container d1aa50f5f6c147e2fc908b54fa4ab0015478807111cf6329b2c89caa7314ce3b. Aug 5 21:51:37.877363 containerd[1437]: time="2024-08-05T21:51:37.877301735Z" level=info msg="StartContainer for \"a3fc2d919876c501046e29681c80a4e9d214ed9c3b425508f284a1a9a80db457\" returns successfully" Aug 5 21:51:37.877487 containerd[1437]: time="2024-08-05T21:51:37.877415859Z" level=info msg="StartContainer for \"d8053149778553eda3de5c9916137e9c5b9b388d1634df28366eb68e3fdddb9d\" returns successfully" Aug 5 21:51:37.889079 containerd[1437]: time="2024-08-05T21:51:37.888777848Z" level=info msg="StartContainer for \"d1aa50f5f6c147e2fc908b54fa4ab0015478807111cf6329b2c89caa7314ce3b\" returns successfully" Aug 5 21:51:37.894729 kubelet[2133]: W0805 21:51:37.894626 2133 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Aug 5 21:51:37.894729 kubelet[2133]: E0805 21:51:37.894692 2133 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Aug 5 21:51:38.015057 kubelet[2133]: W0805 21:51:38.014946 2133 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Aug 5 21:51:38.015057 kubelet[2133]: E0805 21:51:38.015012 2133 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Aug 5 21:51:38.043534 kubelet[2133]: E0805 21:51:38.042767 2133 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="1.6s" Aug 5 21:51:38.148474 kubelet[2133]: I0805 21:51:38.148320 2133 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Aug 5 21:51:38.670663 
kubelet[2133]: E0805 21:51:38.670527 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:38.674946 kubelet[2133]: E0805 21:51:38.673951 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:38.678516 kubelet[2133]: E0805 21:51:38.678497 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:39.536850 kubelet[2133]: I0805 21:51:39.536801 2133 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Aug 5 21:51:39.548760 kubelet[2133]: E0805 21:51:39.548635 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 21:51:39.648892 kubelet[2133]: E0805 21:51:39.648828 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 21:51:39.677737 kubelet[2133]: E0805 21:51:39.677671 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:39.749724 kubelet[2133]: E0805 21:51:39.749688 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 21:51:39.850624 kubelet[2133]: E0805 21:51:39.850501 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 21:51:39.951137 kubelet[2133]: E0805 21:51:39.951098 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 21:51:40.051693 kubelet[2133]: E0805 21:51:40.051654 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 21:51:40.152504 kubelet[2133]: E0805 21:51:40.152402 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 21:51:40.253817 kubelet[2133]: E0805 21:51:40.253771 2133 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 21:51:40.592769 kubelet[2133]: E0805 21:51:40.592434 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:40.632203 kubelet[2133]: I0805 21:51:40.631950 2133 apiserver.go:52] "Watching apiserver" Aug 5 21:51:40.638733 kubelet[2133]: I0805 21:51:40.638709 2133 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 21:51:40.678586 kubelet[2133]: E0805 21:51:40.678428 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:42.323241 systemd[1]: Reloading requested from client PID 2411 ('systemctl') (unit session-7.scope)... Aug 5 21:51:42.323257 systemd[1]: Reloading... Aug 5 21:51:42.383901 zram_generator::config[2448]: No configuration found. 
Aug 5 21:51:42.546069 kubelet[2133]: E0805 21:51:42.546036 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:42.664874 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 21:51:42.681893 kubelet[2133]: E0805 21:51:42.681856 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:42.732459 systemd[1]: Reloading finished in 408 ms. Aug 5 21:51:42.774512 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:51:42.791886 systemd[1]: kubelet.service: Deactivated successfully. Aug 5 21:51:42.792178 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:51:42.792311 systemd[1]: kubelet.service: Consumed 1.755s CPU time, 118.9M memory peak, 0B memory swap peak. Aug 5 21:51:42.802463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:51:42.911861 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:51:42.915813 (kubelet)[2490]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 21:51:42.966462 kubelet[2490]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 21:51:42.966462 kubelet[2490]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 21:51:42.966462 kubelet[2490]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 21:51:42.966462 kubelet[2490]: I0805 21:51:42.965939 2490 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 21:51:42.973824 kubelet[2490]: I0805 21:51:42.973782 2490 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Aug 5 21:51:42.973824 kubelet[2490]: I0805 21:51:42.973815 2490 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 21:51:42.974097 kubelet[2490]: I0805 21:51:42.974069 2490 server.go:895] "Client rotation is on, will bootstrap in background" Aug 5 21:51:42.976074 kubelet[2490]: I0805 21:51:42.976032 2490 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 5 21:51:42.977309 kubelet[2490]: I0805 21:51:42.977278 2490 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 21:51:42.991031 kubelet[2490]: W0805 21:51:42.989484 2490 machine.go:65] Cannot read vendor id correctly, set empty. Aug 5 21:51:42.991031 kubelet[2490]: I0805 21:51:42.990206 2490 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 5 21:51:42.991031 kubelet[2490]: I0805 21:51:42.990379 2490 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 21:51:42.991031 kubelet[2490]: I0805 21:51:42.990568 2490 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 21:51:42.991031 kubelet[2490]: I0805 21:51:42.990590 2490 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 21:51:42.991031 kubelet[2490]: I0805 21:51:42.990599 2490 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 21:51:42.991294 kubelet[2490]: I0805 21:51:42.990634 2490 state_mem.go:36] "Initialized new in-memory state store" Aug 5 21:51:42.991294 kubelet[2490]: I0805 21:51:42.990705 2490 kubelet.go:393] "Attempting to sync node with API server" Aug 5 21:51:42.991294 kubelet[2490]: I0805 21:51:42.990717 2490 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 21:51:42.991770 kubelet[2490]: I0805 21:51:42.991467 2490 kubelet.go:309] "Adding apiserver pod source" Aug 5 21:51:42.991770 kubelet[2490]: I0805 21:51:42.991577 2490 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 21:51:42.995920 kubelet[2490]: I0805 21:51:42.995896 2490 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 5 21:51:42.997067 sudo[2505]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 5 21:51:42.997652 sudo[2505]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Aug 5 21:51:42.997888 kubelet[2490]: I0805 21:51:42.997792 2490 server.go:1232] "Started kubelet" Aug 5 21:51:43.000766 kubelet[2490]: E0805 21:51:43.000668 2490 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Aug 5 21:51:43.000766 kubelet[2490]: E0805 21:51:43.000705 2490 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 21:51:43.001231 kubelet[2490]: I0805 21:51:43.001218 2490 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Aug 5 21:51:43.001927 kubelet[2490]: I0805 21:51:43.001502 2490 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 21:51:43.001927 kubelet[2490]: I0805 21:51:43.001556 2490 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 21:51:43.002026 kubelet[2490]: I0805 21:51:43.001964 2490 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 21:51:43.002636 kubelet[2490]: I0805 21:51:43.002615 2490 server.go:462] "Adding debug handlers to kubelet server" Aug 5 21:51:43.009599 kubelet[2490]: I0805 21:51:43.009570 2490 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 21:51:43.013698 kubelet[2490]: I0805 21:51:43.013660 2490 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 5 21:51:43.013837 kubelet[2490]: I0805 21:51:43.013819 2490 reconciler_new.go:29] "Reconciler: start to sync state" Aug 5 21:51:43.022145 kubelet[2490]: I0805 21:51:43.022069 2490 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 21:51:43.027524 kubelet[2490]: I0805 21:51:43.027482 2490 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 5 21:51:43.027717 kubelet[2490]: I0805 21:51:43.027704 2490 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 21:51:43.027799 kubelet[2490]: I0805 21:51:43.027789 2490 kubelet.go:2303] "Starting kubelet main sync loop" Aug 5 21:51:43.027927 kubelet[2490]: E0805 21:51:43.027913 2490 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 21:51:43.068300 kubelet[2490]: I0805 21:51:43.068271 2490 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 21:51:43.068300 kubelet[2490]: I0805 21:51:43.068296 2490 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 21:51:43.068434 kubelet[2490]: I0805 21:51:43.068314 2490 state_mem.go:36] "Initialized new in-memory state store" Aug 5 21:51:43.068470 kubelet[2490]: I0805 21:51:43.068458 2490 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 5 21:51:43.068502 kubelet[2490]: I0805 21:51:43.068480 2490 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 5 21:51:43.068502 kubelet[2490]: I0805 21:51:43.068487 2490 policy_none.go:49] "None policy: Start" Aug 5 21:51:43.069100 kubelet[2490]: I0805 21:51:43.069082 2490 memory_manager.go:169] "Starting memorymanager" policy="None" Aug 5 21:51:43.069158 kubelet[2490]: I0805 21:51:43.069110 2490 state_mem.go:35] "Initializing new in-memory state store" Aug 5 21:51:43.069265 kubelet[2490]: I0805 21:51:43.069251 2490 state_mem.go:75] "Updated machine memory state" Aug 5 21:51:43.073105 kubelet[2490]: I0805 21:51:43.073073 2490 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 21:51:43.073364 kubelet[2490]: I0805 21:51:43.073293 2490 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 21:51:43.114118 kubelet[2490]: I0805 21:51:43.114086 2490 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Aug 5 21:51:43.121392 kubelet[2490]: I0805 21:51:43.121359 2490 kubelet_node_status.go:108] "Node 
was previously registered" node="localhost" Aug 5 21:51:43.121503 kubelet[2490]: I0805 21:51:43.121443 2490 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Aug 5 21:51:43.128185 kubelet[2490]: I0805 21:51:43.128137 2490 topology_manager.go:215] "Topology Admit Handler" podUID="4c5fc40a88d7a6c786b49127e3a872db" podNamespace="kube-system" podName="kube-apiserver-localhost" Aug 5 21:51:43.128889 kubelet[2490]: I0805 21:51:43.128246 2490 topology_manager.go:215] "Topology Admit Handler" podUID="09d96cdeded1d5a51a9712d8a1a0b54a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Aug 5 21:51:43.128889 kubelet[2490]: I0805 21:51:43.128295 2490 topology_manager.go:215] "Topology Admit Handler" podUID="0cc03c154af91f38c5530287ae9cc549" podNamespace="kube-system" podName="kube-scheduler-localhost" Aug 5 21:51:43.133502 kubelet[2490]: E0805 21:51:43.133470 2490 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 5 21:51:43.134259 kubelet[2490]: E0805 21:51:43.134193 2490 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 5 21:51:43.314657 kubelet[2490]: I0805 21:51:43.314555 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:51:43.314657 kubelet[2490]: I0805 21:51:43.314601 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:51:43.314657 kubelet[2490]: I0805 21:51:43.314621 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0cc03c154af91f38c5530287ae9cc549-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0cc03c154af91f38c5530287ae9cc549\") " pod="kube-system/kube-scheduler-localhost" Aug 5 21:51:43.315955 kubelet[2490]: I0805 21:51:43.315928 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c5fc40a88d7a6c786b49127e3a872db-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c5fc40a88d7a6c786b49127e3a872db\") " pod="kube-system/kube-apiserver-localhost" Aug 5 21:51:43.316025 kubelet[2490]: I0805 21:51:43.315984 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c5fc40a88d7a6c786b49127e3a872db-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4c5fc40a88d7a6c786b49127e3a872db\") " pod="kube-system/kube-apiserver-localhost" Aug 5 21:51:43.316025 kubelet[2490]: I0805 21:51:43.316009 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-flexvolume-dir\") pod 
\"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:51:43.316082 kubelet[2490]: I0805 21:51:43.316029 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:51:43.316082 kubelet[2490]: I0805 21:51:43.316057 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:51:43.316131 kubelet[2490]: I0805 21:51:43.316084 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c5fc40a88d7a6c786b49127e3a872db-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c5fc40a88d7a6c786b49127e3a872db\") " pod="kube-system/kube-apiserver-localhost" Aug 5 21:51:43.436090 kubelet[2490]: E0805 21:51:43.434803 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:43.436090 kubelet[2490]: E0805 21:51:43.434808 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:43.436090 kubelet[2490]: E0805 21:51:43.434809 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:43.464627 sudo[2505]: pam_unix(sudo:session): session closed for user root Aug 5 21:51:43.992003 kubelet[2490]: I0805 21:51:43.991724 2490 apiserver.go:52] "Watching apiserver" Aug 5 21:51:44.014437 kubelet[2490]: I0805 21:51:44.014382 2490 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 21:51:44.040893 kubelet[2490]: E0805 21:51:44.040683 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:44.040893 kubelet[2490]: E0805 21:51:44.040824 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:44.041281 kubelet[2490]: E0805 21:51:44.041213 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:44.068437 kubelet[2490]: I0805 21:51:44.068067 2490 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.067990279 podCreationTimestamp="2024-08-05 21:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:51:44.067881803 +0000 UTC m=+1.145803858" 
watchObservedRunningTime="2024-08-05 21:51:44.067990279 +0000 UTC m=+1.145912254" Aug 5 21:51:44.068437 kubelet[2490]: I0805 21:51:44.068174 2490 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.068157512 podCreationTimestamp="2024-08-05 21:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:51:44.061058246 +0000 UTC m=+1.138980221" watchObservedRunningTime="2024-08-05 21:51:44.068157512 +0000 UTC m=+1.146079487" Aug 5 21:51:44.079137 kubelet[2490]: I0805 21:51:44.078977 2490 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.078879587 podCreationTimestamp="2024-08-05 21:51:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:51:44.07881931 +0000 UTC m=+1.156741285" watchObservedRunningTime="2024-08-05 21:51:44.078879587 +0000 UTC m=+1.156801562" Aug 5 21:51:44.913511 sudo[1617]: pam_unix(sudo:session): session closed for user root Aug 5 21:51:44.915604 sshd[1614]: pam_unix(sshd:session): session closed for user core Aug 5 21:51:44.919139 systemd[1]: sshd@6-10.0.0.97:22-10.0.0.1:55594.service: Deactivated successfully. Aug 5 21:51:44.921122 systemd[1]: session-7.scope: Deactivated successfully. Aug 5 21:51:44.922068 systemd[1]: session-7.scope: Consumed 6.087s CPU time, 136.4M memory peak, 0B memory swap peak. Aug 5 21:51:44.923687 systemd-logind[1419]: Session 7 logged out. Waiting for processes to exit. Aug 5 21:51:44.925060 systemd-logind[1419]: Removed session 7. Aug 5 21:51:45.040941 kubelet[2490]: E0805 21:51:45.040915 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:46.530671 kubelet[2490]: E0805 21:51:46.530623 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:47.068987 kubelet[2490]: E0805 21:51:47.068956 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:48.044560 kubelet[2490]: E0805 21:51:48.044524 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:52.051243 kubelet[2490]: E0805 21:51:52.050930 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:53.052799 kubelet[2490]: E0805 21:51:53.052721 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:53.107368 update_engine[1423]: I0805 21:51:53.106775 1423 update_attempter.cc:509] Updating boot flags... 
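The dns.go:153 entries repeated throughout this log mean the node's resolv.conf lists more than three nameservers; the stub resolver only honors the first three, so kubelet truncates the list and reports which servers it kept (here 1.1.1.1, 1.0.0.1 and 8.8.8.8). The sketch below is illustrative only: it is not kubelet's actual dns.go code, and the /etc/resolv.conf path and the limit of three are the conventional defaults rather than values taken from this machine.

// Illustrative sketch (not kubelet's implementation): count "nameserver"
// entries in resolv.conf and report the ones a three-server limit would drop.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // resolver limit that triggers the kubelet warning

func main() {
	f, err := os.Open("/etc/resolv.conf") // kubelet's --resolv-conf default; assumption here
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var nameservers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if len(nameservers) > maxNameservers {
		// Mirrors the gist of the logged message: extra servers are omitted.
		fmt.Printf("Nameserver limits exceeded: applying %v, omitting %v\n",
			nameservers[:maxNameservers], nameservers[maxNameservers:])
	}
}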
Aug 5 21:51:53.141234 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2573) Aug 5 21:51:53.175015 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2574) Aug 5 21:51:53.954459 kubelet[2490]: I0805 21:51:53.954431 2490 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 5 21:51:53.954786 containerd[1437]: time="2024-08-05T21:51:53.954755987Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 5 21:51:53.955061 kubelet[2490]: I0805 21:51:53.954912 2490 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 5 21:51:54.751550 kubelet[2490]: I0805 21:51:54.751454 2490 topology_manager.go:215] "Topology Admit Handler" podUID="1bc6bc23-4051-4f91-859c-98b687c1a0cd" podNamespace="kube-system" podName="kube-proxy-zxknt" Aug 5 21:51:54.766439 kubelet[2490]: I0805 21:51:54.765028 2490 topology_manager.go:215] "Topology Admit Handler" podUID="de02f5f4-124f-4eb0-831e-2a80f52dd188" podNamespace="kube-system" podName="cilium-t26gx" Aug 5 21:51:54.770661 systemd[1]: Created slice kubepods-besteffort-pod1bc6bc23_4051_4f91_859c_98b687c1a0cd.slice - libcontainer container kubepods-besteffort-pod1bc6bc23_4051_4f91_859c_98b687c1a0cd.slice. Aug 5 21:51:54.786333 systemd[1]: Created slice kubepods-burstable-podde02f5f4_124f_4eb0_831e_2a80f52dd188.slice - libcontainer container kubepods-burstable-podde02f5f4_124f_4eb0_831e_2a80f52dd188.slice. Aug 5 21:51:54.797478 kubelet[2490]: I0805 21:51:54.797377 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1bc6bc23-4051-4f91-859c-98b687c1a0cd-lib-modules\") pod \"kube-proxy-zxknt\" (UID: \"1bc6bc23-4051-4f91-859c-98b687c1a0cd\") " pod="kube-system/kube-proxy-zxknt" Aug 5 21:51:54.797478 kubelet[2490]: I0805 21:51:54.797436 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-etc-cni-netd\") pod \"cilium-t26gx\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " pod="kube-system/cilium-t26gx" Aug 5 21:51:54.798047 kubelet[2490]: I0805 21:51:54.797793 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-xtables-lock\") pod \"cilium-t26gx\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " pod="kube-system/cilium-t26gx" Aug 5 21:51:54.798047 kubelet[2490]: I0805 21:51:54.797842 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1bc6bc23-4051-4f91-859c-98b687c1a0cd-kube-proxy\") pod \"kube-proxy-zxknt\" (UID: \"1bc6bc23-4051-4f91-859c-98b687c1a0cd\") " pod="kube-system/kube-proxy-zxknt" Aug 5 21:51:54.798047 kubelet[2490]: I0805 21:51:54.797894 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-bpf-maps\") pod \"cilium-t26gx\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " pod="kube-system/cilium-t26gx" Aug 5 21:51:54.798047 kubelet[2490]: I0805 21:51:54.797915 2490 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1bc6bc23-4051-4f91-859c-98b687c1a0cd-xtables-lock\") pod \"kube-proxy-zxknt\" (UID: \"1bc6bc23-4051-4f91-859c-98b687c1a0cd\") " pod="kube-system/kube-proxy-zxknt" Aug 5 21:51:54.798047 kubelet[2490]: I0805 21:51:54.797939 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-host-proc-sys-net\") pod \"cilium-t26gx\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " pod="kube-system/cilium-t26gx" Aug 5 21:51:54.798047 kubelet[2490]: I0805 21:51:54.797963 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de02f5f4-124f-4eb0-831e-2a80f52dd188-hubble-tls\") pod \"cilium-t26gx\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " pod="kube-system/cilium-t26gx" Aug 5 21:51:54.798334 kubelet[2490]: I0805 21:51:54.797985 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-cni-path\") pod \"cilium-t26gx\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " pod="kube-system/cilium-t26gx" Aug 5 21:51:54.798334 kubelet[2490]: I0805 21:51:54.798009 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de02f5f4-124f-4eb0-831e-2a80f52dd188-cilium-config-path\") pod \"cilium-t26gx\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " pod="kube-system/cilium-t26gx" Aug 5 21:51:54.798334 kubelet[2490]: I0805 21:51:54.798056 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-282d7\" (UniqueName: \"kubernetes.io/projected/1bc6bc23-4051-4f91-859c-98b687c1a0cd-kube-api-access-282d7\") pod \"kube-proxy-zxknt\" (UID: \"1bc6bc23-4051-4f91-859c-98b687c1a0cd\") " pod="kube-system/kube-proxy-zxknt" Aug 5 21:51:54.798334 kubelet[2490]: I0805 21:51:54.798088 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-lib-modules\") pod \"cilium-t26gx\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " pod="kube-system/cilium-t26gx" Aug 5 21:51:54.798334 kubelet[2490]: I0805 21:51:54.798110 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkn9n\" (UniqueName: \"kubernetes.io/projected/de02f5f4-124f-4eb0-831e-2a80f52dd188-kube-api-access-vkn9n\") pod \"cilium-t26gx\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " pod="kube-system/cilium-t26gx" Aug 5 21:51:54.798513 kubelet[2490]: I0805 21:51:54.798149 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de02f5f4-124f-4eb0-831e-2a80f52dd188-clustermesh-secrets\") pod \"cilium-t26gx\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " pod="kube-system/cilium-t26gx" Aug 5 21:51:54.798513 kubelet[2490]: I0805 21:51:54.798173 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-host-proc-sys-kernel\") pod \"cilium-t26gx\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " pod="kube-system/cilium-t26gx" Aug 5 21:51:54.798513 kubelet[2490]: I0805 21:51:54.798195 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-cilium-run\") pod \"cilium-t26gx\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " pod="kube-system/cilium-t26gx" Aug 5 21:51:54.798513 kubelet[2490]: I0805 21:51:54.798234 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-cilium-cgroup\") pod \"cilium-t26gx\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " pod="kube-system/cilium-t26gx" Aug 5 21:51:54.798513 kubelet[2490]: I0805 21:51:54.798258 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-hostproc\") pod \"cilium-t26gx\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " pod="kube-system/cilium-t26gx" Aug 5 21:51:54.966178 kubelet[2490]: I0805 21:51:54.965619 2490 topology_manager.go:215] "Topology Admit Handler" podUID="4043cf91-2021-417a-9930-5945d81111e6" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-xbmw5" Aug 5 21:51:54.974323 systemd[1]: Created slice kubepods-besteffort-pod4043cf91_2021_417a_9930_5945d81111e6.slice - libcontainer container kubepods-besteffort-pod4043cf91_2021_417a_9930_5945d81111e6.slice. Aug 5 21:51:55.000012 kubelet[2490]: I0805 21:51:54.999974 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq2lq\" (UniqueName: \"kubernetes.io/projected/4043cf91-2021-417a-9930-5945d81111e6-kube-api-access-dq2lq\") pod \"cilium-operator-6bc8ccdb58-xbmw5\" (UID: \"4043cf91-2021-417a-9930-5945d81111e6\") " pod="kube-system/cilium-operator-6bc8ccdb58-xbmw5" Aug 5 21:51:55.000012 kubelet[2490]: I0805 21:51:55.000023 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4043cf91-2021-417a-9930-5945d81111e6-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-xbmw5\" (UID: \"4043cf91-2021-417a-9930-5945d81111e6\") " pod="kube-system/cilium-operator-6bc8ccdb58-xbmw5" Aug 5 21:51:55.083919 kubelet[2490]: E0805 21:51:55.083363 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:55.084945 containerd[1437]: time="2024-08-05T21:51:55.084186455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zxknt,Uid:1bc6bc23-4051-4f91-859c-98b687c1a0cd,Namespace:kube-system,Attempt:0,}" Aug 5 21:51:55.090499 kubelet[2490]: E0805 21:51:55.090474 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:55.092047 containerd[1437]: time="2024-08-05T21:51:55.091916196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t26gx,Uid:de02f5f4-124f-4eb0-831e-2a80f52dd188,Namespace:kube-system,Attempt:0,}" Aug 5 21:51:55.108871 containerd[1437]: 
time="2024-08-05T21:51:55.107511194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:51:55.108871 containerd[1437]: time="2024-08-05T21:51:55.107569633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:51:55.108871 containerd[1437]: time="2024-08-05T21:51:55.107588353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:51:55.108871 containerd[1437]: time="2024-08-05T21:51:55.107601832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:51:55.116154 containerd[1437]: time="2024-08-05T21:51:55.116068476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:51:55.116154 containerd[1437]: time="2024-08-05T21:51:55.116122355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:51:55.116315 containerd[1437]: time="2024-08-05T21:51:55.116149434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:51:55.116315 containerd[1437]: time="2024-08-05T21:51:55.116164594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:51:55.131963 systemd[1]: Started cri-containerd-bfa4746ef0d07cd16c950f50625b4c879f379936660342234d66e7a5ac71a18e.scope - libcontainer container bfa4746ef0d07cd16c950f50625b4c879f379936660342234d66e7a5ac71a18e. Aug 5 21:51:55.134482 systemd[1]: Started cri-containerd-d6110ab38853872af6719597062e13425beb92a612789f5fca3bc640f59f8c41.scope - libcontainer container d6110ab38853872af6719597062e13425beb92a612789f5fca3bc640f59f8c41. 
Aug 5 21:51:55.154958 containerd[1437]: time="2024-08-05T21:51:55.154876777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zxknt,Uid:1bc6bc23-4051-4f91-859c-98b687c1a0cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfa4746ef0d07cd16c950f50625b4c879f379936660342234d66e7a5ac71a18e\"" Aug 5 21:51:55.155580 kubelet[2490]: E0805 21:51:55.155555 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:55.157762 containerd[1437]: time="2024-08-05T21:51:55.157714832Z" level=info msg="CreateContainer within sandbox \"bfa4746ef0d07cd16c950f50625b4c879f379936660342234d66e7a5ac71a18e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 5 21:51:55.163523 containerd[1437]: time="2024-08-05T21:51:55.162544440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t26gx,Uid:de02f5f4-124f-4eb0-831e-2a80f52dd188,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6110ab38853872af6719597062e13425beb92a612789f5fca3bc640f59f8c41\"" Aug 5 21:51:55.164472 kubelet[2490]: E0805 21:51:55.164448 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:55.166242 containerd[1437]: time="2024-08-05T21:51:55.166195115Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 5 21:51:55.178750 containerd[1437]: time="2024-08-05T21:51:55.178679666Z" level=info msg="CreateContainer within sandbox \"bfa4746ef0d07cd16c950f50625b4c879f379936660342234d66e7a5ac71a18e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8ab0212c21a3ad9cd9497d2882c300ee6da3b3645bd33ecfcc636f2957d4bd4a\"" Aug 5 21:51:55.180222 containerd[1437]: time="2024-08-05T21:51:55.180175111Z" level=info msg="StartContainer for \"8ab0212c21a3ad9cd9497d2882c300ee6da3b3645bd33ecfcc636f2957d4bd4a\"" Aug 5 21:51:55.211931 systemd[1]: Started cri-containerd-8ab0212c21a3ad9cd9497d2882c300ee6da3b3645bd33ecfcc636f2957d4bd4a.scope - libcontainer container 8ab0212c21a3ad9cd9497d2882c300ee6da3b3645bd33ecfcc636f2957d4bd4a. Aug 5 21:51:55.245722 containerd[1437]: time="2024-08-05T21:51:55.245119887Z" level=info msg="StartContainer for \"8ab0212c21a3ad9cd9497d2882c300ee6da3b3645bd33ecfcc636f2957d4bd4a\" returns successfully" Aug 5 21:51:55.281277 kubelet[2490]: E0805 21:51:55.281165 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:55.282875 containerd[1437]: time="2024-08-05T21:51:55.282826094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-xbmw5,Uid:4043cf91-2021-417a-9930-5945d81111e6,Namespace:kube-system,Attempt:0,}" Aug 5 21:51:55.308292 containerd[1437]: time="2024-08-05T21:51:55.304782865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:51:55.308292 containerd[1437]: time="2024-08-05T21:51:55.304849744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:51:55.308292 containerd[1437]: time="2024-08-05T21:51:55.305482889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:51:55.308292 containerd[1437]: time="2024-08-05T21:51:55.305520848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:51:55.328206 systemd[1]: Started cri-containerd-f7be4973ec9cde822c32b7007c58c1b8d1f9087baa594a4ba61ddfb77780ddd1.scope - libcontainer container f7be4973ec9cde822c32b7007c58c1b8d1f9087baa594a4ba61ddfb77780ddd1. Aug 5 21:51:55.375547 containerd[1437]: time="2024-08-05T21:51:55.375510307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-xbmw5,Uid:4043cf91-2021-417a-9930-5945d81111e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7be4973ec9cde822c32b7007c58c1b8d1f9087baa594a4ba61ddfb77780ddd1\"" Aug 5 21:51:55.382934 kubelet[2490]: E0805 21:51:55.382886 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:56.061231 kubelet[2490]: E0805 21:51:56.061191 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:56.078969 kubelet[2490]: I0805 21:51:56.078927 2490 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zxknt" podStartSLOduration=2.078888862 podCreationTimestamp="2024-08-05 21:51:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:51:56.078011881 +0000 UTC m=+13.155933856" watchObservedRunningTime="2024-08-05 21:51:56.078888862 +0000 UTC m=+13.156810837" Aug 5 21:51:56.541416 kubelet[2490]: E0805 21:51:56.541387 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:57.070969 kubelet[2490]: E0805 21:51:57.070572 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:00.241068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3252674138.mount: Deactivated successfully. 
Aug 5 21:52:01.626057 containerd[1437]: time="2024-08-05T21:52:01.625997921Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:01.626783 containerd[1437]: time="2024-08-05T21:52:01.626751268Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651510" Aug 5 21:52:01.627440 containerd[1437]: time="2024-08-05T21:52:01.627404536Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:01.629136 containerd[1437]: time="2024-08-05T21:52:01.628996148Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.462759395s" Aug 5 21:52:01.629136 containerd[1437]: time="2024-08-05T21:52:01.629042748Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Aug 5 21:52:01.646735 containerd[1437]: time="2024-08-05T21:52:01.646453123Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 5 21:52:01.651908 containerd[1437]: time="2024-08-05T21:52:01.651869949Z" level=info msg="CreateContainer within sandbox \"d6110ab38853872af6719597062e13425beb92a612789f5fca3bc640f59f8c41\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 5 21:52:01.666803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3842350340.mount: Deactivated successfully. Aug 5 21:52:01.671226 containerd[1437]: time="2024-08-05T21:52:01.671186411Z" level=info msg="CreateContainer within sandbox \"d6110ab38853872af6719597062e13425beb92a612789f5fca3bc640f59f8c41\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b32144316fdee054de79e1801c84fbc14e06c2cc4d42dd43fb391f50904dabab\"" Aug 5 21:52:01.671716 containerd[1437]: time="2024-08-05T21:52:01.671690202Z" level=info msg="StartContainer for \"b32144316fdee054de79e1801c84fbc14e06c2cc4d42dd43fb391f50904dabab\"" Aug 5 21:52:01.701952 systemd[1]: Started cri-containerd-b32144316fdee054de79e1801c84fbc14e06c2cc4d42dd43fb391f50904dabab.scope - libcontainer container b32144316fdee054de79e1801c84fbc14e06c2cc4d42dd43fb391f50904dabab. Aug 5 21:52:01.745327 containerd[1437]: time="2024-08-05T21:52:01.745275476Z" level=info msg="StartContainer for \"b32144316fdee054de79e1801c84fbc14e06c2cc4d42dd43fb391f50904dabab\" returns successfully" Aug 5 21:52:01.769428 systemd[1]: cri-containerd-b32144316fdee054de79e1801c84fbc14e06c2cc4d42dd43fb391f50904dabab.scope: Deactivated successfully. 
Aug 5 21:52:01.918190 containerd[1437]: time="2024-08-05T21:52:01.918054616Z" level=info msg="shim disconnected" id=b32144316fdee054de79e1801c84fbc14e06c2cc4d42dd43fb391f50904dabab namespace=k8s.io Aug 5 21:52:01.918190 containerd[1437]: time="2024-08-05T21:52:01.918110135Z" level=warning msg="cleaning up after shim disconnected" id=b32144316fdee054de79e1801c84fbc14e06c2cc4d42dd43fb391f50904dabab namespace=k8s.io Aug 5 21:52:01.918190 containerd[1437]: time="2024-08-05T21:52:01.918119135Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:52:02.085183 kubelet[2490]: E0805 21:52:02.085151 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:02.086918 containerd[1437]: time="2024-08-05T21:52:02.086882970Z" level=info msg="CreateContainer within sandbox \"d6110ab38853872af6719597062e13425beb92a612789f5fca3bc640f59f8c41\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 5 21:52:02.106654 containerd[1437]: time="2024-08-05T21:52:02.106592760Z" level=info msg="CreateContainer within sandbox \"d6110ab38853872af6719597062e13425beb92a612789f5fca3bc640f59f8c41\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"62bf54a003a11417ed0a5238e67c49f179786cfecf26a7ab85b1f66f18378a1f\"" Aug 5 21:52:02.108215 containerd[1437]: time="2024-08-05T21:52:02.107118671Z" level=info msg="StartContainer for \"62bf54a003a11417ed0a5238e67c49f179786cfecf26a7ab85b1f66f18378a1f\"" Aug 5 21:52:02.135928 systemd[1]: Started cri-containerd-62bf54a003a11417ed0a5238e67c49f179786cfecf26a7ab85b1f66f18378a1f.scope - libcontainer container 62bf54a003a11417ed0a5238e67c49f179786cfecf26a7ab85b1f66f18378a1f. Aug 5 21:52:02.162197 containerd[1437]: time="2024-08-05T21:52:02.162137951Z" level=info msg="StartContainer for \"62bf54a003a11417ed0a5238e67c49f179786cfecf26a7ab85b1f66f18378a1f\" returns successfully" Aug 5 21:52:02.176685 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 5 21:52:02.177073 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 5 21:52:02.177150 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 5 21:52:02.183087 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 21:52:02.183277 systemd[1]: cri-containerd-62bf54a003a11417ed0a5238e67c49f179786cfecf26a7ab85b1f66f18378a1f.scope: Deactivated successfully. Aug 5 21:52:02.210394 containerd[1437]: time="2024-08-05T21:52:02.210318065Z" level=info msg="shim disconnected" id=62bf54a003a11417ed0a5238e67c49f179786cfecf26a7ab85b1f66f18378a1f namespace=k8s.io Aug 5 21:52:02.210394 containerd[1437]: time="2024-08-05T21:52:02.210380624Z" level=warning msg="cleaning up after shim disconnected" id=62bf54a003a11417ed0a5238e67c49f179786cfecf26a7ab85b1f66f18378a1f namespace=k8s.io Aug 5 21:52:02.210394 containerd[1437]: time="2024-08-05T21:52:02.210390424Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:52:02.213710 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 21:52:02.664602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b32144316fdee054de79e1801c84fbc14e06c2cc4d42dd43fb391f50904dabab-rootfs.mount: Deactivated successfully. 
Aug 5 21:52:02.903277 containerd[1437]: time="2024-08-05T21:52:02.903210674Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:02.904605 containerd[1437]: time="2024-08-05T21:52:02.904562532Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138282" Aug 5 21:52:02.905838 containerd[1437]: time="2024-08-05T21:52:02.905794231Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:02.907221 containerd[1437]: time="2024-08-05T21:52:02.906888653Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.26039077s" Aug 5 21:52:02.907221 containerd[1437]: time="2024-08-05T21:52:02.906922252Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Aug 5 21:52:02.908971 containerd[1437]: time="2024-08-05T21:52:02.908927659Z" level=info msg="CreateContainer within sandbox \"f7be4973ec9cde822c32b7007c58c1b8d1f9087baa594a4ba61ddfb77780ddd1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 5 21:52:02.928349 containerd[1437]: time="2024-08-05T21:52:02.928224856Z" level=info msg="CreateContainer within sandbox \"f7be4973ec9cde822c32b7007c58c1b8d1f9087baa594a4ba61ddfb77780ddd1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5\"" Aug 5 21:52:02.928473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3539246135.mount: Deactivated successfully. Aug 5 21:52:02.929528 containerd[1437]: time="2024-08-05T21:52:02.929481515Z" level=info msg="StartContainer for \"cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5\"" Aug 5 21:52:02.962959 systemd[1]: Started cri-containerd-cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5.scope - libcontainer container cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5. 
Aug 5 21:52:03.002321 containerd[1437]: time="2024-08-05T21:52:03.002276498Z" level=info msg="StartContainer for \"cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5\" returns successfully" Aug 5 21:52:03.088915 kubelet[2490]: E0805 21:52:03.088508 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:03.094339 kubelet[2490]: E0805 21:52:03.089484 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:03.102264 containerd[1437]: time="2024-08-05T21:52:03.102202577Z" level=info msg="CreateContainer within sandbox \"d6110ab38853872af6719597062e13425beb92a612789f5fca3bc640f59f8c41\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 5 21:52:03.113758 kubelet[2490]: I0805 21:52:03.113698 2490 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-xbmw5" podStartSLOduration=1.589914429 podCreationTimestamp="2024-08-05 21:51:54 +0000 UTC" firstStartedPulling="2024-08-05 21:51:55.383406244 +0000 UTC m=+12.461328219" lastFinishedPulling="2024-08-05 21:52:02.907150928 +0000 UTC m=+19.985072903" observedRunningTime="2024-08-05 21:52:03.113363198 +0000 UTC m=+20.191285213" watchObservedRunningTime="2024-08-05 21:52:03.113659113 +0000 UTC m=+20.191581088" Aug 5 21:52:03.132475 containerd[1437]: time="2024-08-05T21:52:03.132379693Z" level=info msg="CreateContainer within sandbox \"d6110ab38853872af6719597062e13425beb92a612789f5fca3bc640f59f8c41\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"58ba6b4832f138788f25cee2d7f3c5a5c38f1d7e2664ab0fc682b0ee62275454\"" Aug 5 21:52:03.132968 containerd[1437]: time="2024-08-05T21:52:03.132939244Z" level=info msg="StartContainer for \"58ba6b4832f138788f25cee2d7f3c5a5c38f1d7e2664ab0fc682b0ee62275454\"" Aug 5 21:52:03.167958 systemd[1]: Started cri-containerd-58ba6b4832f138788f25cee2d7f3c5a5c38f1d7e2664ab0fc682b0ee62275454.scope - libcontainer container 58ba6b4832f138788f25cee2d7f3c5a5c38f1d7e2664ab0fc682b0ee62275454. Aug 5 21:52:03.206559 containerd[1437]: time="2024-08-05T21:52:03.206297109Z" level=info msg="StartContainer for \"58ba6b4832f138788f25cee2d7f3c5a5c38f1d7e2664ab0fc682b0ee62275454\" returns successfully" Aug 5 21:52:03.236733 systemd[1]: cri-containerd-58ba6b4832f138788f25cee2d7f3c5a5c38f1d7e2664ab0fc682b0ee62275454.scope: Deactivated successfully. 
Aug 5 21:52:03.346395 containerd[1437]: time="2024-08-05T21:52:03.346316305Z" level=info msg="shim disconnected" id=58ba6b4832f138788f25cee2d7f3c5a5c38f1d7e2664ab0fc682b0ee62275454 namespace=k8s.io Aug 5 21:52:03.346395 containerd[1437]: time="2024-08-05T21:52:03.346386664Z" level=warning msg="cleaning up after shim disconnected" id=58ba6b4832f138788f25cee2d7f3c5a5c38f1d7e2664ab0fc682b0ee62275454 namespace=k8s.io Aug 5 21:52:03.346395 containerd[1437]: time="2024-08-05T21:52:03.346397104Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:52:04.092346 kubelet[2490]: E0805 21:52:04.092289 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:04.093087 kubelet[2490]: E0805 21:52:04.092803 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:04.094902 containerd[1437]: time="2024-08-05T21:52:04.094864133Z" level=info msg="CreateContainer within sandbox \"d6110ab38853872af6719597062e13425beb92a612789f5fca3bc640f59f8c41\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 5 21:52:04.111057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount273392967.mount: Deactivated successfully. Aug 5 21:52:04.112124 containerd[1437]: time="2024-08-05T21:52:04.112082909Z" level=info msg="CreateContainer within sandbox \"d6110ab38853872af6719597062e13425beb92a612789f5fca3bc640f59f8c41\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a09647b16fc27201ac473a19e454888fe5d3d10bfdad0861e154be472714cb79\"" Aug 5 21:52:04.112777 containerd[1437]: time="2024-08-05T21:52:04.112737139Z" level=info msg="StartContainer for \"a09647b16fc27201ac473a19e454888fe5d3d10bfdad0861e154be472714cb79\"" Aug 5 21:52:04.137889 systemd[1]: Started cri-containerd-a09647b16fc27201ac473a19e454888fe5d3d10bfdad0861e154be472714cb79.scope - libcontainer container a09647b16fc27201ac473a19e454888fe5d3d10bfdad0861e154be472714cb79. Aug 5 21:52:04.156071 systemd[1]: cri-containerd-a09647b16fc27201ac473a19e454888fe5d3d10bfdad0861e154be472714cb79.scope: Deactivated successfully. Aug 5 21:52:04.157294 containerd[1437]: time="2024-08-05T21:52:04.157020258Z" level=info msg="StartContainer for \"a09647b16fc27201ac473a19e454888fe5d3d10bfdad0861e154be472714cb79\" returns successfully" Aug 5 21:52:04.181204 containerd[1437]: time="2024-08-05T21:52:04.181138568Z" level=info msg="shim disconnected" id=a09647b16fc27201ac473a19e454888fe5d3d10bfdad0861e154be472714cb79 namespace=k8s.io Aug 5 21:52:04.181204 containerd[1437]: time="2024-08-05T21:52:04.181199087Z" level=warning msg="cleaning up after shim disconnected" id=a09647b16fc27201ac473a19e454888fe5d3d10bfdad0861e154be472714cb79 namespace=k8s.io Aug 5 21:52:04.181204 containerd[1437]: time="2024-08-05T21:52:04.181208767Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:52:04.665991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a09647b16fc27201ac473a19e454888fe5d3d10bfdad0861e154be472714cb79-rootfs.mount: Deactivated successfully. 
Aug 5 21:52:05.098504 kubelet[2490]: E0805 21:52:05.098122 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:05.101665 containerd[1437]: time="2024-08-05T21:52:05.101623008Z" level=info msg="CreateContainer within sandbox \"d6110ab38853872af6719597062e13425beb92a612789f5fca3bc640f59f8c41\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 5 21:52:05.114074 containerd[1437]: time="2024-08-05T21:52:05.113852268Z" level=info msg="CreateContainer within sandbox \"d6110ab38853872af6719597062e13425beb92a612789f5fca3bc640f59f8c41\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2\"" Aug 5 21:52:05.114585 containerd[1437]: time="2024-08-05T21:52:05.114514578Z" level=info msg="StartContainer for \"7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2\"" Aug 5 21:52:05.142965 systemd[1]: Started cri-containerd-7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2.scope - libcontainer container 7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2. Aug 5 21:52:05.167828 containerd[1437]: time="2024-08-05T21:52:05.167782633Z" level=info msg="StartContainer for \"7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2\" returns successfully" Aug 5 21:52:05.258486 kubelet[2490]: I0805 21:52:05.257587 2490 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Aug 5 21:52:05.275921 kubelet[2490]: I0805 21:52:05.275756 2490 topology_manager.go:215] "Topology Admit Handler" podUID="c8e6d579-1f83-45ad-a8a6-ed559fe54c38" podNamespace="kube-system" podName="coredns-5dd5756b68-snqpx" Aug 5 21:52:05.280381 kubelet[2490]: I0805 21:52:05.278336 2490 topology_manager.go:215] "Topology Admit Handler" podUID="b9de07ba-2569-4fcc-a073-086c36be5933" podNamespace="kube-system" podName="coredns-5dd5756b68-xzs9f" Aug 5 21:52:05.287162 systemd[1]: Created slice kubepods-burstable-podc8e6d579_1f83_45ad_a8a6_ed559fe54c38.slice - libcontainer container kubepods-burstable-podc8e6d579_1f83_45ad_a8a6_ed559fe54c38.slice. Aug 5 21:52:05.297006 systemd[1]: Created slice kubepods-burstable-podb9de07ba_2569_4fcc_a073_086c36be5933.slice - libcontainer container kubepods-burstable-podb9de07ba_2569_4fcc_a073_086c36be5933.slice. 
Aug 5 21:52:05.376775 kubelet[2490]: I0805 21:52:05.376719 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kwwm\" (UniqueName: \"kubernetes.io/projected/b9de07ba-2569-4fcc-a073-086c36be5933-kube-api-access-8kwwm\") pod \"coredns-5dd5756b68-xzs9f\" (UID: \"b9de07ba-2569-4fcc-a073-086c36be5933\") " pod="kube-system/coredns-5dd5756b68-xzs9f" Aug 5 21:52:05.376775 kubelet[2490]: I0805 21:52:05.376787 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9de07ba-2569-4fcc-a073-086c36be5933-config-volume\") pod \"coredns-5dd5756b68-xzs9f\" (UID: \"b9de07ba-2569-4fcc-a073-086c36be5933\") " pod="kube-system/coredns-5dd5756b68-xzs9f" Aug 5 21:52:05.376951 kubelet[2490]: I0805 21:52:05.376812 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8e6d579-1f83-45ad-a8a6-ed559fe54c38-config-volume\") pod \"coredns-5dd5756b68-snqpx\" (UID: \"c8e6d579-1f83-45ad-a8a6-ed559fe54c38\") " pod="kube-system/coredns-5dd5756b68-snqpx" Aug 5 21:52:05.376951 kubelet[2490]: I0805 21:52:05.376831 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sgng\" (UniqueName: \"kubernetes.io/projected/c8e6d579-1f83-45ad-a8a6-ed559fe54c38-kube-api-access-4sgng\") pod \"coredns-5dd5756b68-snqpx\" (UID: \"c8e6d579-1f83-45ad-a8a6-ed559fe54c38\") " pod="kube-system/coredns-5dd5756b68-snqpx" Aug 5 21:52:05.593348 kubelet[2490]: E0805 21:52:05.593274 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:05.594127 containerd[1437]: time="2024-08-05T21:52:05.594081507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-snqpx,Uid:c8e6d579-1f83-45ad-a8a6-ed559fe54c38,Namespace:kube-system,Attempt:0,}" Aug 5 21:52:05.599399 kubelet[2490]: E0805 21:52:05.599373 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:05.600078 containerd[1437]: time="2024-08-05T21:52:05.599810383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xzs9f,Uid:b9de07ba-2569-4fcc-a073-086c36be5933,Namespace:kube-system,Attempt:0,}" Aug 5 21:52:05.673549 systemd[1]: run-containerd-runc-k8s.io-7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2-runc.pgFxtD.mount: Deactivated successfully. 
Aug 5 21:52:06.100301 kubelet[2490]: E0805 21:52:06.100181 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:06.114551 kubelet[2490]: I0805 21:52:06.114219 2490 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-t26gx" podStartSLOduration=5.644867427 podCreationTimestamp="2024-08-05 21:51:54 +0000 UTC" firstStartedPulling="2024-08-05 21:51:55.165755445 +0000 UTC m=+12.243677420" lastFinishedPulling="2024-08-05 21:52:01.635050923 +0000 UTC m=+18.712972898" observedRunningTime="2024-08-05 21:52:06.112473848 +0000 UTC m=+23.190395783" watchObservedRunningTime="2024-08-05 21:52:06.114162905 +0000 UTC m=+23.192084880" Aug 5 21:52:07.102271 kubelet[2490]: E0805 21:52:07.102230 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:07.474075 systemd-networkd[1370]: cilium_host: Link UP Aug 5 21:52:07.474196 systemd-networkd[1370]: cilium_net: Link UP Aug 5 21:52:07.474199 systemd-networkd[1370]: cilium_net: Gained carrier Aug 5 21:52:07.474336 systemd-networkd[1370]: cilium_host: Gained carrier Aug 5 21:52:07.474473 systemd-networkd[1370]: cilium_host: Gained IPv6LL Aug 5 21:52:07.568538 systemd-networkd[1370]: cilium_vxlan: Link UP Aug 5 21:52:07.568723 systemd-networkd[1370]: cilium_vxlan: Gained carrier Aug 5 21:52:07.871792 kernel: NET: Registered PF_ALG protocol family Aug 5 21:52:08.104008 kubelet[2490]: E0805 21:52:08.103799 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:08.201862 systemd-networkd[1370]: cilium_net: Gained IPv6LL Aug 5 21:52:08.494969 systemd-networkd[1370]: lxc_health: Link UP Aug 5 21:52:08.499872 systemd-networkd[1370]: lxc_health: Gained carrier Aug 5 21:52:08.746313 systemd-networkd[1370]: lxc22c7cb264897: Link UP Aug 5 21:52:08.769371 kernel: eth0: renamed from tmp6c30f Aug 5 21:52:08.777321 kernel: eth0: renamed from tmp210fb Aug 5 21:52:08.783557 systemd-networkd[1370]: lxc1d3e8be9c2d9: Link UP Aug 5 21:52:08.785456 systemd-networkd[1370]: lxc22c7cb264897: Gained carrier Aug 5 21:52:08.787030 systemd-networkd[1370]: lxc1d3e8be9c2d9: Gained carrier Aug 5 21:52:08.840976 systemd-networkd[1370]: cilium_vxlan: Gained IPv6LL Aug 5 21:52:09.106106 kubelet[2490]: E0805 21:52:09.105995 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:09.865014 systemd-networkd[1370]: lxc_health: Gained IPv6LL Aug 5 21:52:10.057918 systemd-networkd[1370]: lxc22c7cb264897: Gained IPv6LL Aug 5 21:52:10.107785 kubelet[2490]: E0805 21:52:10.107758 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:10.633867 systemd-networkd[1370]: lxc1d3e8be9c2d9: Gained IPv6LL Aug 5 21:52:11.110081 kubelet[2490]: E0805 21:52:11.109755 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:12.393569 containerd[1437]: time="2024-08-05T21:52:12.393474570Z" 
level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:52:12.393569 containerd[1437]: time="2024-08-05T21:52:12.393531529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:52:12.393569 containerd[1437]: time="2024-08-05T21:52:12.393555689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:52:12.393569 containerd[1437]: time="2024-08-05T21:52:12.393570129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:52:12.394616 containerd[1437]: time="2024-08-05T21:52:12.394399519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:52:12.394616 containerd[1437]: time="2024-08-05T21:52:12.394446559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:52:12.394616 containerd[1437]: time="2024-08-05T21:52:12.394472879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:52:12.394616 containerd[1437]: time="2024-08-05T21:52:12.394488198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:52:12.407920 systemd[1]: run-containerd-runc-k8s.io-210fbd4cbe33d073239f6178e7ef5842b01c2188b5d70dd849b98df98402af74-runc.yKUDJ8.mount: Deactivated successfully. Aug 5 21:52:12.416915 systemd[1]: Started cri-containerd-210fbd4cbe33d073239f6178e7ef5842b01c2188b5d70dd849b98df98402af74.scope - libcontainer container 210fbd4cbe33d073239f6178e7ef5842b01c2188b5d70dd849b98df98402af74. Aug 5 21:52:12.419402 systemd[1]: Started cri-containerd-6c30f0b627920a9e33542db4e478c468e8008175eef7e248ff6b02555000c17d.scope - libcontainer container 6c30f0b627920a9e33542db4e478c468e8008175eef7e248ff6b02555000c17d. 
Aug 5 21:52:12.429495 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 21:52:12.431363 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 21:52:12.453513 containerd[1437]: time="2024-08-05T21:52:12.453470128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xzs9f,Uid:b9de07ba-2569-4fcc-a073-086c36be5933,Namespace:kube-system,Attempt:0,} returns sandbox id \"210fbd4cbe33d073239f6178e7ef5842b01c2188b5d70dd849b98df98402af74\"" Aug 5 21:52:12.454543 kubelet[2490]: E0805 21:52:12.454518 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:12.458105 containerd[1437]: time="2024-08-05T21:52:12.457825118Z" level=info msg="CreateContainer within sandbox \"210fbd4cbe33d073239f6178e7ef5842b01c2188b5d70dd849b98df98402af74\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 21:52:12.462652 containerd[1437]: time="2024-08-05T21:52:12.462577464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-snqpx,Uid:c8e6d579-1f83-45ad-a8a6-ed559fe54c38,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c30f0b627920a9e33542db4e478c468e8008175eef7e248ff6b02555000c17d\"" Aug 5 21:52:12.463236 kubelet[2490]: E0805 21:52:12.463220 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:12.471300 containerd[1437]: time="2024-08-05T21:52:12.471263285Z" level=info msg="CreateContainer within sandbox \"210fbd4cbe33d073239f6178e7ef5842b01c2188b5d70dd849b98df98402af74\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f2e74fba44f1e2fb4ca6b53133461e408fe29a9ce055107f86cf4314883c4716\"" Aug 5 21:52:12.472513 containerd[1437]: time="2024-08-05T21:52:12.471861679Z" level=info msg="StartContainer for \"f2e74fba44f1e2fb4ca6b53133461e408fe29a9ce055107f86cf4314883c4716\"" Aug 5 21:52:12.480776 containerd[1437]: time="2024-08-05T21:52:12.480510700Z" level=info msg="CreateContainer within sandbox \"6c30f0b627920a9e33542db4e478c468e8008175eef7e248ff6b02555000c17d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 21:52:12.495654 containerd[1437]: time="2024-08-05T21:52:12.495601929Z" level=info msg="CreateContainer within sandbox \"6c30f0b627920a9e33542db4e478c468e8008175eef7e248ff6b02555000c17d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"94078198d2ff708c55546d2c8c97dae0159013822a2cd7b47b4ded3e7ccc8e59\"" Aug 5 21:52:12.497088 containerd[1437]: time="2024-08-05T21:52:12.496648077Z" level=info msg="StartContainer for \"94078198d2ff708c55546d2c8c97dae0159013822a2cd7b47b4ded3e7ccc8e59\"" Aug 5 21:52:12.506947 systemd[1]: Started cri-containerd-f2e74fba44f1e2fb4ca6b53133461e408fe29a9ce055107f86cf4314883c4716.scope - libcontainer container f2e74fba44f1e2fb4ca6b53133461e408fe29a9ce055107f86cf4314883c4716. Aug 5 21:52:12.523914 systemd[1]: Started cri-containerd-94078198d2ff708c55546d2c8c97dae0159013822a2cd7b47b4ded3e7ccc8e59.scope - libcontainer container 94078198d2ff708c55546d2c8c97dae0159013822a2cd7b47b4ded3e7ccc8e59. 
Aug 5 21:52:12.542783 containerd[1437]: time="2024-08-05T21:52:12.542729753Z" level=info msg="StartContainer for \"f2e74fba44f1e2fb4ca6b53133461e408fe29a9ce055107f86cf4314883c4716\" returns successfully" Aug 5 21:52:12.580975 containerd[1437]: time="2024-08-05T21:52:12.580865239Z" level=info msg="StartContainer for \"94078198d2ff708c55546d2c8c97dae0159013822a2cd7b47b4ded3e7ccc8e59\" returns successfully" Aug 5 21:52:13.135781 kubelet[2490]: E0805 21:52:13.135279 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:13.141143 kubelet[2490]: E0805 21:52:13.141119 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:13.165769 kubelet[2490]: I0805 21:52:13.161962 2490 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-snqpx" podStartSLOduration=19.161923932 podCreationTimestamp="2024-08-05 21:51:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:52:13.16024755 +0000 UTC m=+30.238169565" watchObservedRunningTime="2024-08-05 21:52:13.161923932 +0000 UTC m=+30.239845907" Aug 5 21:52:13.165769 kubelet[2490]: I0805 21:52:13.162044 2490 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-xzs9f" podStartSLOduration=19.162028971 podCreationTimestamp="2024-08-05 21:51:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:52:13.148117604 +0000 UTC m=+30.226039579" watchObservedRunningTime="2024-08-05 21:52:13.162028971 +0000 UTC m=+30.239950946" Aug 5 21:52:14.142489 kubelet[2490]: E0805 21:52:14.142416 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:14.142838 kubelet[2490]: E0805 21:52:14.142496 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:14.806636 systemd[1]: Started sshd@7-10.0.0.97:22-10.0.0.1:52820.service - OpenSSH per-connection server daemon (10.0.0.1:52820). Aug 5 21:52:14.845619 sshd[3888]: Accepted publickey for core from 10.0.0.1 port 52820 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:14.847123 sshd[3888]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:14.850438 systemd-logind[1419]: New session 8 of user core. Aug 5 21:52:14.856900 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 5 21:52:14.983949 sshd[3888]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:14.987194 systemd[1]: sshd@7-10.0.0.97:22-10.0.0.1:52820.service: Deactivated successfully. Aug 5 21:52:14.989204 systemd[1]: session-8.scope: Deactivated successfully. Aug 5 21:52:14.989900 systemd-logind[1419]: Session 8 logged out. Waiting for processes to exit. Aug 5 21:52:14.990633 systemd-logind[1419]: Removed session 8. 
Aug 5 21:52:15.151064 kubelet[2490]: E0805 21:52:15.151024 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:15.151429 kubelet[2490]: E0805 21:52:15.151191 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:19.994996 systemd[1]: Started sshd@8-10.0.0.97:22-10.0.0.1:52824.service - OpenSSH per-connection server daemon (10.0.0.1:52824). Aug 5 21:52:20.039775 sshd[3911]: Accepted publickey for core from 10.0.0.1 port 52824 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:20.041265 sshd[3911]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:20.047289 systemd-logind[1419]: New session 9 of user core. Aug 5 21:52:20.055983 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 5 21:52:20.171947 sshd[3911]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:20.175765 systemd[1]: sshd@8-10.0.0.97:22-10.0.0.1:52824.service: Deactivated successfully. Aug 5 21:52:20.177612 systemd[1]: session-9.scope: Deactivated successfully. Aug 5 21:52:20.179496 systemd-logind[1419]: Session 9 logged out. Waiting for processes to exit. Aug 5 21:52:20.180459 systemd-logind[1419]: Removed session 9. Aug 5 21:52:25.185067 systemd[1]: Started sshd@9-10.0.0.97:22-10.0.0.1:38366.service - OpenSSH per-connection server daemon (10.0.0.1:38366). Aug 5 21:52:25.224620 sshd[3927]: Accepted publickey for core from 10.0.0.1 port 38366 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:25.226053 sshd[3927]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:25.232087 systemd-logind[1419]: New session 10 of user core. Aug 5 21:52:25.238852 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 5 21:52:25.355054 sshd[3927]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:25.358449 systemd[1]: sshd@9-10.0.0.97:22-10.0.0.1:38366.service: Deactivated successfully. Aug 5 21:52:25.360382 systemd[1]: session-10.scope: Deactivated successfully. Aug 5 21:52:25.361211 systemd-logind[1419]: Session 10 logged out. Waiting for processes to exit. Aug 5 21:52:25.363365 systemd-logind[1419]: Removed session 10. Aug 5 21:52:30.370229 systemd[1]: Started sshd@10-10.0.0.97:22-10.0.0.1:38382.service - OpenSSH per-connection server daemon (10.0.0.1:38382). Aug 5 21:52:30.413641 sshd[3944]: Accepted publickey for core from 10.0.0.1 port 38382 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:30.415271 sshd[3944]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:30.419733 systemd-logind[1419]: New session 11 of user core. Aug 5 21:52:30.431138 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 5 21:52:30.579356 sshd[3944]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:30.586359 systemd[1]: sshd@10-10.0.0.97:22-10.0.0.1:38382.service: Deactivated successfully. Aug 5 21:52:30.590621 systemd[1]: session-11.scope: Deactivated successfully. Aug 5 21:52:30.592216 systemd-logind[1419]: Session 11 logged out. Waiting for processes to exit. Aug 5 21:52:30.603021 systemd[1]: Started sshd@11-10.0.0.97:22-10.0.0.1:38390.service - OpenSSH per-connection server daemon (10.0.0.1:38390). 
Aug 5 21:52:30.604019 systemd-logind[1419]: Removed session 11. Aug 5 21:52:30.640730 sshd[3959]: Accepted publickey for core from 10.0.0.1 port 38390 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:30.642124 sshd[3959]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:30.648721 systemd-logind[1419]: New session 12 of user core. Aug 5 21:52:30.662922 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 5 21:52:31.415432 sshd[3959]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:31.426491 systemd[1]: sshd@11-10.0.0.97:22-10.0.0.1:38390.service: Deactivated successfully. Aug 5 21:52:31.431630 systemd[1]: session-12.scope: Deactivated successfully. Aug 5 21:52:31.435210 systemd-logind[1419]: Session 12 logged out. Waiting for processes to exit. Aug 5 21:52:31.441434 systemd[1]: Started sshd@12-10.0.0.97:22-10.0.0.1:38392.service - OpenSSH per-connection server daemon (10.0.0.1:38392). Aug 5 21:52:31.442845 systemd-logind[1419]: Removed session 12. Aug 5 21:52:31.478242 sshd[3971]: Accepted publickey for core from 10.0.0.1 port 38392 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:31.479633 sshd[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:31.484247 systemd-logind[1419]: New session 13 of user core. Aug 5 21:52:31.498927 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 5 21:52:31.619069 sshd[3971]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:31.621915 systemd[1]: sshd@12-10.0.0.97:22-10.0.0.1:38392.service: Deactivated successfully. Aug 5 21:52:31.623731 systemd[1]: session-13.scope: Deactivated successfully. Aug 5 21:52:31.625412 systemd-logind[1419]: Session 13 logged out. Waiting for processes to exit. Aug 5 21:52:31.626417 systemd-logind[1419]: Removed session 13. Aug 5 21:52:36.629520 systemd[1]: Started sshd@13-10.0.0.97:22-10.0.0.1:52878.service - OpenSSH per-connection server daemon (10.0.0.1:52878). Aug 5 21:52:36.676720 sshd[3986]: Accepted publickey for core from 10.0.0.1 port 52878 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:36.678076 sshd[3986]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:36.682979 systemd-logind[1419]: New session 14 of user core. Aug 5 21:52:36.694918 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 5 21:52:36.797244 sshd[3986]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:36.809481 systemd[1]: sshd@13-10.0.0.97:22-10.0.0.1:52878.service: Deactivated successfully. Aug 5 21:52:36.812273 systemd[1]: session-14.scope: Deactivated successfully. Aug 5 21:52:36.814137 systemd-logind[1419]: Session 14 logged out. Waiting for processes to exit. Aug 5 21:52:36.824220 systemd[1]: Started sshd@14-10.0.0.97:22-10.0.0.1:52894.service - OpenSSH per-connection server daemon (10.0.0.1:52894). Aug 5 21:52:36.825332 systemd-logind[1419]: Removed session 14. Aug 5 21:52:36.856905 sshd[4000]: Accepted publickey for core from 10.0.0.1 port 52894 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:36.858202 sshd[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:36.862406 systemd-logind[1419]: New session 15 of user core. Aug 5 21:52:36.874497 systemd[1]: Started session-15.scope - Session 15 of User core. 
Aug 5 21:52:37.091915 sshd[4000]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:37.103406 systemd[1]: sshd@14-10.0.0.97:22-10.0.0.1:52894.service: Deactivated successfully. Aug 5 21:52:37.106279 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 21:52:37.107764 systemd-logind[1419]: Session 15 logged out. Waiting for processes to exit. Aug 5 21:52:37.117035 systemd[1]: Started sshd@15-10.0.0.97:22-10.0.0.1:52908.service - OpenSSH per-connection server daemon (10.0.0.1:52908). Aug 5 21:52:37.117839 systemd-logind[1419]: Removed session 15. Aug 5 21:52:37.157600 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 52908 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:37.158700 sshd[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:37.164712 systemd-logind[1419]: New session 16 of user core. Aug 5 21:52:37.173970 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 5 21:52:37.945123 sshd[4012]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:37.957616 systemd[1]: sshd@15-10.0.0.97:22-10.0.0.1:52908.service: Deactivated successfully. Aug 5 21:52:37.961376 systemd[1]: session-16.scope: Deactivated successfully. Aug 5 21:52:37.964311 systemd-logind[1419]: Session 16 logged out. Waiting for processes to exit. Aug 5 21:52:37.970938 systemd[1]: Started sshd@16-10.0.0.97:22-10.0.0.1:52912.service - OpenSSH per-connection server daemon (10.0.0.1:52912). Aug 5 21:52:37.973941 systemd-logind[1419]: Removed session 16. Aug 5 21:52:38.009756 sshd[4033]: Accepted publickey for core from 10.0.0.1 port 52912 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:38.011127 sshd[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:38.015277 systemd-logind[1419]: New session 17 of user core. Aug 5 21:52:38.028920 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 5 21:52:38.327808 sshd[4033]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:38.343227 systemd[1]: sshd@16-10.0.0.97:22-10.0.0.1:52912.service: Deactivated successfully. Aug 5 21:52:38.345130 systemd[1]: session-17.scope: Deactivated successfully. Aug 5 21:52:38.346537 systemd-logind[1419]: Session 17 logged out. Waiting for processes to exit. Aug 5 21:52:38.363399 systemd[1]: Started sshd@17-10.0.0.97:22-10.0.0.1:52924.service - OpenSSH per-connection server daemon (10.0.0.1:52924). Aug 5 21:52:38.364507 systemd-logind[1419]: Removed session 17. Aug 5 21:52:38.402440 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 52924 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:38.403859 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:38.408380 systemd-logind[1419]: New session 18 of user core. Aug 5 21:52:38.419963 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 5 21:52:38.537596 sshd[4045]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:38.541130 systemd[1]: sshd@17-10.0.0.97:22-10.0.0.1:52924.service: Deactivated successfully. Aug 5 21:52:38.543358 systemd[1]: session-18.scope: Deactivated successfully. Aug 5 21:52:38.544099 systemd-logind[1419]: Session 18 logged out. Waiting for processes to exit. Aug 5 21:52:38.545165 systemd-logind[1419]: Removed session 18. Aug 5 21:52:43.550198 systemd[1]: Started sshd@18-10.0.0.97:22-10.0.0.1:48826.service - OpenSSH per-connection server daemon (10.0.0.1:48826). 
Aug 5 21:52:43.595127 sshd[4065]: Accepted publickey for core from 10.0.0.1 port 48826 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:43.595532 sshd[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:43.602127 systemd-logind[1419]: New session 19 of user core. Aug 5 21:52:43.610932 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 5 21:52:43.740111 sshd[4065]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:43.743101 systemd[1]: sshd@18-10.0.0.97:22-10.0.0.1:48826.service: Deactivated successfully. Aug 5 21:52:43.745732 systemd[1]: session-19.scope: Deactivated successfully. Aug 5 21:52:43.748357 systemd-logind[1419]: Session 19 logged out. Waiting for processes to exit. Aug 5 21:52:43.749362 systemd-logind[1419]: Removed session 19. Aug 5 21:52:48.759980 systemd[1]: Started sshd@19-10.0.0.97:22-10.0.0.1:48840.service - OpenSSH per-connection server daemon (10.0.0.1:48840). Aug 5 21:52:48.797195 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 48840 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:48.798693 sshd[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:48.802637 systemd-logind[1419]: New session 20 of user core. Aug 5 21:52:48.807939 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 5 21:52:48.916470 sshd[4079]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:48.919137 systemd[1]: sshd@19-10.0.0.97:22-10.0.0.1:48840.service: Deactivated successfully. Aug 5 21:52:48.921944 systemd-logind[1419]: Session 20 logged out. Waiting for processes to exit. Aug 5 21:52:48.922084 systemd[1]: session-20.scope: Deactivated successfully. Aug 5 21:52:48.923351 systemd-logind[1419]: Removed session 20. Aug 5 21:52:51.029216 kubelet[2490]: E0805 21:52:51.029123 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:53.930302 systemd[1]: Started sshd@20-10.0.0.97:22-10.0.0.1:34490.service - OpenSSH per-connection server daemon (10.0.0.1:34490). Aug 5 21:52:53.967574 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 34490 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:53.968841 sshd[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:53.972343 systemd-logind[1419]: New session 21 of user core. Aug 5 21:52:53.986918 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 5 21:52:54.101168 sshd[4094]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:54.111955 systemd[1]: sshd@20-10.0.0.97:22-10.0.0.1:34490.service: Deactivated successfully. Aug 5 21:52:54.113891 systemd[1]: session-21.scope: Deactivated successfully. Aug 5 21:52:54.115522 systemd-logind[1419]: Session 21 logged out. Waiting for processes to exit. Aug 5 21:52:54.117309 systemd[1]: Started sshd@21-10.0.0.97:22-10.0.0.1:34498.service - OpenSSH per-connection server daemon (10.0.0.1:34498). Aug 5 21:52:54.118557 systemd-logind[1419]: Removed session 21. Aug 5 21:52:54.158345 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 34498 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:54.159998 sshd[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:54.164828 systemd-logind[1419]: New session 22 of user core. 
Aug 5 21:52:54.172978 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 5 21:52:56.256647 containerd[1437]: time="2024-08-05T21:52:56.256405564Z" level=info msg="StopContainer for \"cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5\" with timeout 30 (s)" Aug 5 21:52:56.260443 containerd[1437]: time="2024-08-05T21:52:56.259323280Z" level=info msg="Stop container \"cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5\" with signal terminated" Aug 5 21:52:56.272044 systemd[1]: cri-containerd-cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5.scope: Deactivated successfully. Aug 5 21:52:56.292257 containerd[1437]: time="2024-08-05T21:52:56.292206206Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 21:52:56.304625 containerd[1437]: time="2024-08-05T21:52:56.304580278Z" level=info msg="StopContainer for \"7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2\" with timeout 2 (s)" Aug 5 21:52:56.304995 containerd[1437]: time="2024-08-05T21:52:56.304959803Z" level=info msg="Stop container \"7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2\" with signal terminated" Aug 5 21:52:56.307737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5-rootfs.mount: Deactivated successfully. Aug 5 21:52:56.312068 systemd-networkd[1370]: lxc_health: Link DOWN Aug 5 21:52:56.312080 systemd-networkd[1370]: lxc_health: Lost carrier Aug 5 21:52:56.321649 containerd[1437]: time="2024-08-05T21:52:56.321415765Z" level=info msg="shim disconnected" id=cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5 namespace=k8s.io Aug 5 21:52:56.321649 containerd[1437]: time="2024-08-05T21:52:56.321476806Z" level=warning msg="cleaning up after shim disconnected" id=cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5 namespace=k8s.io Aug 5 21:52:56.321649 containerd[1437]: time="2024-08-05T21:52:56.321489206Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:52:56.337779 containerd[1437]: time="2024-08-05T21:52:56.336040906Z" level=info msg="StopContainer for \"cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5\" returns successfully" Aug 5 21:52:56.337779 containerd[1437]: time="2024-08-05T21:52:56.336675354Z" level=info msg="StopPodSandbox for \"f7be4973ec9cde822c32b7007c58c1b8d1f9087baa594a4ba61ddfb77780ddd1\"" Aug 5 21:52:56.337779 containerd[1437]: time="2024-08-05T21:52:56.336723154Z" level=info msg="Container to stop \"cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 21:52:56.338448 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f7be4973ec9cde822c32b7007c58c1b8d1f9087baa594a4ba61ddfb77780ddd1-shm.mount: Deactivated successfully. Aug 5 21:52:56.344255 systemd[1]: cri-containerd-7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2.scope: Deactivated successfully. Aug 5 21:52:56.344578 systemd[1]: cri-containerd-7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2.scope: Consumed 6.753s CPU time. Aug 5 21:52:56.347156 systemd[1]: cri-containerd-f7be4973ec9cde822c32b7007c58c1b8d1f9087baa594a4ba61ddfb77780ddd1.scope: Deactivated successfully. 
Aug 5 21:52:56.366007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2-rootfs.mount: Deactivated successfully. Aug 5 21:52:56.374377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7be4973ec9cde822c32b7007c58c1b8d1f9087baa594a4ba61ddfb77780ddd1-rootfs.mount: Deactivated successfully. Aug 5 21:52:56.377378 containerd[1437]: time="2024-08-05T21:52:56.377312814Z" level=info msg="shim disconnected" id=7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2 namespace=k8s.io Aug 5 21:52:56.377378 containerd[1437]: time="2024-08-05T21:52:56.377373575Z" level=warning msg="cleaning up after shim disconnected" id=7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2 namespace=k8s.io Aug 5 21:52:56.377378 containerd[1437]: time="2024-08-05T21:52:56.377383135Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:52:56.380871 containerd[1437]: time="2024-08-05T21:52:56.380814017Z" level=info msg="shim disconnected" id=f7be4973ec9cde822c32b7007c58c1b8d1f9087baa594a4ba61ddfb77780ddd1 namespace=k8s.io Aug 5 21:52:56.380871 containerd[1437]: time="2024-08-05T21:52:56.380890298Z" level=warning msg="cleaning up after shim disconnected" id=f7be4973ec9cde822c32b7007c58c1b8d1f9087baa594a4ba61ddfb77780ddd1 namespace=k8s.io Aug 5 21:52:56.381027 containerd[1437]: time="2024-08-05T21:52:56.380899698Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:52:56.393800 containerd[1437]: time="2024-08-05T21:52:56.393587575Z" level=info msg="StopContainer for \"7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2\" returns successfully" Aug 5 21:52:56.394223 containerd[1437]: time="2024-08-05T21:52:56.394126661Z" level=info msg="StopPodSandbox for \"d6110ab38853872af6719597062e13425beb92a612789f5fca3bc640f59f8c41\"" Aug 5 21:52:56.394223 containerd[1437]: time="2024-08-05T21:52:56.394168742Z" level=info msg="Container to stop \"b32144316fdee054de79e1801c84fbc14e06c2cc4d42dd43fb391f50904dabab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 21:52:56.394223 containerd[1437]: time="2024-08-05T21:52:56.394204982Z" level=info msg="Container to stop \"58ba6b4832f138788f25cee2d7f3c5a5c38f1d7e2664ab0fc682b0ee62275454\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 21:52:56.394223 containerd[1437]: time="2024-08-05T21:52:56.394216943Z" level=info msg="Container to stop \"a09647b16fc27201ac473a19e454888fe5d3d10bfdad0861e154be472714cb79\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 21:52:56.394223 containerd[1437]: time="2024-08-05T21:52:56.394226823Z" level=info msg="Container to stop \"7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 21:52:56.394416 containerd[1437]: time="2024-08-05T21:52:56.394237143Z" level=info msg="Container to stop \"62bf54a003a11417ed0a5238e67c49f179786cfecf26a7ab85b1f66f18378a1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 21:52:56.396314 containerd[1437]: time="2024-08-05T21:52:56.396168167Z" level=info msg="TearDown network for sandbox \"f7be4973ec9cde822c32b7007c58c1b8d1f9087baa594a4ba61ddfb77780ddd1\" successfully" Aug 5 21:52:56.396314 containerd[1437]: time="2024-08-05T21:52:56.396206327Z" level=info msg="StopPodSandbox for \"f7be4973ec9cde822c32b7007c58c1b8d1f9087baa594a4ba61ddfb77780ddd1\" returns successfully" Aug 
5 21:52:56.400022 systemd[1]: cri-containerd-d6110ab38853872af6719597062e13425beb92a612789f5fca3bc640f59f8c41.scope: Deactivated successfully. Aug 5 21:52:56.431885 containerd[1437]: time="2024-08-05T21:52:56.431813606Z" level=info msg="shim disconnected" id=d6110ab38853872af6719597062e13425beb92a612789f5fca3bc640f59f8c41 namespace=k8s.io Aug 5 21:52:56.432905 containerd[1437]: time="2024-08-05T21:52:56.432721097Z" level=warning msg="cleaning up after shim disconnected" id=d6110ab38853872af6719597062e13425beb92a612789f5fca3bc640f59f8c41 namespace=k8s.io Aug 5 21:52:56.432905 containerd[1437]: time="2024-08-05T21:52:56.432766698Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:52:56.443913 containerd[1437]: time="2024-08-05T21:52:56.443847354Z" level=info msg="TearDown network for sandbox \"d6110ab38853872af6719597062e13425beb92a612789f5fca3bc640f59f8c41\" successfully" Aug 5 21:52:56.444165 containerd[1437]: time="2024-08-05T21:52:56.444018916Z" level=info msg="StopPodSandbox for \"d6110ab38853872af6719597062e13425beb92a612789f5fca3bc640f59f8c41\" returns successfully" Aug 5 21:52:56.493193 kubelet[2490]: I0805 21:52:56.493147 2490 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-cilium-run\") pod \"de02f5f4-124f-4eb0-831e-2a80f52dd188\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " Aug 5 21:52:56.493193 kubelet[2490]: I0805 21:52:56.493197 2490 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de02f5f4-124f-4eb0-831e-2a80f52dd188-clustermesh-secrets\") pod \"de02f5f4-124f-4eb0-831e-2a80f52dd188\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " Aug 5 21:52:56.493626 kubelet[2490]: I0805 21:52:56.493221 2490 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-bpf-maps\") pod \"de02f5f4-124f-4eb0-831e-2a80f52dd188\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " Aug 5 21:52:56.493626 kubelet[2490]: I0805 21:52:56.493242 2490 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-host-proc-sys-net\") pod \"de02f5f4-124f-4eb0-831e-2a80f52dd188\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " Aug 5 21:52:56.493626 kubelet[2490]: I0805 21:52:56.493266 2490 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkn9n\" (UniqueName: \"kubernetes.io/projected/de02f5f4-124f-4eb0-831e-2a80f52dd188-kube-api-access-vkn9n\") pod \"de02f5f4-124f-4eb0-831e-2a80f52dd188\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " Aug 5 21:52:56.493626 kubelet[2490]: I0805 21:52:56.493311 2490 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de02f5f4-124f-4eb0-831e-2a80f52dd188-cilium-config-path\") pod \"de02f5f4-124f-4eb0-831e-2a80f52dd188\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " Aug 5 21:52:56.493626 kubelet[2490]: I0805 21:52:56.493330 2490 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-hostproc\") pod \"de02f5f4-124f-4eb0-831e-2a80f52dd188\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") 
" Aug 5 21:52:56.493626 kubelet[2490]: I0805 21:52:56.493349 2490 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-host-proc-sys-kernel\") pod \"de02f5f4-124f-4eb0-831e-2a80f52dd188\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " Aug 5 21:52:56.493809 kubelet[2490]: I0805 21:52:56.493369 2490 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq2lq\" (UniqueName: \"kubernetes.io/projected/4043cf91-2021-417a-9930-5945d81111e6-kube-api-access-dq2lq\") pod \"4043cf91-2021-417a-9930-5945d81111e6\" (UID: \"4043cf91-2021-417a-9930-5945d81111e6\") " Aug 5 21:52:56.493809 kubelet[2490]: I0805 21:52:56.493387 2490 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-etc-cni-netd\") pod \"de02f5f4-124f-4eb0-831e-2a80f52dd188\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " Aug 5 21:52:56.493809 kubelet[2490]: I0805 21:52:56.493405 2490 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-cni-path\") pod \"de02f5f4-124f-4eb0-831e-2a80f52dd188\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " Aug 5 21:52:56.493809 kubelet[2490]: I0805 21:52:56.493425 2490 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de02f5f4-124f-4eb0-831e-2a80f52dd188-hubble-tls\") pod \"de02f5f4-124f-4eb0-831e-2a80f52dd188\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " Aug 5 21:52:56.493809 kubelet[2490]: I0805 21:52:56.493442 2490 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-lib-modules\") pod \"de02f5f4-124f-4eb0-831e-2a80f52dd188\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " Aug 5 21:52:56.493809 kubelet[2490]: I0805 21:52:56.493463 2490 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4043cf91-2021-417a-9930-5945d81111e6-cilium-config-path\") pod \"4043cf91-2021-417a-9930-5945d81111e6\" (UID: \"4043cf91-2021-417a-9930-5945d81111e6\") " Aug 5 21:52:56.493960 kubelet[2490]: I0805 21:52:56.493481 2490 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-xtables-lock\") pod \"de02f5f4-124f-4eb0-831e-2a80f52dd188\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " Aug 5 21:52:56.493960 kubelet[2490]: I0805 21:52:56.493499 2490 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-cilium-cgroup\") pod \"de02f5f4-124f-4eb0-831e-2a80f52dd188\" (UID: \"de02f5f4-124f-4eb0-831e-2a80f52dd188\") " Aug 5 21:52:56.493960 kubelet[2490]: I0805 21:52:56.493570 2490 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "de02f5f4-124f-4eb0-831e-2a80f52dd188" (UID: "de02f5f4-124f-4eb0-831e-2a80f52dd188"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 21:52:56.493960 kubelet[2490]: I0805 21:52:56.493612 2490 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "de02f5f4-124f-4eb0-831e-2a80f52dd188" (UID: "de02f5f4-124f-4eb0-831e-2a80f52dd188"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 21:52:56.494065 kubelet[2490]: I0805 21:52:56.494013 2490 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "de02f5f4-124f-4eb0-831e-2a80f52dd188" (UID: "de02f5f4-124f-4eb0-831e-2a80f52dd188"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 21:52:56.494065 kubelet[2490]: I0805 21:52:56.494061 2490 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-hostproc" (OuterVolumeSpecName: "hostproc") pod "de02f5f4-124f-4eb0-831e-2a80f52dd188" (UID: "de02f5f4-124f-4eb0-831e-2a80f52dd188"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 21:52:56.495099 kubelet[2490]: I0805 21:52:56.495057 2490 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "de02f5f4-124f-4eb0-831e-2a80f52dd188" (UID: "de02f5f4-124f-4eb0-831e-2a80f52dd188"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 21:52:56.495099 kubelet[2490]: I0805 21:52:56.495075 2490 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "de02f5f4-124f-4eb0-831e-2a80f52dd188" (UID: "de02f5f4-124f-4eb0-831e-2a80f52dd188"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 21:52:56.495197 kubelet[2490]: I0805 21:52:56.495111 2490 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "de02f5f4-124f-4eb0-831e-2a80f52dd188" (UID: "de02f5f4-124f-4eb0-831e-2a80f52dd188"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 21:52:56.495197 kubelet[2490]: I0805 21:52:56.495129 2490 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-cni-path" (OuterVolumeSpecName: "cni-path") pod "de02f5f4-124f-4eb0-831e-2a80f52dd188" (UID: "de02f5f4-124f-4eb0-831e-2a80f52dd188"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 21:52:56.496625 kubelet[2490]: I0805 21:52:56.495267 2490 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "de02f5f4-124f-4eb0-831e-2a80f52dd188" (UID: "de02f5f4-124f-4eb0-831e-2a80f52dd188"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 21:52:56.496625 kubelet[2490]: I0805 21:52:56.495316 2490 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "de02f5f4-124f-4eb0-831e-2a80f52dd188" (UID: "de02f5f4-124f-4eb0-831e-2a80f52dd188"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 21:52:56.497171 kubelet[2490]: I0805 21:52:56.497041 2490 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4043cf91-2021-417a-9930-5945d81111e6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4043cf91-2021-417a-9930-5945d81111e6" (UID: "4043cf91-2021-417a-9930-5945d81111e6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 5 21:52:56.497640 kubelet[2490]: I0805 21:52:56.497421 2490 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4043cf91-2021-417a-9930-5945d81111e6-kube-api-access-dq2lq" (OuterVolumeSpecName: "kube-api-access-dq2lq") pod "4043cf91-2021-417a-9930-5945d81111e6" (UID: "4043cf91-2021-417a-9930-5945d81111e6"). InnerVolumeSpecName "kube-api-access-dq2lq". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 5 21:52:56.497856 kubelet[2490]: I0805 21:52:56.497751 2490 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de02f5f4-124f-4eb0-831e-2a80f52dd188-kube-api-access-vkn9n" (OuterVolumeSpecName: "kube-api-access-vkn9n") pod "de02f5f4-124f-4eb0-831e-2a80f52dd188" (UID: "de02f5f4-124f-4eb0-831e-2a80f52dd188"). InnerVolumeSpecName "kube-api-access-vkn9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 5 21:52:56.497856 kubelet[2490]: I0805 21:52:56.497813 2490 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de02f5f4-124f-4eb0-831e-2a80f52dd188-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "de02f5f4-124f-4eb0-831e-2a80f52dd188" (UID: "de02f5f4-124f-4eb0-831e-2a80f52dd188"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 5 21:52:56.498181 kubelet[2490]: I0805 21:52:56.498139 2490 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de02f5f4-124f-4eb0-831e-2a80f52dd188-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "de02f5f4-124f-4eb0-831e-2a80f52dd188" (UID: "de02f5f4-124f-4eb0-831e-2a80f52dd188"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 5 21:52:56.498756 kubelet[2490]: I0805 21:52:56.498714 2490 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de02f5f4-124f-4eb0-831e-2a80f52dd188-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "de02f5f4-124f-4eb0-831e-2a80f52dd188" (UID: "de02f5f4-124f-4eb0-831e-2a80f52dd188"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 5 21:52:56.593911 kubelet[2490]: I0805 21:52:56.593777 2490 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 5 21:52:56.593911 kubelet[2490]: I0805 21:52:56.593809 2490 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 5 21:52:56.593911 kubelet[2490]: I0805 21:52:56.593822 2490 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 5 21:52:56.593911 kubelet[2490]: I0805 21:52:56.593834 2490 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de02f5f4-124f-4eb0-831e-2a80f52dd188-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 5 21:52:56.593911 kubelet[2490]: I0805 21:52:56.593845 2490 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vkn9n\" (UniqueName: \"kubernetes.io/projected/de02f5f4-124f-4eb0-831e-2a80f52dd188-kube-api-access-vkn9n\") on node \"localhost\" DevicePath \"\"" Aug 5 21:52:56.593911 kubelet[2490]: I0805 21:52:56.593855 2490 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 5 21:52:56.593911 kubelet[2490]: I0805 21:52:56.593865 2490 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dq2lq\" (UniqueName: \"kubernetes.io/projected/4043cf91-2021-417a-9930-5945d81111e6-kube-api-access-dq2lq\") on node \"localhost\" DevicePath \"\"" Aug 5 21:52:56.593911 kubelet[2490]: I0805 21:52:56.593874 2490 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de02f5f4-124f-4eb0-831e-2a80f52dd188-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 5 21:52:56.594202 kubelet[2490]: I0805 21:52:56.593884 2490 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 5 21:52:56.594202 kubelet[2490]: I0805 21:52:56.593896 2490 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 5 21:52:56.594202 kubelet[2490]: I0805 21:52:56.593905 2490 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 5 21:52:56.594202 kubelet[2490]: I0805 21:52:56.593914 2490 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 5 21:52:56.594202 kubelet[2490]: I0805 21:52:56.593923 2490 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de02f5f4-124f-4eb0-831e-2a80f52dd188-hubble-tls\") on node \"localhost\" 
DevicePath \"\"" Aug 5 21:52:56.594202 kubelet[2490]: I0805 21:52:56.593932 2490 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4043cf91-2021-417a-9930-5945d81111e6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 5 21:52:56.594202 kubelet[2490]: I0805 21:52:56.593942 2490 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 5 21:52:56.594202 kubelet[2490]: I0805 21:52:56.593957 2490 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de02f5f4-124f-4eb0-831e-2a80f52dd188-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 5 21:52:57.029104 kubelet[2490]: E0805 21:52:57.029074 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:57.036458 systemd[1]: Removed slice kubepods-besteffort-pod4043cf91_2021_417a_9930_5945d81111e6.slice - libcontainer container kubepods-besteffort-pod4043cf91_2021_417a_9930_5945d81111e6.slice. Aug 5 21:52:57.037695 systemd[1]: Removed slice kubepods-burstable-podde02f5f4_124f_4eb0_831e_2a80f52dd188.slice - libcontainer container kubepods-burstable-podde02f5f4_124f_4eb0_831e_2a80f52dd188.slice. Aug 5 21:52:57.037798 systemd[1]: kubepods-burstable-podde02f5f4_124f_4eb0_831e_2a80f52dd188.slice: Consumed 6.896s CPU time. Aug 5 21:52:57.241276 kubelet[2490]: I0805 21:52:57.239967 2490 scope.go:117] "RemoveContainer" containerID="cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5" Aug 5 21:52:57.242446 containerd[1437]: time="2024-08-05T21:52:57.242404520Z" level=info msg="RemoveContainer for \"cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5\"" Aug 5 21:52:57.252288 containerd[1437]: time="2024-08-05T21:52:57.252237196Z" level=info msg="RemoveContainer for \"cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5\" returns successfully" Aug 5 21:52:57.252659 kubelet[2490]: I0805 21:52:57.252567 2490 scope.go:117] "RemoveContainer" containerID="cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5" Aug 5 21:52:57.255767 containerd[1437]: time="2024-08-05T21:52:57.252882564Z" level=error msg="ContainerStatus for \"cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5\": not found" Aug 5 21:52:57.264629 kubelet[2490]: E0805 21:52:57.264410 2490 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5\": not found" containerID="cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5" Aug 5 21:52:57.264629 kubelet[2490]: I0805 21:52:57.264518 2490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5"} err="failed to get container status \"cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"cef3ea98e3c906dd14c61e73a536b97ffb867f0fbdd3d8bf503cec624637bbf5\": not found" Aug 5 21:52:57.264629 kubelet[2490]: I0805 21:52:57.264532 2490 scope.go:117] "RemoveContainer" containerID="7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2" Aug 5 21:52:57.265810 containerd[1437]: time="2024-08-05T21:52:57.265721476Z" level=info msg="RemoveContainer for \"7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2\"" Aug 5 21:52:57.268598 containerd[1437]: time="2024-08-05T21:52:57.268556270Z" level=info msg="RemoveContainer for \"7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2\" returns successfully" Aug 5 21:52:57.268951 kubelet[2490]: I0805 21:52:57.268857 2490 scope.go:117] "RemoveContainer" containerID="a09647b16fc27201ac473a19e454888fe5d3d10bfdad0861e154be472714cb79" Aug 5 21:52:57.271233 containerd[1437]: time="2024-08-05T21:52:57.271184221Z" level=info msg="RemoveContainer for \"a09647b16fc27201ac473a19e454888fe5d3d10bfdad0861e154be472714cb79\"" Aug 5 21:52:57.275566 containerd[1437]: time="2024-08-05T21:52:57.275533632Z" level=info msg="RemoveContainer for \"a09647b16fc27201ac473a19e454888fe5d3d10bfdad0861e154be472714cb79\" returns successfully" Aug 5 21:52:57.275984 kubelet[2490]: I0805 21:52:57.275885 2490 scope.go:117] "RemoveContainer" containerID="58ba6b4832f138788f25cee2d7f3c5a5c38f1d7e2664ab0fc682b0ee62275454" Aug 5 21:52:57.277284 containerd[1437]: time="2024-08-05T21:52:57.277248933Z" level=info msg="RemoveContainer for \"58ba6b4832f138788f25cee2d7f3c5a5c38f1d7e2664ab0fc682b0ee62275454\"" Aug 5 21:52:57.280011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6110ab38853872af6719597062e13425beb92a612789f5fca3bc640f59f8c41-rootfs.mount: Deactivated successfully. Aug 5 21:52:57.280122 systemd[1]: var-lib-kubelet-pods-4043cf91\x2d2021\x2d417a\x2d9930\x2d5945d81111e6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddq2lq.mount: Deactivated successfully. Aug 5 21:52:57.280179 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d6110ab38853872af6719597062e13425beb92a612789f5fca3bc640f59f8c41-shm.mount: Deactivated successfully. Aug 5 21:52:57.280234 systemd[1]: var-lib-kubelet-pods-de02f5f4\x2d124f\x2d4eb0\x2d831e\x2d2a80f52dd188-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvkn9n.mount: Deactivated successfully. Aug 5 21:52:57.280287 systemd[1]: var-lib-kubelet-pods-de02f5f4\x2d124f\x2d4eb0\x2d831e\x2d2a80f52dd188-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 5 21:52:57.280336 systemd[1]: var-lib-kubelet-pods-de02f5f4\x2d124f\x2d4eb0\x2d831e\x2d2a80f52dd188-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Aug 5 21:52:57.281760 containerd[1437]: time="2024-08-05T21:52:57.281714346Z" level=info msg="RemoveContainer for \"58ba6b4832f138788f25cee2d7f3c5a5c38f1d7e2664ab0fc682b0ee62275454\" returns successfully" Aug 5 21:52:57.281914 kubelet[2490]: I0805 21:52:57.281894 2490 scope.go:117] "RemoveContainer" containerID="62bf54a003a11417ed0a5238e67c49f179786cfecf26a7ab85b1f66f18378a1f" Aug 5 21:52:57.283035 containerd[1437]: time="2024-08-05T21:52:57.282947080Z" level=info msg="RemoveContainer for \"62bf54a003a11417ed0a5238e67c49f179786cfecf26a7ab85b1f66f18378a1f\"" Aug 5 21:52:57.285288 containerd[1437]: time="2024-08-05T21:52:57.285243548Z" level=info msg="RemoveContainer for \"62bf54a003a11417ed0a5238e67c49f179786cfecf26a7ab85b1f66f18378a1f\" returns successfully" Aug 5 21:52:57.285517 kubelet[2490]: I0805 21:52:57.285391 2490 scope.go:117] "RemoveContainer" containerID="b32144316fdee054de79e1801c84fbc14e06c2cc4d42dd43fb391f50904dabab" Aug 5 21:52:57.286474 containerd[1437]: time="2024-08-05T21:52:57.286414441Z" level=info msg="RemoveContainer for \"b32144316fdee054de79e1801c84fbc14e06c2cc4d42dd43fb391f50904dabab\"" Aug 5 21:52:57.288863 containerd[1437]: time="2024-08-05T21:52:57.288812550Z" level=info msg="RemoveContainer for \"b32144316fdee054de79e1801c84fbc14e06c2cc4d42dd43fb391f50904dabab\" returns successfully" Aug 5 21:52:57.289232 kubelet[2490]: I0805 21:52:57.289137 2490 scope.go:117] "RemoveContainer" containerID="7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2" Aug 5 21:52:57.289507 containerd[1437]: time="2024-08-05T21:52:57.289291716Z" level=error msg="ContainerStatus for \"7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2\": not found" Aug 5 21:52:57.289556 kubelet[2490]: E0805 21:52:57.289402 2490 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2\": not found" containerID="7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2" Aug 5 21:52:57.289556 kubelet[2490]: I0805 21:52:57.289432 2490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2"} err="failed to get container status \"7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2\": rpc error: code = NotFound desc = an error occurred when try to find container \"7ae17f0610ecbb67d4ea191072dbd9a2a78b7efb022dfe36e04a35dfd7c71fd2\": not found" Aug 5 21:52:57.289556 kubelet[2490]: I0805 21:52:57.289444 2490 scope.go:117] "RemoveContainer" containerID="a09647b16fc27201ac473a19e454888fe5d3d10bfdad0861e154be472714cb79" Aug 5 21:52:57.289651 containerd[1437]: time="2024-08-05T21:52:57.289577279Z" level=error msg="ContainerStatus for \"a09647b16fc27201ac473a19e454888fe5d3d10bfdad0861e154be472714cb79\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a09647b16fc27201ac473a19e454888fe5d3d10bfdad0861e154be472714cb79\": not found" Aug 5 21:52:57.289934 kubelet[2490]: E0805 21:52:57.289815 2490 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"a09647b16fc27201ac473a19e454888fe5d3d10bfdad0861e154be472714cb79\": not found" containerID="a09647b16fc27201ac473a19e454888fe5d3d10bfdad0861e154be472714cb79" Aug 5 21:52:57.289934 kubelet[2490]: I0805 21:52:57.289853 2490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a09647b16fc27201ac473a19e454888fe5d3d10bfdad0861e154be472714cb79"} err="failed to get container status \"a09647b16fc27201ac473a19e454888fe5d3d10bfdad0861e154be472714cb79\": rpc error: code = NotFound desc = an error occurred when try to find container \"a09647b16fc27201ac473a19e454888fe5d3d10bfdad0861e154be472714cb79\": not found" Aug 5 21:52:57.289934 kubelet[2490]: I0805 21:52:57.289864 2490 scope.go:117] "RemoveContainer" containerID="58ba6b4832f138788f25cee2d7f3c5a5c38f1d7e2664ab0fc682b0ee62275454" Aug 5 21:52:57.290162 containerd[1437]: time="2024-08-05T21:52:57.289993924Z" level=error msg="ContainerStatus for \"58ba6b4832f138788f25cee2d7f3c5a5c38f1d7e2664ab0fc682b0ee62275454\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"58ba6b4832f138788f25cee2d7f3c5a5c38f1d7e2664ab0fc682b0ee62275454\": not found" Aug 5 21:52:57.290447 kubelet[2490]: E0805 21:52:57.290263 2490 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"58ba6b4832f138788f25cee2d7f3c5a5c38f1d7e2664ab0fc682b0ee62275454\": not found" containerID="58ba6b4832f138788f25cee2d7f3c5a5c38f1d7e2664ab0fc682b0ee62275454" Aug 5 21:52:57.290447 kubelet[2490]: I0805 21:52:57.290291 2490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"58ba6b4832f138788f25cee2d7f3c5a5c38f1d7e2664ab0fc682b0ee62275454"} err="failed to get container status \"58ba6b4832f138788f25cee2d7f3c5a5c38f1d7e2664ab0fc682b0ee62275454\": rpc error: code = NotFound desc = an error occurred when try to find container \"58ba6b4832f138788f25cee2d7f3c5a5c38f1d7e2664ab0fc682b0ee62275454\": not found" Aug 5 21:52:57.290447 kubelet[2490]: I0805 21:52:57.290302 2490 scope.go:117] "RemoveContainer" containerID="62bf54a003a11417ed0a5238e67c49f179786cfecf26a7ab85b1f66f18378a1f" Aug 5 21:52:57.290684 containerd[1437]: time="2024-08-05T21:52:57.290406409Z" level=error msg="ContainerStatus for \"62bf54a003a11417ed0a5238e67c49f179786cfecf26a7ab85b1f66f18378a1f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"62bf54a003a11417ed0a5238e67c49f179786cfecf26a7ab85b1f66f18378a1f\": not found" Aug 5 21:52:57.290715 kubelet[2490]: E0805 21:52:57.290536 2490 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"62bf54a003a11417ed0a5238e67c49f179786cfecf26a7ab85b1f66f18378a1f\": not found" containerID="62bf54a003a11417ed0a5238e67c49f179786cfecf26a7ab85b1f66f18378a1f" Aug 5 21:52:57.290715 kubelet[2490]: I0805 21:52:57.290558 2490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"62bf54a003a11417ed0a5238e67c49f179786cfecf26a7ab85b1f66f18378a1f"} err="failed to get container status \"62bf54a003a11417ed0a5238e67c49f179786cfecf26a7ab85b1f66f18378a1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"62bf54a003a11417ed0a5238e67c49f179786cfecf26a7ab85b1f66f18378a1f\": not found" Aug 5 21:52:57.290715 kubelet[2490]: I0805 21:52:57.290566 2490 scope.go:117] 
"RemoveContainer" containerID="b32144316fdee054de79e1801c84fbc14e06c2cc4d42dd43fb391f50904dabab" Aug 5 21:52:57.291183 containerd[1437]: time="2024-08-05T21:52:57.290940535Z" level=error msg="ContainerStatus for \"b32144316fdee054de79e1801c84fbc14e06c2cc4d42dd43fb391f50904dabab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b32144316fdee054de79e1801c84fbc14e06c2cc4d42dd43fb391f50904dabab\": not found" Aug 5 21:52:57.291247 kubelet[2490]: E0805 21:52:57.291085 2490 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b32144316fdee054de79e1801c84fbc14e06c2cc4d42dd43fb391f50904dabab\": not found" containerID="b32144316fdee054de79e1801c84fbc14e06c2cc4d42dd43fb391f50904dabab" Aug 5 21:52:57.291247 kubelet[2490]: I0805 21:52:57.291171 2490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b32144316fdee054de79e1801c84fbc14e06c2cc4d42dd43fb391f50904dabab"} err="failed to get container status \"b32144316fdee054de79e1801c84fbc14e06c2cc4d42dd43fb391f50904dabab\": rpc error: code = NotFound desc = an error occurred when try to find container \"b32144316fdee054de79e1801c84fbc14e06c2cc4d42dd43fb391f50904dabab\": not found" Aug 5 21:52:58.097816 kubelet[2490]: E0805 21:52:58.097776 2490 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 5 21:52:58.215235 sshd[4108]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:58.228477 systemd[1]: sshd@21-10.0.0.97:22-10.0.0.1:34498.service: Deactivated successfully. Aug 5 21:52:58.230144 systemd[1]: session-22.scope: Deactivated successfully. Aug 5 21:52:58.230350 systemd[1]: session-22.scope: Consumed 1.405s CPU time. Aug 5 21:52:58.231477 systemd-logind[1419]: Session 22 logged out. Waiting for processes to exit. Aug 5 21:52:58.232827 systemd[1]: Started sshd@22-10.0.0.97:22-10.0.0.1:34514.service - OpenSSH per-connection server daemon (10.0.0.1:34514). Aug 5 21:52:58.233960 systemd-logind[1419]: Removed session 22. Aug 5 21:52:58.277528 sshd[4270]: Accepted publickey for core from 10.0.0.1 port 34514 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:58.278939 sshd[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:58.283559 systemd-logind[1419]: New session 23 of user core. Aug 5 21:52:58.292954 systemd[1]: Started session-23.scope - Session 23 of User core. 
Aug 5 21:52:59.031848 kubelet[2490]: I0805 21:52:59.031030 2490 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4043cf91-2021-417a-9930-5945d81111e6" path="/var/lib/kubelet/pods/4043cf91-2021-417a-9930-5945d81111e6/volumes" Aug 5 21:52:59.031848 kubelet[2490]: I0805 21:52:59.031414 2490 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="de02f5f4-124f-4eb0-831e-2a80f52dd188" path="/var/lib/kubelet/pods/de02f5f4-124f-4eb0-831e-2a80f52dd188/volumes" Aug 5 21:52:59.557545 sshd[4270]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:59.566713 kubelet[2490]: I0805 21:52:59.566675 2490 topology_manager.go:215] "Topology Admit Handler" podUID="48881033-2e65-405f-ac69-6b35bf6f984e" podNamespace="kube-system" podName="cilium-4mmkd" Aug 5 21:52:59.567176 kubelet[2490]: E0805 21:52:59.566736 2490 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de02f5f4-124f-4eb0-831e-2a80f52dd188" containerName="mount-cgroup" Aug 5 21:52:59.567176 kubelet[2490]: E0805 21:52:59.566762 2490 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de02f5f4-124f-4eb0-831e-2a80f52dd188" containerName="apply-sysctl-overwrites" Aug 5 21:52:59.567176 kubelet[2490]: E0805 21:52:59.566772 2490 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4043cf91-2021-417a-9930-5945d81111e6" containerName="cilium-operator" Aug 5 21:52:59.567176 kubelet[2490]: E0805 21:52:59.566781 2490 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de02f5f4-124f-4eb0-831e-2a80f52dd188" containerName="mount-bpf-fs" Aug 5 21:52:59.567176 kubelet[2490]: E0805 21:52:59.566787 2490 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de02f5f4-124f-4eb0-831e-2a80f52dd188" containerName="clean-cilium-state" Aug 5 21:52:59.567176 kubelet[2490]: E0805 21:52:59.566794 2490 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de02f5f4-124f-4eb0-831e-2a80f52dd188" containerName="cilium-agent" Aug 5 21:52:59.567176 kubelet[2490]: I0805 21:52:59.566815 2490 memory_manager.go:346] "RemoveStaleState removing state" podUID="4043cf91-2021-417a-9930-5945d81111e6" containerName="cilium-operator" Aug 5 21:52:59.567176 kubelet[2490]: I0805 21:52:59.566822 2490 memory_manager.go:346] "RemoveStaleState removing state" podUID="de02f5f4-124f-4eb0-831e-2a80f52dd188" containerName="cilium-agent" Aug 5 21:52:59.569053 systemd[1]: sshd@22-10.0.0.97:22-10.0.0.1:34514.service: Deactivated successfully. Aug 5 21:52:59.573904 systemd[1]: session-23.scope: Deactivated successfully. Aug 5 21:52:59.574107 systemd[1]: session-23.scope: Consumed 1.181s CPU time. Aug 5 21:52:59.576449 systemd-logind[1419]: Session 23 logged out. Waiting for processes to exit. Aug 5 21:52:59.588104 systemd[1]: Started sshd@23-10.0.0.97:22-10.0.0.1:34526.service - OpenSSH per-connection server daemon (10.0.0.1:34526). Aug 5 21:52:59.591791 systemd-logind[1419]: Removed session 23. Aug 5 21:52:59.598930 systemd[1]: Created slice kubepods-burstable-pod48881033_2e65_405f_ac69_6b35bf6f984e.slice - libcontainer container kubepods-burstable-pod48881033_2e65_405f_ac69_6b35bf6f984e.slice. 
Aug 5 21:52:59.613355 kubelet[2490]: I0805 21:52:59.613326 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/48881033-2e65-405f-ac69-6b35bf6f984e-cilium-config-path\") pod \"cilium-4mmkd\" (UID: \"48881033-2e65-405f-ac69-6b35bf6f984e\") " pod="kube-system/cilium-4mmkd" Aug 5 21:52:59.613916 kubelet[2490]: I0805 21:52:59.613861 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/48881033-2e65-405f-ac69-6b35bf6f984e-cni-path\") pod \"cilium-4mmkd\" (UID: \"48881033-2e65-405f-ac69-6b35bf6f984e\") " pod="kube-system/cilium-4mmkd" Aug 5 21:52:59.613916 kubelet[2490]: I0805 21:52:59.613897 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/48881033-2e65-405f-ac69-6b35bf6f984e-etc-cni-netd\") pod \"cilium-4mmkd\" (UID: \"48881033-2e65-405f-ac69-6b35bf6f984e\") " pod="kube-system/cilium-4mmkd" Aug 5 21:52:59.614130 kubelet[2490]: I0805 21:52:59.614049 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48881033-2e65-405f-ac69-6b35bf6f984e-xtables-lock\") pod \"cilium-4mmkd\" (UID: \"48881033-2e65-405f-ac69-6b35bf6f984e\") " pod="kube-system/cilium-4mmkd" Aug 5 21:52:59.614337 kubelet[2490]: I0805 21:52:59.614213 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/48881033-2e65-405f-ac69-6b35bf6f984e-clustermesh-secrets\") pod \"cilium-4mmkd\" (UID: \"48881033-2e65-405f-ac69-6b35bf6f984e\") " pod="kube-system/cilium-4mmkd" Aug 5 21:52:59.614337 kubelet[2490]: I0805 21:52:59.614293 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/48881033-2e65-405f-ac69-6b35bf6f984e-bpf-maps\") pod \"cilium-4mmkd\" (UID: \"48881033-2e65-405f-ac69-6b35bf6f984e\") " pod="kube-system/cilium-4mmkd" Aug 5 21:52:59.614337 kubelet[2490]: I0805 21:52:59.614317 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/48881033-2e65-405f-ac69-6b35bf6f984e-host-proc-sys-kernel\") pod \"cilium-4mmkd\" (UID: \"48881033-2e65-405f-ac69-6b35bf6f984e\") " pod="kube-system/cilium-4mmkd" Aug 5 21:52:59.614590 kubelet[2490]: I0805 21:52:59.614468 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48881033-2e65-405f-ac69-6b35bf6f984e-lib-modules\") pod \"cilium-4mmkd\" (UID: \"48881033-2e65-405f-ac69-6b35bf6f984e\") " pod="kube-system/cilium-4mmkd" Aug 5 21:52:59.614590 kubelet[2490]: I0805 21:52:59.614496 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/48881033-2e65-405f-ac69-6b35bf6f984e-hostproc\") pod \"cilium-4mmkd\" (UID: \"48881033-2e65-405f-ac69-6b35bf6f984e\") " pod="kube-system/cilium-4mmkd" Aug 5 21:52:59.614590 kubelet[2490]: I0805 21:52:59.614539 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/48881033-2e65-405f-ac69-6b35bf6f984e-cilium-cgroup\") pod \"cilium-4mmkd\" (UID: \"48881033-2e65-405f-ac69-6b35bf6f984e\") " pod="kube-system/cilium-4mmkd" Aug 5 21:52:59.614590 kubelet[2490]: I0805 21:52:59.614559 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/48881033-2e65-405f-ac69-6b35bf6f984e-cilium-ipsec-secrets\") pod \"cilium-4mmkd\" (UID: \"48881033-2e65-405f-ac69-6b35bf6f984e\") " pod="kube-system/cilium-4mmkd" Aug 5 21:52:59.614912 kubelet[2490]: I0805 21:52:59.614704 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh455\" (UniqueName: \"kubernetes.io/projected/48881033-2e65-405f-ac69-6b35bf6f984e-kube-api-access-dh455\") pod \"cilium-4mmkd\" (UID: \"48881033-2e65-405f-ac69-6b35bf6f984e\") " pod="kube-system/cilium-4mmkd" Aug 5 21:52:59.614912 kubelet[2490]: I0805 21:52:59.614779 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/48881033-2e65-405f-ac69-6b35bf6f984e-host-proc-sys-net\") pod \"cilium-4mmkd\" (UID: \"48881033-2e65-405f-ac69-6b35bf6f984e\") " pod="kube-system/cilium-4mmkd" Aug 5 21:52:59.614912 kubelet[2490]: I0805 21:52:59.614811 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/48881033-2e65-405f-ac69-6b35bf6f984e-hubble-tls\") pod \"cilium-4mmkd\" (UID: \"48881033-2e65-405f-ac69-6b35bf6f984e\") " pod="kube-system/cilium-4mmkd" Aug 5 21:52:59.614912 kubelet[2490]: I0805 21:52:59.614859 2490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/48881033-2e65-405f-ac69-6b35bf6f984e-cilium-run\") pod \"cilium-4mmkd\" (UID: \"48881033-2e65-405f-ac69-6b35bf6f984e\") " pod="kube-system/cilium-4mmkd" Aug 5 21:52:59.631831 sshd[4283]: Accepted publickey for core from 10.0.0.1 port 34526 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:59.633173 sshd[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:59.636861 systemd-logind[1419]: New session 24 of user core. Aug 5 21:52:59.651932 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 5 21:52:59.701760 sshd[4283]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:59.711807 systemd[1]: sshd@23-10.0.0.97:22-10.0.0.1:34526.service: Deactivated successfully. Aug 5 21:52:59.713656 systemd[1]: session-24.scope: Deactivated successfully. Aug 5 21:52:59.716455 systemd-logind[1419]: Session 24 logged out. Waiting for processes to exit. Aug 5 21:52:59.724190 systemd[1]: Started sshd@24-10.0.0.97:22-10.0.0.1:34538.service - OpenSSH per-connection server daemon (10.0.0.1:34538). Aug 5 21:52:59.735211 systemd-logind[1419]: Removed session 24. Aug 5 21:52:59.757241 sshd[4293]: Accepted publickey for core from 10.0.0.1 port 34538 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:59.758521 sshd[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:59.765759 systemd-logind[1419]: New session 25 of user core. Aug 5 21:52:59.773014 systemd[1]: Started session-25.scope - Session 25 of User core. 
Aug 5 21:52:59.904502 kubelet[2490]: E0805 21:52:59.904452 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:59.905130 containerd[1437]: time="2024-08-05T21:52:59.905002601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4mmkd,Uid:48881033-2e65-405f-ac69-6b35bf6f984e,Namespace:kube-system,Attempt:0,}" Aug 5 21:52:59.924519 containerd[1437]: time="2024-08-05T21:52:59.923783807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:52:59.924519 containerd[1437]: time="2024-08-05T21:52:59.924334933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:52:59.924519 containerd[1437]: time="2024-08-05T21:52:59.924351933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:52:59.924519 containerd[1437]: time="2024-08-05T21:52:59.924361493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:52:59.943919 systemd[1]: Started cri-containerd-d089566cbd388476b99f58a129ad3fab8f5ab8a87032ea071cbf8a975e2c5a95.scope - libcontainer container d089566cbd388476b99f58a129ad3fab8f5ab8a87032ea071cbf8a975e2c5a95. Aug 5 21:52:59.963536 containerd[1437]: time="2024-08-05T21:52:59.963227719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4mmkd,Uid:48881033-2e65-405f-ac69-6b35bf6f984e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d089566cbd388476b99f58a129ad3fab8f5ab8a87032ea071cbf8a975e2c5a95\"" Aug 5 21:52:59.963870 kubelet[2490]: E0805 21:52:59.963843 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:59.966256 containerd[1437]: time="2024-08-05T21:52:59.966134390Z" level=info msg="CreateContainer within sandbox \"d089566cbd388476b99f58a129ad3fab8f5ab8a87032ea071cbf8a975e2c5a95\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 5 21:52:59.977755 containerd[1437]: time="2024-08-05T21:52:59.977708237Z" level=info msg="CreateContainer within sandbox \"d089566cbd388476b99f58a129ad3fab8f5ab8a87032ea071cbf8a975e2c5a95\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"72f6be4c475dbd0ed441e692681f7f409c8e276ebb8e70b651e56010677d41f2\"" Aug 5 21:52:59.979243 containerd[1437]: time="2024-08-05T21:52:59.978824529Z" level=info msg="StartContainer for \"72f6be4c475dbd0ed441e692681f7f409c8e276ebb8e70b651e56010677d41f2\"" Aug 5 21:53:00.005963 systemd[1]: Started cri-containerd-72f6be4c475dbd0ed441e692681f7f409c8e276ebb8e70b651e56010677d41f2.scope - libcontainer container 72f6be4c475dbd0ed441e692681f7f409c8e276ebb8e70b651e56010677d41f2. 
Aug 5 21:53:00.027645 containerd[1437]: time="2024-08-05T21:53:00.027537091Z" level=info msg="StartContainer for \"72f6be4c475dbd0ed441e692681f7f409c8e276ebb8e70b651e56010677d41f2\" returns successfully" Aug 5 21:53:00.029117 kubelet[2490]: E0805 21:53:00.029091 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:53:00.038588 systemd[1]: cri-containerd-72f6be4c475dbd0ed441e692681f7f409c8e276ebb8e70b651e56010677d41f2.scope: Deactivated successfully. Aug 5 21:53:00.081849 containerd[1437]: time="2024-08-05T21:53:00.081790142Z" level=info msg="shim disconnected" id=72f6be4c475dbd0ed441e692681f7f409c8e276ebb8e70b651e56010677d41f2 namespace=k8s.io Aug 5 21:53:00.082332 containerd[1437]: time="2024-08-05T21:53:00.082163626Z" level=warning msg="cleaning up after shim disconnected" id=72f6be4c475dbd0ed441e692681f7f409c8e276ebb8e70b651e56010677d41f2 namespace=k8s.io Aug 5 21:53:00.082332 containerd[1437]: time="2024-08-05T21:53:00.082206586Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:53:00.255512 kubelet[2490]: E0805 21:53:00.254921 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:53:00.257749 containerd[1437]: time="2024-08-05T21:53:00.257678793Z" level=info msg="CreateContainer within sandbox \"d089566cbd388476b99f58a129ad3fab8f5ab8a87032ea071cbf8a975e2c5a95\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 5 21:53:00.272786 containerd[1437]: time="2024-08-05T21:53:00.272674870Z" level=info msg="CreateContainer within sandbox \"d089566cbd388476b99f58a129ad3fab8f5ab8a87032ea071cbf8a975e2c5a95\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7a57754290ff6f9a9f37311221910689fc5dfc2c0d99599c262a185cb1b8ce6b\"" Aug 5 21:53:00.273362 containerd[1437]: time="2024-08-05T21:53:00.273327637Z" level=info msg="StartContainer for \"7a57754290ff6f9a9f37311221910689fc5dfc2c0d99599c262a185cb1b8ce6b\"" Aug 5 21:53:00.295914 systemd[1]: Started cri-containerd-7a57754290ff6f9a9f37311221910689fc5dfc2c0d99599c262a185cb1b8ce6b.scope - libcontainer container 7a57754290ff6f9a9f37311221910689fc5dfc2c0d99599c262a185cb1b8ce6b. Aug 5 21:53:00.322730 containerd[1437]: time="2024-08-05T21:53:00.322649876Z" level=info msg="StartContainer for \"7a57754290ff6f9a9f37311221910689fc5dfc2c0d99599c262a185cb1b8ce6b\" returns successfully" Aug 5 21:53:00.327191 systemd[1]: cri-containerd-7a57754290ff6f9a9f37311221910689fc5dfc2c0d99599c262a185cb1b8ce6b.scope: Deactivated successfully. 
Aug 5 21:53:00.350009 containerd[1437]: time="2024-08-05T21:53:00.349807202Z" level=info msg="shim disconnected" id=7a57754290ff6f9a9f37311221910689fc5dfc2c0d99599c262a185cb1b8ce6b namespace=k8s.io Aug 5 21:53:00.350009 containerd[1437]: time="2024-08-05T21:53:00.349862042Z" level=warning msg="cleaning up after shim disconnected" id=7a57754290ff6f9a9f37311221910689fc5dfc2c0d99599c262a185cb1b8ce6b namespace=k8s.io Aug 5 21:53:00.350009 containerd[1437]: time="2024-08-05T21:53:00.349870403Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:53:01.259676 kubelet[2490]: E0805 21:53:01.259625 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:53:01.263625 containerd[1437]: time="2024-08-05T21:53:01.263568666Z" level=info msg="CreateContainer within sandbox \"d089566cbd388476b99f58a129ad3fab8f5ab8a87032ea071cbf8a975e2c5a95\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 5 21:53:01.286759 containerd[1437]: time="2024-08-05T21:53:01.286631979Z" level=info msg="CreateContainer within sandbox \"d089566cbd388476b99f58a129ad3fab8f5ab8a87032ea071cbf8a975e2c5a95\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f22e88c6a3115b361ded8439e875674504524d58a9a4340bc101b79400a831be\"" Aug 5 21:53:01.289385 containerd[1437]: time="2024-08-05T21:53:01.289351527Z" level=info msg="StartContainer for \"f22e88c6a3115b361ded8439e875674504524d58a9a4340bc101b79400a831be\"" Aug 5 21:53:01.335066 systemd[1]: Started cri-containerd-f22e88c6a3115b361ded8439e875674504524d58a9a4340bc101b79400a831be.scope - libcontainer container f22e88c6a3115b361ded8439e875674504524d58a9a4340bc101b79400a831be. Aug 5 21:53:01.359065 containerd[1437]: time="2024-08-05T21:53:01.359013311Z" level=info msg="StartContainer for \"f22e88c6a3115b361ded8439e875674504524d58a9a4340bc101b79400a831be\" returns successfully" Aug 5 21:53:01.361270 systemd[1]: cri-containerd-f22e88c6a3115b361ded8439e875674504524d58a9a4340bc101b79400a831be.scope: Deactivated successfully. Aug 5 21:53:01.387630 containerd[1437]: time="2024-08-05T21:53:01.387575479Z" level=info msg="shim disconnected" id=f22e88c6a3115b361ded8439e875674504524d58a9a4340bc101b79400a831be namespace=k8s.io Aug 5 21:53:01.387630 containerd[1437]: time="2024-08-05T21:53:01.387623400Z" level=warning msg="cleaning up after shim disconnected" id=f22e88c6a3115b361ded8439e875674504524d58a9a4340bc101b79400a831be namespace=k8s.io Aug 5 21:53:01.387630 containerd[1437]: time="2024-08-05T21:53:01.387631880Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:53:01.726089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f22e88c6a3115b361ded8439e875674504524d58a9a4340bc101b79400a831be-rootfs.mount: Deactivated successfully. 
Aug 5 21:53:02.261854 kubelet[2490]: E0805 21:53:02.261819 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:53:02.263916 containerd[1437]: time="2024-08-05T21:53:02.263870588Z" level=info msg="CreateContainer within sandbox \"d089566cbd388476b99f58a129ad3fab8f5ab8a87032ea071cbf8a975e2c5a95\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 5 21:53:02.278035 containerd[1437]: time="2024-08-05T21:53:02.277978325Z" level=info msg="CreateContainer within sandbox \"d089566cbd388476b99f58a129ad3fab8f5ab8a87032ea071cbf8a975e2c5a95\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bbbeeeeddf53e84b45b73c821466f2e13f11ec23f95791f9f4d3953f49730361\"" Aug 5 21:53:02.280947 containerd[1437]: time="2024-08-05T21:53:02.280903873Z" level=info msg="StartContainer for \"bbbeeeeddf53e84b45b73c821466f2e13f11ec23f95791f9f4d3953f49730361\"" Aug 5 21:53:02.306910 systemd[1]: Started cri-containerd-bbbeeeeddf53e84b45b73c821466f2e13f11ec23f95791f9f4d3953f49730361.scope - libcontainer container bbbeeeeddf53e84b45b73c821466f2e13f11ec23f95791f9f4d3953f49730361. Aug 5 21:53:02.325887 systemd[1]: cri-containerd-bbbeeeeddf53e84b45b73c821466f2e13f11ec23f95791f9f4d3953f49730361.scope: Deactivated successfully. Aug 5 21:53:02.329788 containerd[1437]: time="2024-08-05T21:53:02.329489105Z" level=info msg="StartContainer for \"bbbeeeeddf53e84b45b73c821466f2e13f11ec23f95791f9f4d3953f49730361\" returns successfully" Aug 5 21:53:02.346988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbbeeeeddf53e84b45b73c821466f2e13f11ec23f95791f9f4d3953f49730361-rootfs.mount: Deactivated successfully. 
Aug 5 21:53:02.357751 containerd[1437]: time="2024-08-05T21:53:02.357665978Z" level=info msg="shim disconnected" id=bbbeeeeddf53e84b45b73c821466f2e13f11ec23f95791f9f4d3953f49730361 namespace=k8s.io Aug 5 21:53:02.357751 containerd[1437]: time="2024-08-05T21:53:02.357730899Z" level=warning msg="cleaning up after shim disconnected" id=bbbeeeeddf53e84b45b73c821466f2e13f11ec23f95791f9f4d3953f49730361 namespace=k8s.io Aug 5 21:53:02.357751 containerd[1437]: time="2024-08-05T21:53:02.357749899Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:53:03.099255 kubelet[2490]: E0805 21:53:03.099218 2490 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 5 21:53:03.265620 kubelet[2490]: E0805 21:53:03.265584 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:53:03.270639 containerd[1437]: time="2024-08-05T21:53:03.270546690Z" level=info msg="CreateContainer within sandbox \"d089566cbd388476b99f58a129ad3fab8f5ab8a87032ea071cbf8a975e2c5a95\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 5 21:53:03.292658 containerd[1437]: time="2024-08-05T21:53:03.292351293Z" level=info msg="CreateContainer within sandbox \"d089566cbd388476b99f58a129ad3fab8f5ab8a87032ea071cbf8a975e2c5a95\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b9e3c331f60fba09f3eff460604eaa730bfcb02044caa78651fc1c3dca70294f\"" Aug 5 21:53:03.293064 containerd[1437]: time="2024-08-05T21:53:03.293038899Z" level=info msg="StartContainer for \"b9e3c331f60fba09f3eff460604eaa730bfcb02044caa78651fc1c3dca70294f\"" Aug 5 21:53:03.317934 systemd[1]: Started cri-containerd-b9e3c331f60fba09f3eff460604eaa730bfcb02044caa78651fc1c3dca70294f.scope - libcontainer container b9e3c331f60fba09f3eff460604eaa730bfcb02044caa78651fc1c3dca70294f. 
Aug 5 21:53:03.357509 containerd[1437]: time="2024-08-05T21:53:03.357399659Z" level=info msg="StartContainer for \"b9e3c331f60fba09f3eff460604eaa730bfcb02044caa78651fc1c3dca70294f\" returns successfully" Aug 5 21:53:03.637762 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Aug 5 21:53:04.270686 kubelet[2490]: E0805 21:53:04.270656 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:53:04.286466 kubelet[2490]: I0805 21:53:04.286197 2490 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-4mmkd" podStartSLOduration=5.286159039 podCreationTimestamp="2024-08-05 21:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:53:04.284139341 +0000 UTC m=+81.362061316" watchObservedRunningTime="2024-08-05 21:53:04.286159039 +0000 UTC m=+81.364080974" Aug 5 21:53:04.603774 kubelet[2490]: I0805 21:53:04.603643 2490 setters.go:552] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-08-05T21:53:04Z","lastTransitionTime":"2024-08-05T21:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 5 21:53:05.908188 kubelet[2490]: E0805 21:53:05.908147 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:53:06.423103 systemd-networkd[1370]: lxc_health: Link UP Aug 5 21:53:06.431149 systemd-networkd[1370]: lxc_health: Gained carrier Aug 5 21:53:07.721919 systemd-networkd[1370]: lxc_health: Gained IPv6LL Aug 5 21:53:07.907358 kubelet[2490]: E0805 21:53:07.907044 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:53:08.278471 kubelet[2490]: E0805 21:53:08.278381 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:53:12.405846 sshd[4293]: pam_unix(sshd:session): session closed for user core Aug 5 21:53:12.409446 systemd[1]: sshd@24-10.0.0.97:22-10.0.0.1:34538.service: Deactivated successfully. Aug 5 21:53:12.412884 systemd[1]: session-25.scope: Deactivated successfully. Aug 5 21:53:12.414502 systemd-logind[1419]: Session 25 logged out. Waiting for processes to exit. Aug 5 21:53:12.415484 systemd-logind[1419]: Removed session 25.