Aug 5 22:00:41.903414 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Aug 5 22:00:41.903433 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Aug 5 20:24:20 -00 2024 Aug 5 22:00:41.903443 kernel: KASLR enabled Aug 5 22:00:41.903448 kernel: efi: EFI v2.7 by EDK II Aug 5 22:00:41.903454 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb8fd018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Aug 5 22:00:41.903460 kernel: random: crng init done Aug 5 22:00:41.903467 kernel: ACPI: Early table checksum verification disabled Aug 5 22:00:41.903473 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Aug 5 22:00:41.903479 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Aug 5 22:00:41.903486 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Aug 5 22:00:41.903492 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Aug 5 22:00:41.903498 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Aug 5 22:00:41.903504 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Aug 5 22:00:41.903511 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Aug 5 22:00:41.903518 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 5 22:00:41.903525 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Aug 5 22:00:41.903532 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Aug 5 22:00:41.903538 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 5 22:00:41.903544 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Aug 5 22:00:41.903550 kernel: NUMA: Failed to initialise from firmware Aug 5 22:00:41.903557 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Aug 5 22:00:41.903563 kernel: NUMA: NODE_DATA [mem 0xdc95b800-0xdc960fff] Aug 5 22:00:41.903569 kernel: Zone ranges: Aug 5 22:00:41.903575 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Aug 5 22:00:41.903582 kernel: DMA32 empty Aug 5 22:00:41.903589 kernel: Normal empty Aug 5 22:00:41.903595 kernel: Movable zone start for each node Aug 5 22:00:41.903601 kernel: Early memory node ranges Aug 5 22:00:41.903608 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Aug 5 22:00:41.903614 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Aug 5 22:00:41.903620 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Aug 5 22:00:41.903626 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Aug 5 22:00:41.903633 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Aug 5 22:00:41.903639 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Aug 5 22:00:41.903645 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Aug 5 22:00:41.903651 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Aug 5 22:00:41.903657 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Aug 5 22:00:41.903665 kernel: psci: probing for conduit method from ACPI. Aug 5 22:00:41.903671 kernel: psci: PSCIv1.1 detected in firmware. 
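The faked NUMA node above spans physical memory 0x40000000 through 0xdcffffff, which is consistent with the 2572288K total the kernel reports a few entries later. A minimal Python sketch of that arithmetic, using only the addresses from the log:

    # Span of the single (faked) NUMA node reported above.
    start = 0x40000000
    end_inclusive = 0xdcffffff

    span_bytes = end_inclusive - start + 1   # 0x9d000000
    span_kib = span_bytes // 1024            # 2,572,288 KiB

    print(hex(span_bytes))   # 0x9d000000
    print(span_kib)          # 2572288 -> matches the "2572288K" total reported at boot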
Aug 5 22:00:41.903678 kernel: psci: Using standard PSCI v0.2 function IDs Aug 5 22:00:41.903686 kernel: psci: Trusted OS migration not required Aug 5 22:00:41.903693 kernel: psci: SMC Calling Convention v1.1 Aug 5 22:00:41.903700 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Aug 5 22:00:41.903708 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Aug 5 22:00:41.903715 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Aug 5 22:00:41.903722 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Aug 5 22:00:41.903728 kernel: Detected PIPT I-cache on CPU0 Aug 5 22:00:41.903735 kernel: CPU features: detected: GIC system register CPU interface Aug 5 22:00:41.903742 kernel: CPU features: detected: Hardware dirty bit management Aug 5 22:00:41.903749 kernel: CPU features: detected: Spectre-v4 Aug 5 22:00:41.903755 kernel: CPU features: detected: Spectre-BHB Aug 5 22:00:41.903762 kernel: CPU features: kernel page table isolation forced ON by KASLR Aug 5 22:00:41.903769 kernel: CPU features: detected: Kernel page table isolation (KPTI) Aug 5 22:00:41.903777 kernel: CPU features: detected: ARM erratum 1418040 Aug 5 22:00:41.903784 kernel: alternatives: applying boot alternatives Aug 5 22:00:41.903791 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=bb6c4f94d40caa6d83ad7b7b3f8907e11ce677871c150228b9a5377ddab3341e Aug 5 22:00:41.903799 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 5 22:00:41.903805 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 5 22:00:41.903812 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 5 22:00:41.903826 kernel: Fallback order for Node 0: 0 Aug 5 22:00:41.903833 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Aug 5 22:00:41.903839 kernel: Policy zone: DMA Aug 5 22:00:41.903846 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 5 22:00:41.903860 kernel: software IO TLB: area num 4. Aug 5 22:00:41.903870 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Aug 5 22:00:41.903877 kernel: Memory: 2386864K/2572288K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 185424K reserved, 0K cma-reserved) Aug 5 22:00:41.903884 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Aug 5 22:00:41.903890 kernel: trace event string verifier disabled Aug 5 22:00:41.903897 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 5 22:00:41.903904 kernel: rcu: RCU event tracing is enabled. Aug 5 22:00:41.903911 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Aug 5 22:00:41.903918 kernel: Trampoline variant of Tasks RCU enabled. Aug 5 22:00:41.903925 kernel: Tracing variant of Tasks RCU enabled. Aug 5 22:00:41.903932 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
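The kernel command line above carries Flatcar's dm-verity and root-device parameters. A small Python sketch, using only the string from the log, that splits it into key/value pairs so settings such as root= and verity.usrhash are easy to pick out (this is a simplification; the kernel's own parser also handles quoting):

    cmdline = (
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
        "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 "
        "flatcar.first_boot=detected acpi=force "
        "verity.usrhash=bb6c4f94d40caa6d83ad7b7b3f8907e11ce677871c150228b9a5377ddab3341e"
    )

    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True   # a bare flag (no '=') would become True

    print(params["root"])            # LABEL=ROOT
    print(params["verity.usrhash"])  # bb6c4f94...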
Aug 5 22:00:41.903939 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Aug 5 22:00:41.903945 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Aug 5 22:00:41.903953 kernel: GICv3: 256 SPIs implemented Aug 5 22:00:41.903960 kernel: GICv3: 0 Extended SPIs implemented Aug 5 22:00:41.903967 kernel: Root IRQ handler: gic_handle_irq Aug 5 22:00:41.903973 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Aug 5 22:00:41.903980 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Aug 5 22:00:41.903987 kernel: ITS [mem 0x08080000-0x0809ffff] Aug 5 22:00:41.903994 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1) Aug 5 22:00:41.904001 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1) Aug 5 22:00:41.904008 kernel: GICv3: using LPI property table @0x00000000400f0000 Aug 5 22:00:41.904014 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Aug 5 22:00:41.904021 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 5 22:00:41.904030 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 5 22:00:41.904037 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Aug 5 22:00:41.904043 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Aug 5 22:00:41.904050 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Aug 5 22:00:41.904057 kernel: arm-pv: using stolen time PV Aug 5 22:00:41.904064 kernel: Console: colour dummy device 80x25 Aug 5 22:00:41.904071 kernel: ACPI: Core revision 20230628 Aug 5 22:00:41.904078 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Aug 5 22:00:41.904085 kernel: pid_max: default: 32768 minimum: 301 Aug 5 22:00:41.904092 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Aug 5 22:00:41.904100 kernel: SELinux: Initializing. Aug 5 22:00:41.904107 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 5 22:00:41.904114 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 5 22:00:41.904121 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:00:41.904128 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:00:41.904135 kernel: rcu: Hierarchical SRCU implementation. Aug 5 22:00:41.904142 kernel: rcu: Max phase no-delay instances is 400. Aug 5 22:00:41.904149 kernel: Platform MSI: ITS@0x8080000 domain created Aug 5 22:00:41.904155 kernel: PCI/MSI: ITS@0x8080000 domain created Aug 5 22:00:41.904164 kernel: Remapping and enabling EFI services. Aug 5 22:00:41.904171 kernel: smp: Bringing up secondary CPUs ... 
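The arch timer runs at 25.00 MHz, which is where the 40 ns sched_clock resolution and the "skipped" delay-loop calibration above come from. A sketch of that arithmetic, assuming the usual CONFIG_HZ=1000 tick rate (the HZ value is an assumption, not something the log states):

    timer_hz = 25_000_000          # 25.00 MHz arch timer, from the log
    HZ = 1000                      # assumed kernel tick rate (CONFIG_HZ)

    resolution_ns = 1e9 / timer_hz     # 40.0 ns per timer tick
    lpj = timer_hz // HZ               # loops_per_jiffy -> 25000, matches lpj=25000
    bogomips = lpj * HZ / 500_000      # 50.0, matches "50.00 BogoMIPS"

    print(resolution_ns, lpj, bogomips)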
Aug 5 22:00:41.904178 kernel: Detected PIPT I-cache on CPU1 Aug 5 22:00:41.904185 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Aug 5 22:00:41.904192 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Aug 5 22:00:41.904199 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 5 22:00:41.904205 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Aug 5 22:00:41.904213 kernel: Detected PIPT I-cache on CPU2 Aug 5 22:00:41.904220 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Aug 5 22:00:41.904226 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Aug 5 22:00:41.904235 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 5 22:00:41.904242 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Aug 5 22:00:41.904253 kernel: Detected PIPT I-cache on CPU3 Aug 5 22:00:41.904261 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Aug 5 22:00:41.904269 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Aug 5 22:00:41.904276 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 5 22:00:41.904283 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Aug 5 22:00:41.904290 kernel: smp: Brought up 1 node, 4 CPUs Aug 5 22:00:41.904297 kernel: SMP: Total of 4 processors activated. Aug 5 22:00:41.904306 kernel: CPU features: detected: 32-bit EL0 Support Aug 5 22:00:41.904313 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Aug 5 22:00:41.904320 kernel: CPU features: detected: Common not Private translations Aug 5 22:00:41.904328 kernel: CPU features: detected: CRC32 instructions Aug 5 22:00:41.904335 kernel: CPU features: detected: Enhanced Virtualization Traps Aug 5 22:00:41.904342 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Aug 5 22:00:41.904349 kernel: CPU features: detected: LSE atomic instructions Aug 5 22:00:41.904357 kernel: CPU features: detected: Privileged Access Never Aug 5 22:00:41.904365 kernel: CPU features: detected: RAS Extension Support Aug 5 22:00:41.904372 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Aug 5 22:00:41.904379 kernel: CPU: All CPU(s) started at EL1 Aug 5 22:00:41.904387 kernel: alternatives: applying system-wide alternatives Aug 5 22:00:41.904394 kernel: devtmpfs: initialized Aug 5 22:00:41.904401 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 5 22:00:41.904409 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Aug 5 22:00:41.904416 kernel: pinctrl core: initialized pinctrl subsystem Aug 5 22:00:41.904423 kernel: SMBIOS 3.0.0 present. 
Aug 5 22:00:41.904432 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Aug 5 22:00:41.904439 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 5 22:00:41.904446 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Aug 5 22:00:41.904454 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Aug 5 22:00:41.904461 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Aug 5 22:00:41.904468 kernel: audit: initializing netlink subsys (disabled) Aug 5 22:00:41.904476 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Aug 5 22:00:41.904483 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 5 22:00:41.904490 kernel: cpuidle: using governor menu Aug 5 22:00:41.904499 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Aug 5 22:00:41.904506 kernel: ASID allocator initialised with 32768 entries Aug 5 22:00:41.904513 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 5 22:00:41.904520 kernel: Serial: AMBA PL011 UART driver Aug 5 22:00:41.904528 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Aug 5 22:00:41.904535 kernel: Modules: 0 pages in range for non-PLT usage Aug 5 22:00:41.904542 kernel: Modules: 509120 pages in range for PLT usage Aug 5 22:00:41.904549 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 5 22:00:41.904556 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Aug 5 22:00:41.904565 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Aug 5 22:00:41.904572 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Aug 5 22:00:41.904579 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 5 22:00:41.904587 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Aug 5 22:00:41.904594 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Aug 5 22:00:41.904601 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Aug 5 22:00:41.904608 kernel: ACPI: Added _OSI(Module Device) Aug 5 22:00:41.904615 kernel: ACPI: Added _OSI(Processor Device) Aug 5 22:00:41.904622 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Aug 5 22:00:41.904631 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 5 22:00:41.904638 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 5 22:00:41.904645 kernel: ACPI: Interpreter enabled Aug 5 22:00:41.904652 kernel: ACPI: Using GIC for interrupt routing Aug 5 22:00:41.904660 kernel: ACPI: MCFG table detected, 1 entries Aug 5 22:00:41.904667 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Aug 5 22:00:41.904674 kernel: printk: console [ttyAMA0] enabled Aug 5 22:00:41.904681 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 5 22:00:41.906708 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 5 22:00:41.906787 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Aug 5 22:00:41.906876 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Aug 5 22:00:41.906943 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Aug 5 22:00:41.907008 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Aug 5 22:00:41.907018 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Aug 5 22:00:41.907025 kernel: PCI host bridge to bus 
0000:00 Aug 5 22:00:41.907109 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Aug 5 22:00:41.907191 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Aug 5 22:00:41.907251 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Aug 5 22:00:41.907310 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 5 22:00:41.907390 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Aug 5 22:00:41.907465 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Aug 5 22:00:41.907532 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Aug 5 22:00:41.907599 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Aug 5 22:00:41.907664 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Aug 5 22:00:41.907728 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Aug 5 22:00:41.907791 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Aug 5 22:00:41.907886 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Aug 5 22:00:41.907948 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Aug 5 22:00:41.908017 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Aug 5 22:00:41.908078 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Aug 5 22:00:41.908087 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Aug 5 22:00:41.908095 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Aug 5 22:00:41.908102 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Aug 5 22:00:41.908109 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Aug 5 22:00:41.908121 kernel: iommu: Default domain type: Translated Aug 5 22:00:41.908129 kernel: iommu: DMA domain TLB invalidation policy: strict mode Aug 5 22:00:41.908136 kernel: efivars: Registered efivars operations Aug 5 22:00:41.908143 kernel: vgaarb: loaded Aug 5 22:00:41.908152 kernel: clocksource: Switched to clocksource arch_sys_counter Aug 5 22:00:41.908160 kernel: VFS: Disk quotas dquot_6.6.0 Aug 5 22:00:41.908167 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 5 22:00:41.908174 kernel: pnp: PnP ACPI init Aug 5 22:00:41.908249 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Aug 5 22:00:41.908260 kernel: pnp: PnP ACPI: found 1 devices Aug 5 22:00:41.908267 kernel: NET: Registered PF_INET protocol family Aug 5 22:00:41.908275 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 5 22:00:41.908284 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 5 22:00:41.908292 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 5 22:00:41.908299 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 5 22:00:41.908307 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 5 22:00:41.908314 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 5 22:00:41.908321 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 5 22:00:41.908329 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 5 22:00:41.908336 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 5 22:00:41.908343 kernel: PCI: CLS 0 bytes, default 64 Aug 5 22:00:41.908352 kernel: kvm [1]: HYP mode not available Aug 5 22:00:41.908359 kernel: Initialise system trusted keyrings Aug 5 
22:00:41.908366 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 5 22:00:41.908373 kernel: Key type asymmetric registered Aug 5 22:00:41.908381 kernel: Asymmetric key parser 'x509' registered Aug 5 22:00:41.908388 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 5 22:00:41.908395 kernel: io scheduler mq-deadline registered Aug 5 22:00:41.908402 kernel: io scheduler kyber registered Aug 5 22:00:41.908409 kernel: io scheduler bfq registered Aug 5 22:00:41.908418 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Aug 5 22:00:41.908425 kernel: ACPI: button: Power Button [PWRB] Aug 5 22:00:41.908433 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Aug 5 22:00:41.908497 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Aug 5 22:00:41.908507 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 5 22:00:41.908514 kernel: thunder_xcv, ver 1.0 Aug 5 22:00:41.908522 kernel: thunder_bgx, ver 1.0 Aug 5 22:00:41.908529 kernel: nicpf, ver 1.0 Aug 5 22:00:41.908536 kernel: nicvf, ver 1.0 Aug 5 22:00:41.908623 kernel: rtc-efi rtc-efi.0: registered as rtc0 Aug 5 22:00:41.908685 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-08-05T22:00:41 UTC (1722895241) Aug 5 22:00:41.908694 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 5 22:00:41.908702 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Aug 5 22:00:41.908709 kernel: watchdog: Delayed init of the lockup detector failed: -19 Aug 5 22:00:41.908716 kernel: watchdog: Hard watchdog permanently disabled Aug 5 22:00:41.908724 kernel: NET: Registered PF_INET6 protocol family Aug 5 22:00:41.908731 kernel: Segment Routing with IPv6 Aug 5 22:00:41.908741 kernel: In-situ OAM (IOAM) with IPv6 Aug 5 22:00:41.908748 kernel: NET: Registered PF_PACKET protocol family Aug 5 22:00:41.908755 kernel: Key type dns_resolver registered Aug 5 22:00:41.908762 kernel: registered taskstats version 1 Aug 5 22:00:41.908769 kernel: Loading compiled-in X.509 certificates Aug 5 22:00:41.908777 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: 7b6de7a842f23ac7c1bb6bedfb9546933daaea09' Aug 5 22:00:41.908784 kernel: Key type .fscrypt registered Aug 5 22:00:41.908791 kernel: Key type fscrypt-provisioning registered Aug 5 22:00:41.908798 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 5 22:00:41.908807 kernel: ima: Allocated hash algorithm: sha1 Aug 5 22:00:41.908821 kernel: ima: No architecture policies found Aug 5 22:00:41.908829 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Aug 5 22:00:41.908836 kernel: clk: Disabling unused clocks Aug 5 22:00:41.908844 kernel: Freeing unused kernel memory: 39040K Aug 5 22:00:41.908851 kernel: Run /init as init process Aug 5 22:00:41.908879 kernel: with arguments: Aug 5 22:00:41.908886 kernel: /init Aug 5 22:00:41.908894 kernel: with environment: Aug 5 22:00:41.908903 kernel: HOME=/ Aug 5 22:00:41.908910 kernel: TERM=linux Aug 5 22:00:41.908917 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 5 22:00:41.908927 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 22:00:41.908936 systemd[1]: Detected virtualization kvm. 
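The rtc-efi entry above pairs the wall-clock time with its Unix timestamp. A quick stdlib check of that pairing, with the value copied from the log:

    from datetime import datetime, timezone

    ts = 1722895241   # from the rtc-efi log entry
    print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
    # 2024-08-05T22:00:41+00:00 -> matches "2024-08-05T22:00:41 UTC"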
Aug 5 22:00:41.908944 systemd[1]: Detected architecture arm64. Aug 5 22:00:41.908951 systemd[1]: Running in initrd. Aug 5 22:00:41.908959 systemd[1]: No hostname configured, using default hostname. Aug 5 22:00:41.908968 systemd[1]: Hostname set to . Aug 5 22:00:41.908976 systemd[1]: Initializing machine ID from VM UUID. Aug 5 22:00:41.908983 systemd[1]: Queued start job for default target initrd.target. Aug 5 22:00:41.908991 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:00:41.908999 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:00:41.909007 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 5 22:00:41.909015 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 5 22:00:41.909023 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 5 22:00:41.909032 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 5 22:00:41.909041 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 5 22:00:41.909049 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 5 22:00:41.909057 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:00:41.909065 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:00:41.909073 systemd[1]: Reached target paths.target - Path Units. Aug 5 22:00:41.909082 systemd[1]: Reached target slices.target - Slice Units. Aug 5 22:00:41.909090 systemd[1]: Reached target swap.target - Swaps. Aug 5 22:00:41.909098 systemd[1]: Reached target timers.target - Timer Units. Aug 5 22:00:41.909106 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 22:00:41.909113 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 22:00:41.909121 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 5 22:00:41.909129 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 5 22:00:41.909137 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:00:41.909144 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 22:00:41.909154 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:00:41.909161 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 22:00:41.909169 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 5 22:00:41.909177 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 22:00:41.909185 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 5 22:00:41.909193 systemd[1]: Starting systemd-fsck-usr.service... Aug 5 22:00:41.909201 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 22:00:41.909209 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 22:00:41.909216 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:00:41.909226 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 5 22:00:41.909234 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Aug 5 22:00:41.909241 systemd[1]: Finished systemd-fsck-usr.service. Aug 5 22:00:41.909250 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 5 22:00:41.909259 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:00:41.909267 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:00:41.909292 systemd-journald[237]: Collecting audit messages is disabled. Aug 5 22:00:41.909311 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 22:00:41.909321 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 5 22:00:41.909329 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 5 22:00:41.909338 systemd-journald[237]: Journal started Aug 5 22:00:41.909355 systemd-journald[237]: Runtime Journal (/run/log/journal/fcc1a3352e4d44a79d7e79abadc7c1b0) is 5.9M, max 47.3M, 41.4M free. Aug 5 22:00:41.890678 systemd-modules-load[238]: Inserted module 'overlay' Aug 5 22:00:41.912602 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 22:00:41.913155 systemd-modules-load[238]: Inserted module 'br_netfilter' Aug 5 22:00:41.914002 kernel: Bridge firewalling registered Aug 5 22:00:41.915116 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 22:00:41.916416 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:00:41.928008 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 22:00:41.929508 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 5 22:00:41.930985 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:00:41.935402 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 5 22:00:41.937708 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:00:41.940477 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:00:41.944354 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 22:00:41.950659 dracut-cmdline[273]: dracut-dracut-053 Aug 5 22:00:41.953440 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=bb6c4f94d40caa6d83ad7b7b3f8907e11ce677871c150228b9a5377ddab3341e Aug 5 22:00:41.971034 systemd-resolved[279]: Positive Trust Anchors: Aug 5 22:00:41.971047 systemd-resolved[279]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 22:00:41.971078 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 22:00:41.975671 systemd-resolved[279]: Defaulting to hostname 'linux'. Aug 5 22:00:41.978479 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 22:00:41.979524 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:00:42.023883 kernel: SCSI subsystem initialized Aug 5 22:00:42.028877 kernel: Loading iSCSI transport class v2.0-870. Aug 5 22:00:42.036883 kernel: iscsi: registered transport (tcp) Aug 5 22:00:42.049962 kernel: iscsi: registered transport (qla4xxx) Aug 5 22:00:42.050000 kernel: QLogic iSCSI HBA Driver Aug 5 22:00:42.095081 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 5 22:00:42.105115 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 5 22:00:42.124557 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 5 22:00:42.124608 kernel: device-mapper: uevent: version 1.0.3 Aug 5 22:00:42.126204 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 5 22:00:42.172896 kernel: raid6: neonx8 gen() 15726 MB/s Aug 5 22:00:42.189897 kernel: raid6: neonx4 gen() 15622 MB/s Aug 5 22:00:42.206876 kernel: raid6: neonx2 gen() 13201 MB/s Aug 5 22:00:42.223896 kernel: raid6: neonx1 gen() 10415 MB/s Aug 5 22:00:42.240875 kernel: raid6: int64x8 gen() 6914 MB/s Aug 5 22:00:42.257888 kernel: raid6: int64x4 gen() 7291 MB/s Aug 5 22:00:42.274883 kernel: raid6: int64x2 gen() 6108 MB/s Aug 5 22:00:42.291964 kernel: raid6: int64x1 gen() 5044 MB/s Aug 5 22:00:42.292007 kernel: raid6: using algorithm neonx8 gen() 15726 MB/s Aug 5 22:00:42.309897 kernel: raid6: .... xor() 12023 MB/s, rmw enabled Aug 5 22:00:42.309934 kernel: raid6: using neon recovery algorithm Aug 5 22:00:42.316879 kernel: xor: measuring software checksum speed Aug 5 22:00:42.317873 kernel: 8regs : 19830 MB/sec Aug 5 22:00:42.318868 kernel: 32regs : 19654 MB/sec Aug 5 22:00:42.319984 kernel: arm64_neon : 27197 MB/sec Aug 5 22:00:42.319996 kernel: xor: using function: arm64_neon (27197 MB/sec) Aug 5 22:00:42.377105 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 5 22:00:42.387755 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 5 22:00:42.401234 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:00:42.412971 systemd-udevd[460]: Using default interface naming scheme 'v255'. Aug 5 22:00:42.416090 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:00:42.427021 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 5 22:00:42.438185 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation Aug 5 22:00:42.464390 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
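The positive trust anchor systemd-resolved installs above is the root zone's DS record (key tag 20326 is the well-known root KSK-2017). A small sketch that labels its fields; the values are copied from the log, and the algorithm/digest-type meanings follow the IANA DNSSEC registries:

    # Root-zone trust anchor as logged by systemd-resolved:
    # ". IN DS 20326 8 2 e06d44b8...c7f8ec8d"
    ds = {
        "owner": ".",        # the DNS root
        "key_tag": 20326,    # identifies the root key-signing key (KSK-2017)
        "algorithm": 8,      # RSA/SHA-256
        "digest_type": 2,    # SHA-256 digest of the DNSKEY
        "digest": "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d",
    }
    print(len(bytes.fromhex(ds["digest"])))   # 32 bytes, as expected for SHA-256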
Aug 5 22:00:42.473112 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 22:00:42.511455 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:00:42.523021 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 5 22:00:42.536217 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 5 22:00:42.537393 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:00:42.540948 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:00:42.543117 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 22:00:42.551033 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 5 22:00:42.561877 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Aug 5 22:00:42.577809 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Aug 5 22:00:42.577965 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 5 22:00:42.577978 kernel: GPT:9289727 != 19775487 Aug 5 22:00:42.577988 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 5 22:00:42.578004 kernel: GPT:9289727 != 19775487 Aug 5 22:00:42.578013 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 5 22:00:42.578025 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 5 22:00:42.562918 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 5 22:00:42.572362 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 22:00:42.572474 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:00:42.577039 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:00:42.578378 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:00:42.578503 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:00:42.583307 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:00:42.589118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:00:42.603824 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (521) Aug 5 22:00:42.603886 kernel: BTRFS: device fsid 8a9ab799-ab52-4671-9234-72d7c6e57b99 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (504) Aug 5 22:00:42.605218 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 5 22:00:42.606757 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:00:42.616717 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 5 22:00:42.623746 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 5 22:00:42.627600 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 5 22:00:42.628773 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 5 22:00:42.648002 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 5 22:00:42.649631 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:00:42.655675 disk-uuid[550]: Primary Header is updated. 
Aug 5 22:00:42.655675 disk-uuid[550]: Secondary Entries is updated. Aug 5 22:00:42.655675 disk-uuid[550]: Secondary Header is updated. Aug 5 22:00:42.659874 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 5 22:00:42.668988 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:00:43.671947 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 5 22:00:43.672629 disk-uuid[552]: The operation has completed successfully. Aug 5 22:00:43.696498 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 5 22:00:43.696581 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 5 22:00:43.721027 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 5 22:00:43.724680 sh[572]: Success Aug 5 22:00:43.741000 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Aug 5 22:00:43.768230 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 5 22:00:43.776110 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 5 22:00:43.778679 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 5 22:00:43.787333 kernel: BTRFS info (device dm-0): first mount of filesystem 8a9ab799-ab52-4671-9234-72d7c6e57b99 Aug 5 22:00:43.787367 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Aug 5 22:00:43.788446 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 5 22:00:43.788462 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 5 22:00:43.789886 kernel: BTRFS info (device dm-0): using free space tree Aug 5 22:00:43.793115 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 5 22:00:43.794502 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 5 22:00:43.811022 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 5 22:00:43.812661 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 5 22:00:43.841632 kernel: BTRFS info (device vda6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d Aug 5 22:00:43.841670 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 5 22:00:43.841681 kernel: BTRFS info (device vda6): using free space tree Aug 5 22:00:43.844883 kernel: BTRFS info (device vda6): auto enabling async discard Aug 5 22:00:43.851940 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 5 22:00:43.853877 kernel: BTRFS info (device vda6): last unmount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d Aug 5 22:00:43.859598 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 5 22:00:43.866021 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 5 22:00:43.922978 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 22:00:43.942048 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 22:00:43.983532 systemd-networkd[761]: lo: Link UP Aug 5 22:00:43.983541 systemd-networkd[761]: lo: Gained carrier Aug 5 22:00:43.984483 systemd-networkd[761]: Enumeration completed Aug 5 22:00:43.984580 systemd[1]: Started systemd-networkd.service - Network Configuration. 
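The GPT warnings a little earlier (backup header at LBA 9289727 on a disk whose last LBA is 19775487) are what disk-uuid.service has just repaired. The arithmetic behind those numbers, using only figures from the log; the reading that the image was built for a smaller disk and later placed on a larger virtual disk is an interpretation, not something the log states:

    SECTOR = 512

    blocks = 19_775_488                # virtio_blk: 512-byte logical blocks
    size_bytes = blocks * SECTOR
    print(size_bytes / 1e9, size_bytes / 2**30)
    # ~10.1 GB, ~9.43 GiB -> matches "19775488 512-byte logical blocks (10.1 GB/9.43 GiB)"

    stale_last_lba = 9_289_727         # where the stale backup GPT header expected the disk to end
    print((stale_last_lba + 1) * SECTOR / 1e9)   # ~4.76 GB, the size the backup header was written for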
Aug 5 22:00:43.985388 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:00:43.985391 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 22:00:43.986069 systemd[1]: Reached target network.target - Network. Aug 5 22:00:43.986651 systemd-networkd[761]: eth0: Link UP Aug 5 22:00:43.986654 systemd-networkd[761]: eth0: Gained carrier Aug 5 22:00:43.986661 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:00:44.008174 ignition[681]: Ignition 2.19.0 Aug 5 22:00:44.008184 ignition[681]: Stage: fetch-offline Aug 5 22:00:44.008219 ignition[681]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:00:44.008228 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 5 22:00:44.010262 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.149/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 5 22:00:44.008322 ignition[681]: parsed url from cmdline: "" Aug 5 22:00:44.008325 ignition[681]: no config URL provided Aug 5 22:00:44.008330 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 22:00:44.008337 ignition[681]: no config at "/usr/lib/ignition/user.ign" Aug 5 22:00:44.008359 ignition[681]: op(1): [started] loading QEMU firmware config module Aug 5 22:00:44.008364 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 5 22:00:44.023493 ignition[681]: op(1): [finished] loading QEMU firmware config module Aug 5 22:00:44.059995 ignition[681]: parsing config with SHA512: e274c0aa52778ba40e1302734807da7b76061eb39ecf10380869916b32abcd83259e679058e1689a74fe04ad29f50280c4b47989c5b2d0e75a76d40e05f310d4 Aug 5 22:00:44.065466 unknown[681]: fetched base config from "system" Aug 5 22:00:44.066233 unknown[681]: fetched user config from "qemu" Aug 5 22:00:44.066742 ignition[681]: fetch-offline: fetch-offline passed Aug 5 22:00:44.068410 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 22:00:44.066815 ignition[681]: Ignition finished successfully Aug 5 22:00:44.069894 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 5 22:00:44.080940 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 5 22:00:44.091999 ignition[772]: Ignition 2.19.0 Aug 5 22:00:44.092009 ignition[772]: Stage: kargs Aug 5 22:00:44.092161 ignition[772]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:00:44.092170 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 5 22:00:44.094957 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 5 22:00:44.093077 ignition[772]: kargs: kargs passed Aug 5 22:00:44.093118 ignition[772]: Ignition finished successfully Aug 5 22:00:44.109026 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 5 22:00:44.119744 ignition[781]: Ignition 2.19.0 Aug 5 22:00:44.119754 ignition[781]: Stage: disks Aug 5 22:00:44.119975 ignition[781]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:00:44.119985 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 5 22:00:44.122469 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 5 22:00:44.120950 ignition[781]: disks: disks passed Aug 5 22:00:44.124035 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
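systemd-networkd brought eth0 up with a DHCPv4 lease of 10.0.0.149/16 and gateway 10.0.0.1. A small stdlib sketch of what that lease implies about the attached network, with the addresses taken from the log:

    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.149/16")   # DHCPv4 lease from the log
    gateway = ipaddress.ip_address("10.0.0.1")

    print(iface.network)                 # 10.0.0.0/16
    print(gateway in iface.network)      # True, the gateway is on-link
    print(iface.network.num_addresses)   # 65536 addresses in a /16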
Aug 5 22:00:44.120999 ignition[781]: Ignition finished successfully Aug 5 22:00:44.125569 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 5 22:00:44.127043 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 22:00:44.128978 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 22:00:44.130488 systemd[1]: Reached target basic.target - Basic System. Aug 5 22:00:44.142013 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 5 22:00:44.153833 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 5 22:00:44.157542 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 5 22:00:44.170989 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 5 22:00:44.213875 kernel: EXT4-fs (vda9): mounted filesystem ec701988-3dff-4e7d-a2a2-79d78965de5d r/w with ordered data mode. Quota mode: none. Aug 5 22:00:44.214127 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 5 22:00:44.215304 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 5 22:00:44.229953 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 22:00:44.231565 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 5 22:00:44.232739 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 5 22:00:44.232818 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 5 22:00:44.232906 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:00:44.241344 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800) Aug 5 22:00:44.241364 kernel: BTRFS info (device vda6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d Aug 5 22:00:44.241382 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 5 22:00:44.239254 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 5 22:00:44.245306 kernel: BTRFS info (device vda6): using free space tree Aug 5 22:00:44.245325 kernel: BTRFS info (device vda6): auto enabling async discard Aug 5 22:00:44.244714 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 5 22:00:44.248663 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 5 22:00:44.293540 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory Aug 5 22:00:44.298020 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory Aug 5 22:00:44.302291 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory Aug 5 22:00:44.306952 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory Aug 5 22:00:44.383154 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 5 22:00:44.400996 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 5 22:00:44.403440 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 5 22:00:44.408880 kernel: BTRFS info (device vda6): last unmount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d Aug 5 22:00:44.424178 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
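The systemd-fsck result above ("clean, 14/553520 files, 52654/553472 blocks") describes a nearly empty ROOT filesystem on first boot. The utilization implied by those counts; the block size is not stated in the log, so only ratios are computed:

    files_used, files_total = 14, 553_520
    blocks_used, blocks_total = 52_654, 553_472

    print(f"inodes: {files_used / files_total:.4%}")    # ~0.0025% of inodes in use
    print(f"blocks: {blocks_used / blocks_total:.1%}")  # ~9.5% of blocks in use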
Aug 5 22:00:44.428546 ignition[914]: INFO : Ignition 2.19.0 Aug 5 22:00:44.428546 ignition[914]: INFO : Stage: mount Aug 5 22:00:44.430153 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:00:44.430153 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 5 22:00:44.433029 ignition[914]: INFO : mount: mount passed Aug 5 22:00:44.433029 ignition[914]: INFO : Ignition finished successfully Aug 5 22:00:44.432283 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 5 22:00:44.444985 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 5 22:00:44.786224 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 5 22:00:44.796060 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 22:00:44.802578 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927) Aug 5 22:00:44.802610 kernel: BTRFS info (device vda6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d Aug 5 22:00:44.802621 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 5 22:00:44.804225 kernel: BTRFS info (device vda6): using free space tree Aug 5 22:00:44.806880 kernel: BTRFS info (device vda6): auto enabling async discard Aug 5 22:00:44.807328 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 5 22:00:44.828739 ignition[944]: INFO : Ignition 2.19.0 Aug 5 22:00:44.828739 ignition[944]: INFO : Stage: files Aug 5 22:00:44.828739 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:00:44.828739 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 5 22:00:44.832625 ignition[944]: DEBUG : files: compiled without relabeling support, skipping Aug 5 22:00:44.832625 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 5 22:00:44.832625 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 5 22:00:44.832625 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 5 22:00:44.832625 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 5 22:00:44.838715 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 5 22:00:44.838715 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 5 22:00:44.838715 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 5 22:00:44.838715 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 5 22:00:44.838715 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Aug 5 22:00:44.832794 unknown[944]: wrote ssh authorized keys file for user: core Aug 5 22:00:44.869371 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 5 22:00:44.907915 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 5 22:00:44.907915 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 5 22:00:44.911307 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): 
GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Aug 5 22:00:45.236054 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Aug 5 22:00:45.335927 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 5 22:00:45.335927 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Aug 5 22:00:45.339543 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Aug 5 22:00:45.339543 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:00:45.339543 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:00:45.339543 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:00:45.339543 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:00:45.339543 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:00:45.339543 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:00:45.339543 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:00:45.339543 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:00:45.339543 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Aug 5 22:00:45.339543 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Aug 5 22:00:45.339543 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Aug 5 22:00:45.339543 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1 Aug 5 22:00:45.504592 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Aug 5 22:00:45.779456 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Aug 5 22:00:45.779456 ignition[944]: INFO : files: op(d): [started] processing unit "containerd.service" Aug 5 22:00:45.783104 ignition[944]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 5 22:00:45.783104 ignition[944]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 5 22:00:45.783104 ignition[944]: INFO 
: files: op(d): [finished] processing unit "containerd.service" Aug 5 22:00:45.783104 ignition[944]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Aug 5 22:00:45.783104 ignition[944]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:00:45.783104 ignition[944]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:00:45.783104 ignition[944]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Aug 5 22:00:45.783104 ignition[944]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Aug 5 22:00:45.783104 ignition[944]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 5 22:00:45.783104 ignition[944]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 5 22:00:45.783104 ignition[944]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Aug 5 22:00:45.783104 ignition[944]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Aug 5 22:00:45.814010 systemd-networkd[761]: eth0: Gained IPv6LL Aug 5 22:00:45.824428 ignition[944]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 5 22:00:45.828001 ignition[944]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 5 22:00:45.830809 ignition[944]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Aug 5 22:00:45.830809 ignition[944]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Aug 5 22:00:45.830809 ignition[944]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Aug 5 22:00:45.830809 ignition[944]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:00:45.830809 ignition[944]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:00:45.830809 ignition[944]: INFO : files: files passed Aug 5 22:00:45.830809 ignition[944]: INFO : Ignition finished successfully Aug 5 22:00:45.831485 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 5 22:00:45.839632 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 5 22:00:45.841309 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 5 22:00:45.843092 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 5 22:00:45.843195 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
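Earlier, in the fetch-offline stage, Ignition logged the SHA512 of the config it parsed ("parsing config with SHA512: e274c0aa..."). A hedged sketch of how such a digest can be reproduced with hashlib, assuming the digest is taken over the raw config bytes; the actual config delivered via QEMU's fw_cfg is not shown in the log, so the bytes below are a placeholder:

    import hashlib

    config_bytes = b"placeholder ignition config"   # stand-in, not the real config

    digest = hashlib.sha512(config_bytes).hexdigest()
    print(digest)   # a 128-hex-character SHA-512, the same shape as the value Ignition logged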
Aug 5 22:00:45.848913 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory Aug 5 22:00:45.850924 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:00:45.850924 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:00:45.854124 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:00:45.855086 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:00:45.857057 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 5 22:00:45.859480 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 5 22:00:45.881070 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 5 22:00:45.881174 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 5 22:00:45.883268 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 5 22:00:45.884996 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 5 22:00:45.886768 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 5 22:00:45.887506 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 5 22:00:45.902422 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:00:45.916058 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 5 22:00:45.923656 systemd[1]: Stopped target network.target - Network. Aug 5 22:00:45.924673 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:00:45.926428 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:00:45.928394 systemd[1]: Stopped target timers.target - Timer Units. Aug 5 22:00:45.930157 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 5 22:00:45.930264 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:00:45.932717 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 5 22:00:45.934675 systemd[1]: Stopped target basic.target - Basic System. Aug 5 22:00:45.936198 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 5 22:00:45.937882 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:00:45.939816 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 5 22:00:45.941730 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 5 22:00:45.943561 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:00:45.945476 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 5 22:00:45.947538 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 5 22:00:45.949295 systemd[1]: Stopped target swap.target - Swaps. Aug 5 22:00:45.950743 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 5 22:00:45.950886 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 5 22:00:45.953109 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:00:45.954198 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Aug 5 22:00:45.955981 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 5 22:00:45.956939 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:00:45.959022 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 5 22:00:45.959130 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 5 22:00:45.961838 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 5 22:00:45.961952 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 22:00:45.963983 systemd[1]: Stopped target paths.target - Path Units. Aug 5 22:00:45.965457 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 5 22:00:45.966891 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:00:45.968479 systemd[1]: Stopped target slices.target - Slice Units. Aug 5 22:00:45.969834 systemd[1]: Stopped target sockets.target - Socket Units. Aug 5 22:00:45.971530 systemd[1]: iscsid.socket: Deactivated successfully. Aug 5 22:00:45.971612 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 22:00:45.973580 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 5 22:00:45.973666 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 22:00:45.975214 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 5 22:00:45.975318 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:00:45.976975 systemd[1]: ignition-files.service: Deactivated successfully. Aug 5 22:00:45.977071 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 5 22:00:45.988021 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 5 22:00:45.989584 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 5 22:00:45.990658 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 5 22:00:45.992485 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 5 22:00:45.994335 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 5 22:00:45.994468 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:00:45.996474 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 5 22:00:45.996568 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 22:00:46.002721 ignition[999]: INFO : Ignition 2.19.0 Aug 5 22:00:46.002721 ignition[999]: INFO : Stage: umount Aug 5 22:00:46.004336 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:00:46.004336 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 5 22:00:46.003263 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 5 22:00:46.009956 ignition[999]: INFO : umount: umount passed Aug 5 22:00:46.009956 ignition[999]: INFO : Ignition finished successfully Aug 5 22:00:46.003902 systemd-networkd[761]: eth0: DHCPv6 lease lost Aug 5 22:00:46.003970 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 5 22:00:46.009483 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 5 22:00:46.010118 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 5 22:00:46.010218 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 5 22:00:46.012066 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Aug 5 22:00:46.012149 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 5 22:00:46.013789 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 5 22:00:46.013948 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 5 22:00:46.016810 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 5 22:00:46.016961 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 5 22:00:46.019320 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 5 22:00:46.019430 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:00:46.020870 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 5 22:00:46.020923 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 5 22:00:46.022930 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 5 22:00:46.022975 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 5 22:00:46.024776 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 5 22:00:46.024837 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 5 22:00:46.026673 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 5 22:00:46.026716 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 5 22:00:46.028322 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 5 22:00:46.028363 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 5 22:00:46.037965 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 5 22:00:46.039062 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 5 22:00:46.039117 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 22:00:46.041048 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 5 22:00:46.041091 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:00:46.042790 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 5 22:00:46.042843 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 5 22:00:46.044699 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 5 22:00:46.044739 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:00:46.047093 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:00:46.056558 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 5 22:00:46.056666 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 5 22:00:46.064666 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 5 22:00:46.064819 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:00:46.067180 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 5 22:00:46.067217 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 5 22:00:46.069002 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 5 22:00:46.069033 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:00:46.070831 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 5 22:00:46.070893 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 5 22:00:46.073620 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 5 22:00:46.073666 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Aug 5 22:00:46.076571 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 22:00:46.076614 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:00:46.097011 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 5 22:00:46.098034 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 5 22:00:46.098086 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:00:46.100187 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 5 22:00:46.100229 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 22:00:46.102163 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 5 22:00:46.102206 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:00:46.104310 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:00:46.104350 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:00:46.106482 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 5 22:00:46.106574 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 5 22:00:46.108834 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 5 22:00:46.111014 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 5 22:00:46.122090 systemd[1]: Switching root. Aug 5 22:00:46.151738 systemd-journald[237]: Journal stopped Aug 5 22:00:46.913972 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Aug 5 22:00:46.914029 kernel: SELinux: policy capability network_peer_controls=1 Aug 5 22:00:46.914042 kernel: SELinux: policy capability open_perms=1 Aug 5 22:00:46.914052 kernel: SELinux: policy capability extended_socket_class=1 Aug 5 22:00:46.914063 kernel: SELinux: policy capability always_check_network=0 Aug 5 22:00:46.914075 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 5 22:00:46.914089 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 5 22:00:46.914104 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 5 22:00:46.914114 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 5 22:00:46.914129 kernel: audit: type=1403 audit(1722895246.359:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 5 22:00:46.914140 systemd[1]: Successfully loaded SELinux policy in 36.039ms. Aug 5 22:00:46.914157 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.157ms. Aug 5 22:00:46.914169 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 22:00:46.914180 systemd[1]: Detected virtualization kvm. Aug 5 22:00:46.914190 systemd[1]: Detected architecture arm64. Aug 5 22:00:46.914203 systemd[1]: Detected first boot. Aug 5 22:00:46.914213 systemd[1]: Initializing machine ID from VM UUID. Aug 5 22:00:46.914224 zram_generator::config[1063]: No configuration found. Aug 5 22:00:46.914235 systemd[1]: Populated /etc with preset unit settings. Aug 5 22:00:46.914246 systemd[1]: Queued start job for default target multi-user.target. 
Aug 5 22:00:46.914256 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 5 22:00:46.914273 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 5 22:00:46.914284 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 5 22:00:46.914295 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 5 22:00:46.914306 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 5 22:00:46.914318 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 5 22:00:46.914330 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 5 22:00:46.914341 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 5 22:00:46.914351 systemd[1]: Created slice user.slice - User and Session Slice. Aug 5 22:00:46.914362 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:00:46.914373 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:00:46.914384 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 5 22:00:46.914396 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 5 22:00:46.914407 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 5 22:00:46.914417 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 5 22:00:46.914428 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Aug 5 22:00:46.914439 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:00:46.914449 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 5 22:00:46.914460 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:00:46.914471 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 22:00:46.914482 systemd[1]: Reached target slices.target - Slice Units. Aug 5 22:00:46.914494 systemd[1]: Reached target swap.target - Swaps. Aug 5 22:00:46.914505 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 5 22:00:46.914519 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 5 22:00:46.914530 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 5 22:00:46.914540 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 5 22:00:46.914551 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:00:46.914569 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 22:00:46.914580 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:00:46.914591 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 5 22:00:46.914603 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 5 22:00:46.914614 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 5 22:00:46.914625 systemd[1]: Mounting media.mount - External Media Directory... Aug 5 22:00:46.914636 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 5 22:00:46.914647 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Aug 5 22:00:46.914657 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 5 22:00:46.914668 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 5 22:00:46.914679 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:00:46.914691 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 22:00:46.914702 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 5 22:00:46.914713 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 22:00:46.914723 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 5 22:00:46.914734 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 22:00:46.914744 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 5 22:00:46.914755 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 22:00:46.914765 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 5 22:00:46.914776 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 5 22:00:46.914789 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Aug 5 22:00:46.914805 kernel: loop: module loaded Aug 5 22:00:46.914815 kernel: fuse: init (API version 7.39) Aug 5 22:00:46.914825 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 22:00:46.914835 kernel: ACPI: bus type drm_connector registered Aug 5 22:00:46.914845 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 22:00:46.914866 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 5 22:00:46.914878 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 5 22:00:46.914888 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 22:00:46.914917 systemd-journald[1141]: Collecting audit messages is disabled. Aug 5 22:00:46.914938 systemd-journald[1141]: Journal started Aug 5 22:00:46.914960 systemd-journald[1141]: Runtime Journal (/run/log/journal/fcc1a3352e4d44a79d7e79abadc7c1b0) is 5.9M, max 47.3M, 41.4M free. Aug 5 22:00:46.917208 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 22:00:46.918147 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 5 22:00:46.919162 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 5 22:00:46.920288 systemd[1]: Mounted media.mount - External Media Directory. Aug 5 22:00:46.921278 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 5 22:00:46.922378 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 5 22:00:46.923468 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 5 22:00:46.924732 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 5 22:00:46.926093 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:00:46.927445 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 5 22:00:46.927602 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Aug 5 22:00:46.929000 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 22:00:46.929150 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:00:46.930393 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 22:00:46.930543 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 5 22:00:46.931685 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:00:46.931866 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 22:00:46.933260 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 5 22:00:46.933419 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 5 22:00:46.934639 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:00:46.934850 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:00:46.936351 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 22:00:46.937873 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 5 22:00:46.939190 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 5 22:00:46.950641 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 5 22:00:46.962978 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 5 22:00:46.964888 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 5 22:00:46.965878 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 5 22:00:46.967541 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 5 22:00:46.969497 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 5 22:00:46.970527 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 5 22:00:46.974018 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 5 22:00:46.975154 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 22:00:46.978715 systemd-journald[1141]: Time spent on flushing to /var/log/journal/fcc1a3352e4d44a79d7e79abadc7c1b0 is 17.518ms for 846 entries. Aug 5 22:00:46.978715 systemd-journald[1141]: System Journal (/var/log/journal/fcc1a3352e4d44a79d7e79abadc7c1b0) is 8.0M, max 195.6M, 187.6M free. Aug 5 22:00:47.005655 systemd-journald[1141]: Received client request to flush runtime journal. Aug 5 22:00:46.979019 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 22:00:46.982194 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 5 22:00:46.984808 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:00:46.986372 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 5 22:00:46.987553 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 5 22:00:46.989184 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 5 22:00:46.995305 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
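For scale, the journald flush figures above work out as follows; a trivial sketch using only the numbers that appear in the log.

    # Figures from the systemd-journald flush message above.
    FLUSH_MS = 17.518   # time spent flushing to the persistent journal
    ENTRIES = 846       # entries flushed

    print(f"{FLUSH_MS / ENTRIES * 1000:.1f} us per entry on average")  # ~20.7 us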
Aug 5 22:00:46.999010 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 5 22:00:47.009052 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 5 22:00:47.013567 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:00:47.017341 udevadm[1203]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 5 22:00:47.024274 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Aug 5 22:00:47.024290 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Aug 5 22:00:47.032276 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 22:00:47.042140 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 5 22:00:47.060447 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 5 22:00:47.068987 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 5 22:00:47.082178 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Aug 5 22:00:47.082449 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Aug 5 22:00:47.086977 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:00:47.399398 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 5 22:00:47.411081 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:00:47.434205 systemd-udevd[1227]: Using default interface naming scheme 'v255'. Aug 5 22:00:47.453993 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:00:47.464057 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 22:00:47.485021 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 5 22:00:47.494777 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Aug 5 22:00:47.499234 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1231) Aug 5 22:00:47.499327 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1246) Aug 5 22:00:47.533287 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 5 22:00:47.546412 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 5 22:00:47.597117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:00:47.609395 systemd-networkd[1234]: lo: Link UP Aug 5 22:00:47.609407 systemd-networkd[1234]: lo: Gained carrier Aug 5 22:00:47.610379 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 5 22:00:47.611890 systemd-networkd[1234]: Enumeration completed Aug 5 22:00:47.612457 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 22:00:47.616197 systemd-networkd[1234]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:00:47.616207 systemd-networkd[1234]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 22:00:47.618031 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Aug 5 22:00:47.619186 systemd-networkd[1234]: eth0: Link UP Aug 5 22:00:47.619195 systemd-networkd[1234]: eth0: Gained carrier Aug 5 22:00:47.619208 systemd-networkd[1234]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:00:47.621694 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 5 22:00:47.639400 lvm[1265]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 5 22:00:47.646938 systemd-networkd[1234]: eth0: DHCPv4 address 10.0.0.149/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 5 22:00:47.659980 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:00:47.672385 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 5 22:00:47.673896 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:00:47.683056 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 5 22:00:47.687286 lvm[1273]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 5 22:00:47.715719 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 5 22:00:47.717162 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 5 22:00:47.718532 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 5 22:00:47.718570 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 22:00:47.719610 systemd[1]: Reached target machines.target - Containers. Aug 5 22:00:47.721556 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 5 22:00:47.740050 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 5 22:00:47.742421 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 5 22:00:47.743548 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:00:47.744524 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 5 22:00:47.747889 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 5 22:00:47.751686 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 5 22:00:47.757628 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 5 22:00:47.763870 kernel: loop0: detected capacity change from 0 to 193208 Aug 5 22:00:47.763961 kernel: block loop0: the capability attribute has been deprecated. Aug 5 22:00:47.765074 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 5 22:00:47.777296 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 5 22:00:47.777995 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
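The DHCPv4 lease logged above (10.0.0.149/16 with gateway 10.0.0.1) can be expanded with Python's standard ipaddress module; a small sketch using only the values that appear in the log.

    import ipaddress

    # Values from the systemd-networkd DHCPv4 line above.
    iface = ipaddress.ip_interface("10.0.0.149/16")
    gateway = ipaddress.ip_address("10.0.0.1")

    print(iface.network)                     # 10.0.0.0/16
    print(iface.network.netmask)             # 255.255.0.0
    print(iface.network.broadcast_address)   # 10.0.255.255
    print(gateway in iface.network)          # True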
Aug 5 22:00:47.781900 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 5 22:00:47.835882 kernel: loop1: detected capacity change from 0 to 59688 Aug 5 22:00:47.891873 kernel: loop2: detected capacity change from 0 to 113712 Aug 5 22:00:47.935898 kernel: loop3: detected capacity change from 0 to 193208 Aug 5 22:00:47.942905 kernel: loop4: detected capacity change from 0 to 59688 Aug 5 22:00:47.948868 kernel: loop5: detected capacity change from 0 to 113712 Aug 5 22:00:47.951576 (sd-merge)[1294]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 5 22:00:47.951992 (sd-merge)[1294]: Merged extensions into '/usr'. Aug 5 22:00:47.956622 systemd[1]: Reloading requested from client PID 1281 ('systemd-sysext') (unit systemd-sysext.service)... Aug 5 22:00:47.956637 systemd[1]: Reloading... Aug 5 22:00:48.000446 zram_generator::config[1321]: No configuration found. Aug 5 22:00:48.037553 ldconfig[1277]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 5 22:00:48.101119 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:00:48.145517 systemd[1]: Reloading finished in 188 ms. Aug 5 22:00:48.160609 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 5 22:00:48.162072 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 5 22:00:48.179064 systemd[1]: Starting ensure-sysext.service... Aug 5 22:00:48.180909 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 5 22:00:48.186088 systemd[1]: Reloading requested from client PID 1362 ('systemctl') (unit ensure-sysext.service)... Aug 5 22:00:48.186103 systemd[1]: Reloading... Aug 5 22:00:48.196152 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 5 22:00:48.196394 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 5 22:00:48.197094 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 5 22:00:48.197313 systemd-tmpfiles[1372]: ACLs are not supported, ignoring. Aug 5 22:00:48.197362 systemd-tmpfiles[1372]: ACLs are not supported, ignoring. Aug 5 22:00:48.199348 systemd-tmpfiles[1372]: Detected autofs mount point /boot during canonicalization of boot. Aug 5 22:00:48.199361 systemd-tmpfiles[1372]: Skipping /boot Aug 5 22:00:48.205259 systemd-tmpfiles[1372]: Detected autofs mount point /boot during canonicalization of boot. Aug 5 22:00:48.205272 systemd-tmpfiles[1372]: Skipping /boot Aug 5 22:00:48.227920 zram_generator::config[1399]: No configuration found. Aug 5 22:00:48.314907 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:00:48.358384 systemd[1]: Reloading finished in 171 ms. Aug 5 22:00:48.371432 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:00:48.386618 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 22:00:48.388983 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Aug 5 22:00:48.391066 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 5 22:00:48.394008 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 22:00:48.399031 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 5 22:00:48.403596 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:00:48.404779 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 22:00:48.407365 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 22:00:48.411842 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 22:00:48.416007 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:00:48.416773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 22:00:48.416983 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:00:48.425149 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:00:48.425393 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:00:48.430495 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 5 22:00:48.432701 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 5 22:00:48.434846 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:00:48.435128 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 22:00:48.440705 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:00:48.449168 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 22:00:48.452103 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 22:00:48.454523 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 22:00:48.458455 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:00:48.461915 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 5 22:00:48.464180 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 5 22:00:48.466031 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 22:00:48.466258 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:00:48.467647 augenrules[1478]: No rules Aug 5 22:00:48.467958 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:00:48.468097 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 22:00:48.469868 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:00:48.472044 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:00:48.473704 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 22:00:48.475305 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 5 22:00:48.483441 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:00:48.483512 systemd-resolved[1446]: Positive Trust Anchors: Aug 5 22:00:48.485280 systemd-resolved[1446]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 22:00:48.485315 systemd-resolved[1446]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 22:00:48.490832 systemd-resolved[1446]: Defaulting to hostname 'linux'. Aug 5 22:00:48.494140 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 22:00:48.496254 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 5 22:00:48.498334 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 22:00:48.500546 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 22:00:48.501703 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:00:48.501869 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 5 22:00:48.502496 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 22:00:48.504437 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 22:00:48.504582 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:00:48.506341 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 22:00:48.506494 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 5 22:00:48.508081 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:00:48.508369 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 22:00:48.510130 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:00:48.510338 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:00:48.513452 systemd[1]: Finished ensure-sysext.service. Aug 5 22:00:48.517896 systemd[1]: Reached target network.target - Network. Aug 5 22:00:48.518823 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:00:48.520148 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 5 22:00:48.520209 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 22:00:48.530011 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 5 22:00:48.572738 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 5 22:00:48.573456 systemd-timesyncd[1513]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 5 22:00:48.573505 systemd-timesyncd[1513]: Initial clock synchronization to Mon 2024-08-05 22:00:48.186122 UTC. Aug 5 22:00:48.574307 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 22:00:48.575356 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Aug 5 22:00:48.576544 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 5 22:00:48.577719 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 5 22:00:48.578933 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 5 22:00:48.578964 systemd[1]: Reached target paths.target - Path Units. Aug 5 22:00:48.579788 systemd[1]: Reached target time-set.target - System Time Set. Aug 5 22:00:48.581003 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 5 22:00:48.582144 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 5 22:00:48.583301 systemd[1]: Reached target timers.target - Timer Units. Aug 5 22:00:48.584630 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 5 22:00:48.587117 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 5 22:00:48.589402 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 5 22:00:48.594772 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 5 22:00:48.595748 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 22:00:48.596700 systemd[1]: Reached target basic.target - Basic System. Aug 5 22:00:48.597728 systemd[1]: System is tainted: cgroupsv1 Aug 5 22:00:48.597778 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 5 22:00:48.597808 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 5 22:00:48.598964 systemd[1]: Starting containerd.service - containerd container runtime... Aug 5 22:00:48.601009 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 5 22:00:48.603136 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 5 22:00:48.606922 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 5 22:00:48.609941 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 5 22:00:48.611143 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 5 22:00:48.617075 jq[1519]: false Aug 5 22:00:48.617450 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 5 22:00:48.620867 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 5 22:00:48.627020 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 5 22:00:48.629883 systemd[1]: Starting systemd-logind.service - User Login Management... 
Aug 5 22:00:48.632111 extend-filesystems[1521]: Found loop3 Aug 5 22:00:48.632111 extend-filesystems[1521]: Found loop4 Aug 5 22:00:48.633673 extend-filesystems[1521]: Found loop5 Aug 5 22:00:48.633673 extend-filesystems[1521]: Found vda Aug 5 22:00:48.633673 extend-filesystems[1521]: Found vda1 Aug 5 22:00:48.633673 extend-filesystems[1521]: Found vda2 Aug 5 22:00:48.633673 extend-filesystems[1521]: Found vda3 Aug 5 22:00:48.633673 extend-filesystems[1521]: Found usr Aug 5 22:00:48.633673 extend-filesystems[1521]: Found vda4 Aug 5 22:00:48.633673 extend-filesystems[1521]: Found vda6 Aug 5 22:00:48.633673 extend-filesystems[1521]: Found vda7 Aug 5 22:00:48.633673 extend-filesystems[1521]: Found vda9 Aug 5 22:00:48.633673 extend-filesystems[1521]: Checking size of /dev/vda9 Aug 5 22:00:48.645085 dbus-daemon[1518]: [system] SELinux support is enabled Aug 5 22:00:48.635773 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 5 22:00:48.639004 systemd[1]: Starting update-engine.service - Update Engine... Aug 5 22:00:48.642669 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 5 22:00:48.651123 jq[1542]: true Aug 5 22:00:48.650509 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 5 22:00:48.658074 extend-filesystems[1521]: Resized partition /dev/vda9 Aug 5 22:00:48.659800 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 5 22:00:48.660258 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 5 22:00:48.660423 extend-filesystems[1548]: resize2fs 1.47.0 (5-Feb-2023) Aug 5 22:00:48.660514 systemd[1]: motdgen.service: Deactivated successfully. Aug 5 22:00:48.660703 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 5 22:00:48.664476 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 5 22:00:48.664689 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 5 22:00:48.664970 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 5 22:00:48.672916 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1243) Aug 5 22:00:48.684134 jq[1551]: true Aug 5 22:00:48.685632 (ntainerd)[1552]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 5 22:00:48.696004 tar[1550]: linux-arm64/helm Aug 5 22:00:48.693592 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 5 22:00:48.693618 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 5 22:00:48.705418 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 5 22:00:48.704984 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 5 22:00:48.705006 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
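The EXT4-fs lines above record an online resize of /dev/vda9 from 553472 to 1864699 blocks; with the 4 KiB block size that resize2fs notes just below, that is roughly 2.1 GiB grown to about 7.1 GiB. A quick check:

    BLOCK_SIZE = 4096          # 4 KiB blocks, as noted by resize2fs
    OLD_BLOCKS = 553_472       # before the online resize
    NEW_BLOCKS = 1_864_699     # after "resized filesystem to 1864699"

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(OLD_BLOCKS):.2f} GiB")   # ~2.11 GiB
    print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")   # ~7.11 GiB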
Aug 5 22:00:48.723629 extend-filesystems[1548]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 5 22:00:48.723629 extend-filesystems[1548]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 5 22:00:48.723629 extend-filesystems[1548]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 5 22:00:48.729422 extend-filesystems[1521]: Resized filesystem in /dev/vda9 Aug 5 22:00:48.731973 update_engine[1537]: I0805 22:00:48.728693 1537 main.cc:92] Flatcar Update Engine starting Aug 5 22:00:48.726182 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 5 22:00:48.726409 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 5 22:00:48.733638 update_engine[1537]: I0805 22:00:48.733475 1537 update_check_scheduler.cc:74] Next update check in 10m6s Aug 5 22:00:48.734991 systemd[1]: Started update-engine.service - Update Engine. Aug 5 22:00:48.736570 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 5 22:00:48.750116 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 5 22:00:48.759593 systemd-logind[1535]: Watching system buttons on /dev/input/event0 (Power Button) Aug 5 22:00:48.759968 systemd-logind[1535]: New seat seat0. Aug 5 22:00:48.761941 systemd[1]: Started systemd-logind.service - User Login Management. Aug 5 22:00:48.773616 bash[1582]: Updated "/home/core/.ssh/authorized_keys" Aug 5 22:00:48.775945 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 5 22:00:48.781230 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 5 22:00:48.842374 locksmithd[1579]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 5 22:00:48.917965 containerd[1552]: time="2024-08-05T22:00:48.917878680Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Aug 5 22:00:48.952563 containerd[1552]: time="2024-08-05T22:00:48.952399200Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 5 22:00:48.952563 containerd[1552]: time="2024-08-05T22:00:48.952494880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:00:48.954055 containerd[1552]: time="2024-08-05T22:00:48.953993040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:00:48.954055 containerd[1552]: time="2024-08-05T22:00:48.954036920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:00:48.954330 containerd[1552]: time="2024-08-05T22:00:48.954293960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:00:48.954330 containerd[1552]: time="2024-08-05T22:00:48.954318560Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Aug 5 22:00:48.954417 containerd[1552]: time="2024-08-05T22:00:48.954397720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 5 22:00:48.954489 containerd[1552]: time="2024-08-05T22:00:48.954472920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:00:48.954510 containerd[1552]: time="2024-08-05T22:00:48.954489000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 5 22:00:48.954559 containerd[1552]: time="2024-08-05T22:00:48.954546440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:00:48.954768 containerd[1552]: time="2024-08-05T22:00:48.954741720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 5 22:00:48.954798 containerd[1552]: time="2024-08-05T22:00:48.954766720Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 5 22:00:48.954798 containerd[1552]: time="2024-08-05T22:00:48.954776880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:00:48.954951 containerd[1552]: time="2024-08-05T22:00:48.954931200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:00:48.954987 containerd[1552]: time="2024-08-05T22:00:48.954950440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 5 22:00:48.955028 containerd[1552]: time="2024-08-05T22:00:48.955011640Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 5 22:00:48.955052 containerd[1552]: time="2024-08-05T22:00:48.955029600Z" level=info msg="metadata content store policy set" policy=shared Aug 5 22:00:48.958761 containerd[1552]: time="2024-08-05T22:00:48.958727840Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 5 22:00:48.958761 containerd[1552]: time="2024-08-05T22:00:48.958763760Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 5 22:00:48.958842 containerd[1552]: time="2024-08-05T22:00:48.958776360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 5 22:00:48.958842 containerd[1552]: time="2024-08-05T22:00:48.958817160Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 5 22:00:48.958842 containerd[1552]: time="2024-08-05T22:00:48.958840200Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 5 22:00:48.958923 containerd[1552]: time="2024-08-05T22:00:48.958871040Z" level=info msg="NRI interface is disabled by configuration." Aug 5 22:00:48.958923 containerd[1552]: time="2024-08-05T22:00:48.958886160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Aug 5 22:00:48.959029 containerd[1552]: time="2024-08-05T22:00:48.959007280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 5 22:00:48.959068 containerd[1552]: time="2024-08-05T22:00:48.959030320Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 5 22:00:48.959068 containerd[1552]: time="2024-08-05T22:00:48.959043240Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 5 22:00:48.959068 containerd[1552]: time="2024-08-05T22:00:48.959062480Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 5 22:00:48.959119 containerd[1552]: time="2024-08-05T22:00:48.959076640Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 5 22:00:48.959119 containerd[1552]: time="2024-08-05T22:00:48.959094120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 5 22:00:48.959119 containerd[1552]: time="2024-08-05T22:00:48.959109640Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 5 22:00:48.959167 containerd[1552]: time="2024-08-05T22:00:48.959122400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 5 22:00:48.959167 containerd[1552]: time="2024-08-05T22:00:48.959138200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 5 22:00:48.959167 containerd[1552]: time="2024-08-05T22:00:48.959152040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 5 22:00:48.959167 containerd[1552]: time="2024-08-05T22:00:48.959163720Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 5 22:00:48.959233 containerd[1552]: time="2024-08-05T22:00:48.959175760Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 5 22:00:48.959285 containerd[1552]: time="2024-08-05T22:00:48.959269480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 5 22:00:48.960031 containerd[1552]: time="2024-08-05T22:00:48.959943560Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 5 22:00:48.960072 containerd[1552]: time="2024-08-05T22:00:48.960048680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 5 22:00:48.960125 containerd[1552]: time="2024-08-05T22:00:48.960107280Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 5 22:00:48.960150 containerd[1552]: time="2024-08-05T22:00:48.960142400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 5 22:00:48.960376 containerd[1552]: time="2024-08-05T22:00:48.960358880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 5 22:00:48.960400 containerd[1552]: time="2024-08-05T22:00:48.960383440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Aug 5 22:00:48.960516 containerd[1552]: time="2024-08-05T22:00:48.960499960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 5 22:00:48.960575 containerd[1552]: time="2024-08-05T22:00:48.960522040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 5 22:00:48.960596 containerd[1552]: time="2024-08-05T22:00:48.960582320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 5 22:00:48.960614 containerd[1552]: time="2024-08-05T22:00:48.960599000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 5 22:00:48.960632 containerd[1552]: time="2024-08-05T22:00:48.960611680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 5 22:00:48.960632 containerd[1552]: time="2024-08-05T22:00:48.960623800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 5 22:00:48.960696 containerd[1552]: time="2024-08-05T22:00:48.960680640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 5 22:00:48.961079 containerd[1552]: time="2024-08-05T22:00:48.961003720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 5 22:00:48.961115 containerd[1552]: time="2024-08-05T22:00:48.961088520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 5 22:00:48.961177 containerd[1552]: time="2024-08-05T22:00:48.961159520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 5 22:00:48.961197 containerd[1552]: time="2024-08-05T22:00:48.961184800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 5 22:00:48.961216 containerd[1552]: time="2024-08-05T22:00:48.961198800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 5 22:00:48.961216 containerd[1552]: time="2024-08-05T22:00:48.961213200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 5 22:00:48.961258 containerd[1552]: time="2024-08-05T22:00:48.961225240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 5 22:00:48.961291 containerd[1552]: time="2024-08-05T22:00:48.961237400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 5 22:00:48.962033 containerd[1552]: time="2024-08-05T22:00:48.961913360Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 5 22:00:48.962033 containerd[1552]: time="2024-08-05T22:00:48.962034240Z" level=info msg="Connect containerd service" Aug 5 22:00:48.962171 containerd[1552]: time="2024-08-05T22:00:48.962074040Z" level=info msg="using legacy CRI server" Aug 5 22:00:48.962171 containerd[1552]: time="2024-08-05T22:00:48.962084120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 5 22:00:48.962352 containerd[1552]: time="2024-08-05T22:00:48.962312960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 5 22:00:48.963524 containerd[1552]: time="2024-08-05T22:00:48.963485560Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 22:00:48.963700 
containerd[1552]: time="2024-08-05T22:00:48.963667320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 5 22:00:48.963943 containerd[1552]: time="2024-08-05T22:00:48.963697880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 5 22:00:48.963999 containerd[1552]: time="2024-08-05T22:00:48.963980640Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 5 22:00:48.964034 containerd[1552]: time="2024-08-05T22:00:48.964005800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 5 22:00:48.964543 containerd[1552]: time="2024-08-05T22:00:48.964525680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 5 22:00:48.964580 containerd[1552]: time="2024-08-05T22:00:48.964573080Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 5 22:00:48.966011 containerd[1552]: time="2024-08-05T22:00:48.964727640Z" level=info msg="Start subscribing containerd event" Aug 5 22:00:48.966011 containerd[1552]: time="2024-08-05T22:00:48.965165800Z" level=info msg="Start recovering state" Aug 5 22:00:48.966011 containerd[1552]: time="2024-08-05T22:00:48.965237120Z" level=info msg="Start event monitor" Aug 5 22:00:48.966011 containerd[1552]: time="2024-08-05T22:00:48.965248160Z" level=info msg="Start snapshots syncer" Aug 5 22:00:48.966011 containerd[1552]: time="2024-08-05T22:00:48.965257200Z" level=info msg="Start cni network conf syncer for default" Aug 5 22:00:48.966011 containerd[1552]: time="2024-08-05T22:00:48.965311440Z" level=info msg="Start streaming server" Aug 5 22:00:48.965628 systemd[1]: Started containerd.service - containerd container runtime. Aug 5 22:00:48.967511 containerd[1552]: time="2024-08-05T22:00:48.967480120Z" level=info msg="containerd successfully booted in 0.050917s" Aug 5 22:00:49.050007 tar[1550]: linux-arm64/LICENSE Aug 5 22:00:49.050007 tar[1550]: linux-arm64/README.md Aug 5 22:00:49.059122 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 5 22:00:49.108515 sshd_keygen[1543]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 5 22:00:49.127182 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 5 22:00:49.136076 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 5 22:00:49.141416 systemd[1]: issuegen.service: Deactivated successfully. Aug 5 22:00:49.141646 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 5 22:00:49.142985 systemd-networkd[1234]: eth0: Gained IPv6LL Aug 5 22:00:49.144566 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 5 22:00:49.146724 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 5 22:00:49.148615 systemd[1]: Reached target network-online.target - Network is Online. Aug 5 22:00:49.151031 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 5 22:00:49.153270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:00:49.158073 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 5 22:00:49.160719 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
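The "failed to load cni during init" warning above is containerd's CRI plugin reporting an empty /etc/cni/net.d; pod networking typically stays unconfigured until a network add-on drops a config there. A minimal sketch of inspecting that directory the same way, assuming the github.com/containernetworking/cni/libcni package is available (the restriction to .conflist files and the output format are illustrative, not how containerd itself does it):

package main

import (
	"fmt"
	"log"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// Same directory the CRI config above reports as NetworkPluginConfDir.
	files, err := libcni.ConfFiles("/etc/cni/net.d", []string{".conflist"})
	if err != nil {
		log.Fatal(err)
	}
	if len(files) == 0 {
		// Matches the warning in the log: no network config installed yet.
		fmt.Println("no CNI network config found in /etc/cni/net.d")
		return
	}
	for _, f := range files {
		list, err := libcni.ConfListFromFile(f)
		if err != nil {
			log.Printf("skipping %s: %v", f, err)
			continue
		}
		fmt.Printf("%s: network %q (cniVersion %s, %d plugins)\n",
			f, list.Name, list.CNIVersion, len(list.Plugins))
	}
}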
Aug 5 22:00:49.172849 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 5 22:00:49.178108 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Aug 5 22:00:49.179394 systemd[1]: Reached target getty.target - Login Prompts. Aug 5 22:00:49.181135 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 5 22:00:49.181338 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 5 22:00:49.183644 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 5 22:00:49.186405 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 5 22:00:49.625975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:00:49.627422 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 5 22:00:49.629209 systemd[1]: Startup finished in 5.210s (kernel) + 3.307s (userspace) = 8.518s. Aug 5 22:00:49.630569 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:00:50.097695 kubelet[1660]: E0805 22:00:50.097550 1660 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:00:50.100041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:00:50.100224 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:00:54.447438 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 5 22:00:54.464163 systemd[1]: Started sshd@0-10.0.0.149:22-10.0.0.1:32830.service - OpenSSH per-connection server daemon (10.0.0.1:32830). Aug 5 22:00:54.520678 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 32830 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:00:54.522532 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:00:54.532058 systemd-logind[1535]: New session 1 of user core. Aug 5 22:00:54.533087 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 5 22:00:54.545097 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 5 22:00:54.556755 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 5 22:00:54.559823 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 5 22:00:54.567403 (systemd)[1680]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:00:54.636781 systemd[1680]: Queued start job for default target default.target. Aug 5 22:00:54.637156 systemd[1680]: Created slice app.slice - User Application Slice. Aug 5 22:00:54.637192 systemd[1680]: Reached target paths.target - Paths. Aug 5 22:00:54.637204 systemd[1680]: Reached target timers.target - Timers. Aug 5 22:00:54.646936 systemd[1680]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 5 22:00:54.652844 systemd[1680]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 5 22:00:54.652909 systemd[1680]: Reached target sockets.target - Sockets. Aug 5 22:00:54.652921 systemd[1680]: Reached target basic.target - Basic System. Aug 5 22:00:54.652956 systemd[1680]: Reached target default.target - Main User Target. 
Aug 5 22:00:54.652978 systemd[1680]: Startup finished in 80ms. Aug 5 22:00:54.653256 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 5 22:00:54.654685 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 5 22:00:54.710118 systemd[1]: Started sshd@1-10.0.0.149:22-10.0.0.1:32846.service - OpenSSH per-connection server daemon (10.0.0.1:32846). Aug 5 22:00:54.745134 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 32846 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:00:54.746605 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:00:54.750478 systemd-logind[1535]: New session 2 of user core. Aug 5 22:00:54.770191 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 5 22:00:54.823520 sshd[1692]: pam_unix(sshd:session): session closed for user core Aug 5 22:00:54.826442 systemd[1]: sshd@1-10.0.0.149:22-10.0.0.1:32846.service: Deactivated successfully. Aug 5 22:00:54.828341 systemd-logind[1535]: Session 2 logged out. Waiting for processes to exit. Aug 5 22:00:54.846182 systemd[1]: Started sshd@2-10.0.0.149:22-10.0.0.1:32856.service - OpenSSH per-connection server daemon (10.0.0.1:32856). Aug 5 22:00:54.846559 systemd[1]: session-2.scope: Deactivated successfully. Aug 5 22:00:54.847565 systemd-logind[1535]: Removed session 2. Aug 5 22:00:54.875706 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 32856 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:00:54.876978 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:00:54.881475 systemd-logind[1535]: New session 3 of user core. Aug 5 22:00:54.895127 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 5 22:00:54.943406 sshd[1700]: pam_unix(sshd:session): session closed for user core Aug 5 22:00:54.955122 systemd[1]: Started sshd@3-10.0.0.149:22-10.0.0.1:32858.service - OpenSSH per-connection server daemon (10.0.0.1:32858). Aug 5 22:00:54.955492 systemd[1]: sshd@2-10.0.0.149:22-10.0.0.1:32856.service: Deactivated successfully. Aug 5 22:00:54.957241 systemd-logind[1535]: Session 3 logged out. Waiting for processes to exit. Aug 5 22:00:54.957726 systemd[1]: session-3.scope: Deactivated successfully. Aug 5 22:00:54.959054 systemd-logind[1535]: Removed session 3. Aug 5 22:00:54.984483 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 32858 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:00:54.985658 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:00:54.989541 systemd-logind[1535]: New session 4 of user core. Aug 5 22:00:55.001090 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 5 22:00:55.051088 sshd[1705]: pam_unix(sshd:session): session closed for user core Aug 5 22:00:55.064075 systemd[1]: Started sshd@4-10.0.0.149:22-10.0.0.1:32860.service - OpenSSH per-connection server daemon (10.0.0.1:32860). Aug 5 22:00:55.064436 systemd[1]: sshd@3-10.0.0.149:22-10.0.0.1:32858.service: Deactivated successfully. Aug 5 22:00:55.066101 systemd-logind[1535]: Session 4 logged out. Waiting for processes to exit. Aug 5 22:00:55.066623 systemd[1]: session-4.scope: Deactivated successfully. Aug 5 22:00:55.067759 systemd-logind[1535]: Removed session 4. 
Aug 5 22:00:55.092761 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 32860 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:00:55.093933 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:00:55.097423 systemd-logind[1535]: New session 5 of user core. Aug 5 22:00:55.112139 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 5 22:00:55.181178 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 5 22:00:55.181430 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:00:55.195615 sudo[1720]: pam_unix(sudo:session): session closed for user root Aug 5 22:00:55.197508 sshd[1713]: pam_unix(sshd:session): session closed for user core Aug 5 22:00:55.209123 systemd[1]: Started sshd@5-10.0.0.149:22-10.0.0.1:32874.service - OpenSSH per-connection server daemon (10.0.0.1:32874). Aug 5 22:00:55.209498 systemd[1]: sshd@4-10.0.0.149:22-10.0.0.1:32860.service: Deactivated successfully. Aug 5 22:00:55.211917 systemd[1]: session-5.scope: Deactivated successfully. Aug 5 22:00:55.212555 systemd-logind[1535]: Session 5 logged out. Waiting for processes to exit. Aug 5 22:00:55.213484 systemd-logind[1535]: Removed session 5. Aug 5 22:00:55.238385 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 32874 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:00:55.239520 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:00:55.243360 systemd-logind[1535]: New session 6 of user core. Aug 5 22:00:55.254129 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 5 22:00:55.305273 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 5 22:00:55.305820 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:00:55.308718 sudo[1730]: pam_unix(sudo:session): session closed for user root Aug 5 22:00:55.313191 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 5 22:00:55.313412 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:00:55.336505 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 5 22:00:55.337657 auditctl[1733]: No rules Aug 5 22:00:55.338069 systemd[1]: audit-rules.service: Deactivated successfully. Aug 5 22:00:55.338294 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 5 22:00:55.341224 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 22:00:55.363529 augenrules[1752]: No rules Aug 5 22:00:55.364782 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 22:00:55.365896 sudo[1729]: pam_unix(sudo:session): session closed for user root Aug 5 22:00:55.367464 sshd[1722]: pam_unix(sshd:session): session closed for user core Aug 5 22:00:55.373049 systemd[1]: Started sshd@6-10.0.0.149:22-10.0.0.1:32888.service - OpenSSH per-connection server daemon (10.0.0.1:32888). Aug 5 22:00:55.373429 systemd[1]: sshd@5-10.0.0.149:22-10.0.0.1:32874.service: Deactivated successfully. Aug 5 22:00:55.374900 systemd[1]: session-6.scope: Deactivated successfully. Aug 5 22:00:55.375476 systemd-logind[1535]: Session 6 logged out. Waiting for processes to exit. Aug 5 22:00:55.376594 systemd-logind[1535]: Removed session 6. 
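The sshd entries above record repeated public-key logins for user core from 10.0.0.1 to 10.0.0.149:22, each followed by a short session and a sudo. For illustration only, a client performing the same kind of key-based login with golang.org/x/crypto/ssh; the key path, the host-key handling, and the command run are assumptions, not taken from the log:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical key path; any key whose public half sshd accepts will do.
	keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // verify host keys in real use
	}

	client, err := ssh.Dial("tcp", "10.0.0.149:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.Output("uptime")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}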
Aug 5 22:00:55.400845 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 32888 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:00:55.402033 sshd[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:00:55.409470 systemd-logind[1535]: New session 7 of user core. Aug 5 22:00:55.426156 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 5 22:00:55.475288 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 5 22:00:55.475516 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:00:55.588094 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 5 22:00:55.588341 (dockerd)[1778]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 5 22:00:55.843683 dockerd[1778]: time="2024-08-05T22:00:55.843543254Z" level=info msg="Starting up" Aug 5 22:00:56.018469 dockerd[1778]: time="2024-08-05T22:00:56.018318536Z" level=info msg="Loading containers: start." Aug 5 22:00:56.098891 kernel: Initializing XFRM netlink socket Aug 5 22:00:56.165950 systemd-networkd[1234]: docker0: Link UP Aug 5 22:00:56.206322 dockerd[1778]: time="2024-08-05T22:00:56.206265572Z" level=info msg="Loading containers: done." Aug 5 22:00:56.258678 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1292485998-merged.mount: Deactivated successfully. Aug 5 22:00:56.260216 dockerd[1778]: time="2024-08-05T22:00:56.259989345Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 5 22:00:56.260216 dockerd[1778]: time="2024-08-05T22:00:56.260173575Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Aug 5 22:00:56.260318 dockerd[1778]: time="2024-08-05T22:00:56.260278973Z" level=info msg="Daemon has completed initialization" Aug 5 22:00:56.287752 dockerd[1778]: time="2024-08-05T22:00:56.287702693Z" level=info msg="API listen on /run/docker.sock" Aug 5 22:00:56.288456 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 5 22:00:56.862708 containerd[1552]: time="2024-08-05T22:00:56.862664313Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\"" Aug 5 22:00:57.525943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2891573809.mount: Deactivated successfully. 
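dockerd above reports version 24.0.9 on the overlay2 storage driver with its API listening on /run/docker.sock. A minimal sketch of querying that endpoint, assuming the github.com/docker/docker Go client is available; the version-negotiation option is an illustrative choice rather than anything the log mandates:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(
		client.WithHost("unix:///run/docker.sock"), // the socket the daemon logs above
		client.WithAPIVersionNegotiation(),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	v, err := cli.ServerVersion(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	// Against the daemon in this log this would report 24.0.9.
	fmt.Printf("docker %s (API %s)\n", v.Version, v.APIVersion)
}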
Aug 5 22:01:00.086032 containerd[1552]: time="2024-08-05T22:01:00.085969645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:00.086429 containerd[1552]: time="2024-08-05T22:01:00.086394617Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.12: active requests=0, bytes read=31601518" Aug 5 22:01:00.087248 containerd[1552]: time="2024-08-05T22:01:00.087220036Z" level=info msg="ImageCreate event name:\"sha256:57305d93b5cb5db7c2dd71c2936b30c6c300a568c571d915f30e2677e4472260\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:00.090950 containerd[1552]: time="2024-08-05T22:01:00.090897081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:00.092487 containerd[1552]: time="2024-08-05T22:01:00.092280967Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.12\" with image id \"sha256:57305d93b5cb5db7c2dd71c2936b30c6c300a568c571d915f30e2677e4472260\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\", size \"31598316\" in 3.229158616s" Aug 5 22:01:00.092487 containerd[1552]: time="2024-08-05T22:01:00.092317872Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\" returns image reference \"sha256:57305d93b5cb5db7c2dd71c2936b30c6c300a568c571d915f30e2677e4472260\"" Aug 5 22:01:00.110768 containerd[1552]: time="2024-08-05T22:01:00.110707768Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\"" Aug 5 22:01:00.350434 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 5 22:01:00.367040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:01:00.452794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:01:00.456663 (kubelet)[1988]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:01:00.568995 kubelet[1988]: E0805 22:01:00.568918 1988 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:01:00.573277 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:01:00.573459 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
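Both kubelet failures so far are the same condition: /var/lib/kubelet/config.yaml does not exist yet. The log only reports the missing file; that it is normally written later (for example by kubeadm during init or join) is an inference. A sketch of distinguishing that not-yet-initialized state from a genuine decode error, assuming the k8s.io/kubelet/config/v1beta1 types and sigs.k8s.io/yaml; this is not how the kubelet itself loads its config:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"log"
	"os"

	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"

	data, err := os.ReadFile(path)
	if errors.Is(err, fs.ErrNotExist) {
		// The state this boot log shows: the unit keeps restarting and
		// failing until something writes the config file.
		fmt.Println("kubelet config not written yet; node not initialized")
		return
	}
	if err != nil {
		log.Fatal(err)
	}

	var cfg kubeletv1beta1.KubeletConfiguration
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		log.Fatalf("decode %s: %v", path, err)
	}
	fmt.Printf("kind=%s staticPodPath=%s\n", cfg.Kind, cfg.StaticPodPath)
}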
Aug 5 22:01:01.925614 containerd[1552]: time="2024-08-05T22:01:01.925558589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:01.926383 containerd[1552]: time="2024-08-05T22:01:01.926345006Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.12: active requests=0, bytes read=29018272" Aug 5 22:01:01.927059 containerd[1552]: time="2024-08-05T22:01:01.927032002Z" level=info msg="ImageCreate event name:\"sha256:fc5c912cb9569e3e61d6507db0c88360a3e23d7e0cfc589aefe633e02aed582a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:01.930053 containerd[1552]: time="2024-08-05T22:01:01.930005010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:01.931124 containerd[1552]: time="2024-08-05T22:01:01.931099949Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.12\" with image id \"sha256:fc5c912cb9569e3e61d6507db0c88360a3e23d7e0cfc589aefe633e02aed582a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\", size \"30505537\" in 1.820187989s" Aug 5 22:01:01.931177 containerd[1552]: time="2024-08-05T22:01:01.931130369Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\" returns image reference \"sha256:fc5c912cb9569e3e61d6507db0c88360a3e23d7e0cfc589aefe633e02aed582a\"" Aug 5 22:01:01.949992 containerd[1552]: time="2024-08-05T22:01:01.949943457Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\"" Aug 5 22:01:03.130028 containerd[1552]: time="2024-08-05T22:01:03.129959407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:03.130739 containerd[1552]: time="2024-08-05T22:01:03.130704007Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.12: active requests=0, bytes read=15534522" Aug 5 22:01:03.131347 containerd[1552]: time="2024-08-05T22:01:03.131313741Z" level=info msg="ImageCreate event name:\"sha256:662db3bc8add7dd68943303fde6906c5c4b372a71ed52107b4272181f3041869\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:03.134113 containerd[1552]: time="2024-08-05T22:01:03.134087599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:03.135267 containerd[1552]: time="2024-08-05T22:01:03.135213769Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.12\" with image id \"sha256:662db3bc8add7dd68943303fde6906c5c4b372a71ed52107b4272181f3041869\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\", size \"17021805\" in 1.185225334s" Aug 5 22:01:03.135267 containerd[1552]: time="2024-08-05T22:01:03.135248825Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\" returns image reference \"sha256:662db3bc8add7dd68943303fde6906c5c4b372a71ed52107b4272181f3041869\"" Aug 5 22:01:03.155381 
containerd[1552]: time="2024-08-05T22:01:03.155346036Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\"" Aug 5 22:01:04.240947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2439167854.mount: Deactivated successfully. Aug 5 22:01:04.558877 containerd[1552]: time="2024-08-05T22:01:04.558731645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:04.559869 containerd[1552]: time="2024-08-05T22:01:04.559802763Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.12: active requests=0, bytes read=24977921" Aug 5 22:01:04.560708 containerd[1552]: time="2024-08-05T22:01:04.560666325Z" level=info msg="ImageCreate event name:\"sha256:d3c27a9ad523d0e17d8e5f3f587a49f9c4b611f30f1851fe0bc1240e53a2084b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:04.562874 containerd[1552]: time="2024-08-05T22:01:04.562759404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:04.563529 containerd[1552]: time="2024-08-05T22:01:04.563322621Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.12\" with image id \"sha256:d3c27a9ad523d0e17d8e5f3f587a49f9c4b611f30f1851fe0bc1240e53a2084b\", repo tag \"registry.k8s.io/kube-proxy:v1.28.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\", size \"24976938\" in 1.407937035s" Aug 5 22:01:04.563529 containerd[1552]: time="2024-08-05T22:01:04.563358386Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\" returns image reference \"sha256:d3c27a9ad523d0e17d8e5f3f587a49f9c4b611f30f1851fe0bc1240e53a2084b\"" Aug 5 22:01:04.582122 containerd[1552]: time="2024-08-05T22:01:04.582084469Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Aug 5 22:01:05.017612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1466256758.mount: Deactivated successfully. 
Aug 5 22:01:05.021935 containerd[1552]: time="2024-08-05T22:01:05.021887132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:05.022327 containerd[1552]: time="2024-08-05T22:01:05.022286317Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Aug 5 22:01:05.023245 containerd[1552]: time="2024-08-05T22:01:05.023214263Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:05.025414 containerd[1552]: time="2024-08-05T22:01:05.025382719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:05.027011 containerd[1552]: time="2024-08-05T22:01:05.026973813Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 444.743336ms" Aug 5 22:01:05.027044 containerd[1552]: time="2024-08-05T22:01:05.027015295Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Aug 5 22:01:05.046933 containerd[1552]: time="2024-08-05T22:01:05.046898843Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Aug 5 22:01:05.605819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3888722722.mount: Deactivated successfully. 
Aug 5 22:01:08.099469 containerd[1552]: time="2024-08-05T22:01:08.099417917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:08.099945 containerd[1552]: time="2024-08-05T22:01:08.099880060Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Aug 5 22:01:08.100823 containerd[1552]: time="2024-08-05T22:01:08.100793550Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:08.104812 containerd[1552]: time="2024-08-05T22:01:08.104773094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:08.105962 containerd[1552]: time="2024-08-05T22:01:08.105921520Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.058983606s" Aug 5 22:01:08.105962 containerd[1552]: time="2024-08-05T22:01:08.105960650Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Aug 5 22:01:08.126881 containerd[1552]: time="2024-08-05T22:01:08.126645628Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Aug 5 22:01:08.698339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount529991336.mount: Deactivated successfully. 
Aug 5 22:01:10.349479 containerd[1552]: time="2024-08-05T22:01:10.349428711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:10.349917 containerd[1552]: time="2024-08-05T22:01:10.349882297Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558464" Aug 5 22:01:10.350872 containerd[1552]: time="2024-08-05T22:01:10.350807737Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:10.352928 containerd[1552]: time="2024-08-05T22:01:10.352896835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:10.353819 containerd[1552]: time="2024-08-05T22:01:10.353739358Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 2.227055309s" Aug 5 22:01:10.353819 containerd[1552]: time="2024-08-05T22:01:10.353770785Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Aug 5 22:01:10.644426 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 5 22:01:10.653064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:01:10.738946 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:01:10.743360 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:01:10.789139 kubelet[2167]: E0805 22:01:10.789084 2167 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:01:10.792170 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:01:10.793276 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:01:15.077448 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:01:15.089075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:01:15.103734 systemd[1]: Reloading requested from client PID 2204 ('systemctl') (unit session-7.scope)... Aug 5 22:01:15.103747 systemd[1]: Reloading... Aug 5 22:01:15.163962 zram_generator::config[2242]: No configuration found. Aug 5 22:01:15.269356 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:01:15.318899 systemd[1]: Reloading finished in 214 ms. 
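The kubelet that comes up after this reload drives containerd over the CRI gRPC API on the same socket; the RunPodSandbox and CreateContainer entries further down are that interface in action. A minimal sketch of querying the endpoint, assuming k8s.io/cri-api and google.golang.org/grpc; the dial options and output are illustrative:

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Endpoint taken from the CRI config logged earlier
	// (ContainerdEndpoint:/run/containerd/containerd.sock).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.Version(context.Background(), &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	// Against the runtime in this log this reports containerd v1.7.18.
	fmt.Printf("runtime %s %s (CRI %s)\n",
		resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}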
Aug 5 22:01:15.350779 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 5 22:01:15.350839 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 5 22:01:15.351101 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:01:15.353110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:01:15.451736 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:01:15.455436 (kubelet)[2299]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 22:01:15.497394 kubelet[2299]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:01:15.497394 kubelet[2299]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 22:01:15.497394 kubelet[2299]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:01:15.498244 kubelet[2299]: I0805 22:01:15.498193 2299 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 22:01:16.028110 kubelet[2299]: I0805 22:01:16.028072 2299 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Aug 5 22:01:16.029203 kubelet[2299]: I0805 22:01:16.028260 2299 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 22:01:16.029203 kubelet[2299]: I0805 22:01:16.028468 2299 server.go:895] "Client rotation is on, will bootstrap in background" Aug 5 22:01:16.084094 kubelet[2299]: I0805 22:01:16.084020 2299 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:01:16.085169 kubelet[2299]: E0805 22:01:16.085153 2299 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.149:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.149:6443: connect: connection refused Aug 5 22:01:16.094991 kubelet[2299]: W0805 22:01:16.094951 2299 machine.go:65] Cannot read vendor id correctly, set empty. Aug 5 22:01:16.096355 kubelet[2299]: I0805 22:01:16.096334 2299 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 5 22:01:16.096679 kubelet[2299]: I0805 22:01:16.096657 2299 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 22:01:16.096862 kubelet[2299]: I0805 22:01:16.096839 2299 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 22:01:16.096950 kubelet[2299]: I0805 22:01:16.096882 2299 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 22:01:16.096950 kubelet[2299]: I0805 22:01:16.096891 2299 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 22:01:16.097148 kubelet[2299]: I0805 22:01:16.097123 2299 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:01:16.098816 kubelet[2299]: I0805 22:01:16.098793 2299 kubelet.go:393] "Attempting to sync node with API server" Aug 5 22:01:16.098849 kubelet[2299]: I0805 22:01:16.098818 2299 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 22:01:16.098960 kubelet[2299]: I0805 22:01:16.098943 2299 kubelet.go:309] "Adding apiserver pod source" Aug 5 22:01:16.098960 kubelet[2299]: I0805 22:01:16.098958 2299 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 22:01:16.099544 kubelet[2299]: W0805 22:01:16.099284 2299 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 5 22:01:16.099544 kubelet[2299]: E0805 22:01:16.099350 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 5 22:01:16.099544 kubelet[2299]: W0805 22:01:16.099474 2299 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 5 
22:01:16.099544 kubelet[2299]: E0805 22:01:16.099513 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 5 22:01:16.100460 kubelet[2299]: I0805 22:01:16.100445 2299 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 5 22:01:16.105353 kubelet[2299]: W0805 22:01:16.105320 2299 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 5 22:01:16.106092 kubelet[2299]: I0805 22:01:16.106077 2299 server.go:1232] "Started kubelet" Aug 5 22:01:16.106631 kubelet[2299]: I0805 22:01:16.106282 2299 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Aug 5 22:01:16.106631 kubelet[2299]: I0805 22:01:16.106524 2299 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 22:01:16.106631 kubelet[2299]: I0805 22:01:16.106541 2299 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 22:01:16.107363 kubelet[2299]: I0805 22:01:16.107348 2299 server.go:462] "Adding debug handlers to kubelet server" Aug 5 22:01:16.108748 kubelet[2299]: I0805 22:01:16.107646 2299 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 22:01:16.108748 kubelet[2299]: I0805 22:01:16.108076 2299 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 22:01:16.108748 kubelet[2299]: E0805 22:01:16.108437 2299 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:01:16.108748 kubelet[2299]: I0805 22:01:16.108461 2299 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 5 22:01:16.108748 kubelet[2299]: I0805 22:01:16.108626 2299 reconciler_new.go:29] "Reconciler: start to sync state" Aug 5 22:01:16.109159 kubelet[2299]: E0805 22:01:16.109063 2299 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="200ms" Aug 5 22:01:16.109159 kubelet[2299]: W0805 22:01:16.109098 2299 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 5 22:01:16.109159 kubelet[2299]: E0805 22:01:16.109141 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 5 22:01:16.110401 kubelet[2299]: E0805 22:01:16.110334 2299 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Aug 5 22:01:16.110401 kubelet[2299]: E0805 22:01:16.110366 2299 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 22:01:16.120165 kubelet[2299]: E0805 22:01:16.120057 2299 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17e8f420157ff3bf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.August, 5, 22, 1, 16, 106052543, time.Local), LastTimestamp:time.Date(2024, time.August, 5, 22, 1, 16, 106052543, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.149:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.149:6443: connect: connection refused'(may retry after sleeping) Aug 5 22:01:16.123830 kubelet[2299]: I0805 22:01:16.123776 2299 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 22:01:16.126935 kubelet[2299]: I0805 22:01:16.124689 2299 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 5 22:01:16.126935 kubelet[2299]: I0805 22:01:16.124716 2299 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 22:01:16.126935 kubelet[2299]: I0805 22:01:16.124735 2299 kubelet.go:2303] "Starting kubelet main sync loop" Aug 5 22:01:16.126935 kubelet[2299]: E0805 22:01:16.124799 2299 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 22:01:16.131967 kubelet[2299]: W0805 22:01:16.131914 2299 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 5 22:01:16.132116 kubelet[2299]: E0805 22:01:16.132102 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 5 22:01:16.153675 kubelet[2299]: I0805 22:01:16.153648 2299 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 22:01:16.153675 kubelet[2299]: I0805 22:01:16.153671 2299 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 22:01:16.153813 kubelet[2299]: I0805 22:01:16.153689 2299 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:01:16.168191 kubelet[2299]: I0805 22:01:16.168150 2299 policy_none.go:49] "None policy: Start" Aug 5 22:01:16.168836 kubelet[2299]: I0805 22:01:16.168800 2299 memory_manager.go:169] "Starting memorymanager" policy="None" Aug 5 22:01:16.168836 
kubelet[2299]: I0805 22:01:16.168830 2299 state_mem.go:35] "Initializing new in-memory state store" Aug 5 22:01:16.174150 kubelet[2299]: I0805 22:01:16.172760 2299 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 22:01:16.174150 kubelet[2299]: I0805 22:01:16.173057 2299 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 22:01:16.174661 kubelet[2299]: E0805 22:01:16.174639 2299 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 5 22:01:16.209845 kubelet[2299]: I0805 22:01:16.209816 2299 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Aug 5 22:01:16.210320 kubelet[2299]: E0805 22:01:16.210301 2299 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Aug 5 22:01:16.225619 kubelet[2299]: I0805 22:01:16.225597 2299 topology_manager.go:215] "Topology Admit Handler" podUID="09d96cdeded1d5a51a9712d8a1a0b54a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Aug 5 22:01:16.226646 kubelet[2299]: I0805 22:01:16.226627 2299 topology_manager.go:215] "Topology Admit Handler" podUID="0cc03c154af91f38c5530287ae9cc549" podNamespace="kube-system" podName="kube-scheduler-localhost" Aug 5 22:01:16.227368 kubelet[2299]: I0805 22:01:16.227346 2299 topology_manager.go:215] "Topology Admit Handler" podUID="d4d155bcbea892eded3ca1a1b3daeeb2" podNamespace="kube-system" podName="kube-apiserver-localhost" Aug 5 22:01:16.309595 kubelet[2299]: E0805 22:01:16.309481 2299 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="400ms" Aug 5 22:01:16.409933 kubelet[2299]: I0805 22:01:16.409866 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4d155bcbea892eded3ca1a1b3daeeb2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d4d155bcbea892eded3ca1a1b3daeeb2\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:01:16.409933 kubelet[2299]: I0805 22:01:16.409910 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4d155bcbea892eded3ca1a1b3daeeb2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d4d155bcbea892eded3ca1a1b3daeeb2\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:01:16.409933 kubelet[2299]: I0805 22:01:16.409937 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:01:16.410167 kubelet[2299]: I0805 22:01:16.409957 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " 
pod="kube-system/kube-controller-manager-localhost" Aug 5 22:01:16.410167 kubelet[2299]: I0805 22:01:16.409980 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:01:16.410167 kubelet[2299]: I0805 22:01:16.409999 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0cc03c154af91f38c5530287ae9cc549-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0cc03c154af91f38c5530287ae9cc549\") " pod="kube-system/kube-scheduler-localhost" Aug 5 22:01:16.410167 kubelet[2299]: I0805 22:01:16.410019 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4d155bcbea892eded3ca1a1b3daeeb2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d4d155bcbea892eded3ca1a1b3daeeb2\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:01:16.410167 kubelet[2299]: I0805 22:01:16.410040 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:01:16.410267 kubelet[2299]: I0805 22:01:16.410060 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:01:16.411761 kubelet[2299]: I0805 22:01:16.411728 2299 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Aug 5 22:01:16.412068 kubelet[2299]: E0805 22:01:16.412052 2299 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Aug 5 22:01:16.532273 kubelet[2299]: E0805 22:01:16.532235 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:16.532627 kubelet[2299]: E0805 22:01:16.532241 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:16.532971 kubelet[2299]: E0805 22:01:16.532780 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:16.533025 containerd[1552]: time="2024-08-05T22:01:16.532973470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0cc03c154af91f38c5530287ae9cc549,Namespace:kube-system,Attempt:0,}" Aug 5 22:01:16.533229 containerd[1552]: time="2024-08-05T22:01:16.533079889Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d4d155bcbea892eded3ca1a1b3daeeb2,Namespace:kube-system,Attempt:0,}" Aug 5 22:01:16.533319 containerd[1552]: time="2024-08-05T22:01:16.533248587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:09d96cdeded1d5a51a9712d8a1a0b54a,Namespace:kube-system,Attempt:0,}" Aug 5 22:01:16.711031 kubelet[2299]: E0805 22:01:16.710942 2299 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="800ms" Aug 5 22:01:16.813309 kubelet[2299]: I0805 22:01:16.813263 2299 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Aug 5 22:01:16.813622 kubelet[2299]: E0805 22:01:16.813588 2299 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Aug 5 22:01:17.025005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2990695299.mount: Deactivated successfully. Aug 5 22:01:17.030013 containerd[1552]: time="2024-08-05T22:01:17.029957009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:01:17.031425 containerd[1552]: time="2024-08-05T22:01:17.031374095Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:01:17.032542 containerd[1552]: time="2024-08-05T22:01:17.032516139Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Aug 5 22:01:17.033042 containerd[1552]: time="2024-08-05T22:01:17.033022995Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:01:17.033928 containerd[1552]: time="2024-08-05T22:01:17.033615712Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:01:17.034498 containerd[1552]: time="2024-08-05T22:01:17.034470847Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:01:17.035007 containerd[1552]: time="2024-08-05T22:01:17.034973947Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:01:17.037234 containerd[1552]: time="2024-08-05T22:01:17.037190312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:01:17.038798 containerd[1552]: time="2024-08-05T22:01:17.038763699Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", 
size \"268403\" in 505.696671ms" Aug 5 22:01:17.041760 containerd[1552]: time="2024-08-05T22:01:17.041595715Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 508.304903ms" Aug 5 22:01:17.042612 containerd[1552]: time="2024-08-05T22:01:17.042578942Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 509.431061ms" Aug 5 22:01:17.149390 kubelet[2299]: W0805 22:01:17.147578 2299 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 5 22:01:17.149390 kubelet[2299]: E0805 22:01:17.147644 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 5 22:01:17.208943 containerd[1552]: time="2024-08-05T22:01:17.208504238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:01:17.208943 containerd[1552]: time="2024-08-05T22:01:17.208582308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:01:17.208943 containerd[1552]: time="2024-08-05T22:01:17.208607399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:01:17.208943 containerd[1552]: time="2024-08-05T22:01:17.208621903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:01:17.210677 containerd[1552]: time="2024-08-05T22:01:17.210384112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:01:17.210677 containerd[1552]: time="2024-08-05T22:01:17.210456229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:01:17.210677 containerd[1552]: time="2024-08-05T22:01:17.210493026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:01:17.210677 containerd[1552]: time="2024-08-05T22:01:17.210507370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:01:17.210677 containerd[1552]: time="2024-08-05T22:01:17.210641335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:01:17.210824 containerd[1552]: time="2024-08-05T22:01:17.210682847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:01:17.210824 containerd[1552]: time="2024-08-05T22:01:17.210713972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:01:17.210824 containerd[1552]: time="2024-08-05T22:01:17.210728275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:01:17.223711 kubelet[2299]: W0805 22:01:17.223658 2299 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 5 22:01:17.224735 kubelet[2299]: E0805 22:01:17.223847 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 5 22:01:17.253897 containerd[1552]: time="2024-08-05T22:01:17.253808666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d4d155bcbea892eded3ca1a1b3daeeb2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2147f43244276c93f69814fcf3dc8449949e7f6dfc8a4860e80db10e7ab54a5d\"" Aug 5 22:01:17.255060 kubelet[2299]: E0805 22:01:17.255025 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:17.261151 containerd[1552]: time="2024-08-05T22:01:17.261109812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0cc03c154af91f38c5530287ae9cc549,Namespace:kube-system,Attempt:0,} returns sandbox id \"5027ca3e262739e67d2ac33fff954c572d342e3aa869a758c52beb6615e63c68\"" Aug 5 22:01:17.261556 containerd[1552]: time="2024-08-05T22:01:17.261519180Z" level=info msg="CreateContainer within sandbox \"2147f43244276c93f69814fcf3dc8449949e7f6dfc8a4860e80db10e7ab54a5d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 5 22:01:17.261767 containerd[1552]: time="2024-08-05T22:01:17.261737329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:09d96cdeded1d5a51a9712d8a1a0b54a,Namespace:kube-system,Attempt:0,} returns sandbox id \"90b518a123a23a9a70e2fdf2a9215ef10146db4577b073f7ec3a5a3bbb23dd0e\"" Aug 5 22:01:17.262080 kubelet[2299]: E0805 22:01:17.262057 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:17.263641 kubelet[2299]: E0805 22:01:17.263457 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:17.264978 containerd[1552]: time="2024-08-05T22:01:17.264949267Z" level=info msg="CreateContainer within sandbox \"5027ca3e262739e67d2ac33fff954c572d342e3aa869a758c52beb6615e63c68\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 5 22:01:17.265817 containerd[1552]: time="2024-08-05T22:01:17.265699962Z" level=info msg="CreateContainer within sandbox \"90b518a123a23a9a70e2fdf2a9215ef10146db4577b073f7ec3a5a3bbb23dd0e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 5 
22:01:17.283259 containerd[1552]: time="2024-08-05T22:01:17.283099829Z" level=info msg="CreateContainer within sandbox \"2147f43244276c93f69814fcf3dc8449949e7f6dfc8a4860e80db10e7ab54a5d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e303b9180012a5f81a62d1ef3352324175f4ea261863cb8b7e848c07d1ea262b\"" Aug 5 22:01:17.284004 containerd[1552]: time="2024-08-05T22:01:17.283679281Z" level=info msg="StartContainer for \"e303b9180012a5f81a62d1ef3352324175f4ea261863cb8b7e848c07d1ea262b\"" Aug 5 22:01:17.284004 containerd[1552]: time="2024-08-05T22:01:17.283720634Z" level=info msg="CreateContainer within sandbox \"5027ca3e262739e67d2ac33fff954c572d342e3aa869a758c52beb6615e63c68\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a68a7b3d4997a02bd381c50dcd813e366978c20a553ff037d06a901c8d057051\"" Aug 5 22:01:17.284727 containerd[1552]: time="2024-08-05T22:01:17.284699985Z" level=info msg="StartContainer for \"a68a7b3d4997a02bd381c50dcd813e366978c20a553ff037d06a901c8d057051\"" Aug 5 22:01:17.285744 containerd[1552]: time="2024-08-05T22:01:17.285715095Z" level=info msg="CreateContainer within sandbox \"90b518a123a23a9a70e2fdf2a9215ef10146db4577b073f7ec3a5a3bbb23dd0e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"28cdd02a228e3f2159d2b8af442416ecb05e85f508718a2d9ad890ab6853753d\"" Aug 5 22:01:17.286754 containerd[1552]: time="2024-08-05T22:01:17.286085468Z" level=info msg="StartContainer for \"28cdd02a228e3f2159d2b8af442416ecb05e85f508718a2d9ad890ab6853753d\"" Aug 5 22:01:17.340424 containerd[1552]: time="2024-08-05T22:01:17.340295153Z" level=info msg="StartContainer for \"a68a7b3d4997a02bd381c50dcd813e366978c20a553ff037d06a901c8d057051\" returns successfully" Aug 5 22:01:17.342661 containerd[1552]: time="2024-08-05T22:01:17.342627745Z" level=info msg="StartContainer for \"e303b9180012a5f81a62d1ef3352324175f4ea261863cb8b7e848c07d1ea262b\" returns successfully" Aug 5 22:01:17.367405 containerd[1552]: time="2024-08-05T22:01:17.367295156Z" level=info msg="StartContainer for \"28cdd02a228e3f2159d2b8af442416ecb05e85f508718a2d9ad890ab6853753d\" returns successfully" Aug 5 22:01:17.375347 kubelet[2299]: W0805 22:01:17.375224 2299 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 5 22:01:17.375347 kubelet[2299]: E0805 22:01:17.375291 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 5 22:01:17.459511 kubelet[2299]: W0805 22:01:17.459323 2299 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 5 22:01:17.459511 kubelet[2299]: E0805 22:01:17.459389 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 5 22:01:17.512780 kubelet[2299]: 
E0805 22:01:17.512749 2299 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="1.6s" Aug 5 22:01:17.615188 kubelet[2299]: I0805 22:01:17.615081 2299 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Aug 5 22:01:18.141304 kubelet[2299]: E0805 22:01:18.141122 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:18.145659 kubelet[2299]: E0805 22:01:18.145626 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:18.148511 kubelet[2299]: E0805 22:01:18.148471 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:19.150372 kubelet[2299]: E0805 22:01:19.150344 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:19.223168 kubelet[2299]: E0805 22:01:19.223109 2299 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 5 22:01:19.318307 kubelet[2299]: I0805 22:01:19.314538 2299 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Aug 5 22:01:20.101254 kubelet[2299]: I0805 22:01:20.101215 2299 apiserver.go:52] "Watching apiserver" Aug 5 22:01:20.109501 kubelet[2299]: I0805 22:01:20.109458 2299 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 22:01:21.371884 kubelet[2299]: E0805 22:01:21.371808 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:22.090383 systemd[1]: Reloading requested from client PID 2579 ('systemctl') (unit session-7.scope)... Aug 5 22:01:22.090402 systemd[1]: Reloading... Aug 5 22:01:22.155563 kubelet[2299]: E0805 22:01:22.155470 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:22.170100 zram_generator::config[2616]: No configuration found. Aug 5 22:01:22.278761 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:01:22.304774 kubelet[2299]: E0805 22:01:22.304638 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:22.340414 systemd[1]: Reloading finished in 249 ms. Aug 5 22:01:22.368889 kubelet[2299]: I0805 22:01:22.368799 2299 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:01:22.368832 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 5 22:01:22.385748 systemd[1]: kubelet.service: Deactivated successfully. Aug 5 22:01:22.386158 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:01:22.393250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:01:22.530302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:01:22.534756 (kubelet)[2668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 22:01:22.587152 kubelet[2668]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:01:22.587152 kubelet[2668]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 22:01:22.587152 kubelet[2668]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:01:22.587152 kubelet[2668]: I0805 22:01:22.586938 2668 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 22:01:22.591689 kubelet[2668]: I0805 22:01:22.591600 2668 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Aug 5 22:01:22.591689 kubelet[2668]: I0805 22:01:22.591624 2668 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 22:01:22.591789 kubelet[2668]: I0805 22:01:22.591780 2668 server.go:895] "Client rotation is on, will bootstrap in background" Aug 5 22:01:22.593259 kubelet[2668]: I0805 22:01:22.593236 2668 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 5 22:01:22.594290 kubelet[2668]: I0805 22:01:22.594257 2668 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:01:22.598500 kubelet[2668]: W0805 22:01:22.598474 2668 machine.go:65] Cannot read vendor id correctly, set empty. Aug 5 22:01:22.599240 kubelet[2668]: I0805 22:01:22.599220 2668 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 5 22:01:22.599629 kubelet[2668]: I0805 22:01:22.599617 2668 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 22:01:22.599792 kubelet[2668]: I0805 22:01:22.599778 2668 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 22:01:22.599885 kubelet[2668]: I0805 22:01:22.599801 2668 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 22:01:22.599885 kubelet[2668]: I0805 22:01:22.599810 2668 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 22:01:22.599885 kubelet[2668]: I0805 22:01:22.599843 2668 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:01:22.600011 kubelet[2668]: I0805 22:01:22.599999 2668 kubelet.go:393] "Attempting to sync node with API server" Aug 5 22:01:22.600038 kubelet[2668]: I0805 22:01:22.600018 2668 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 22:01:22.600060 kubelet[2668]: I0805 22:01:22.600041 2668 kubelet.go:309] "Adding apiserver pod source" Aug 5 22:01:22.600060 kubelet[2668]: I0805 22:01:22.600052 2668 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 22:01:22.602491 kubelet[2668]: I0805 22:01:22.600961 2668 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 5 22:01:22.602491 kubelet[2668]: I0805 22:01:22.601387 2668 server.go:1232] "Started kubelet" Aug 5 22:01:22.603224 kubelet[2668]: I0805 22:01:22.603197 2668 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 22:01:22.603941 kubelet[2668]: I0805 22:01:22.603921 2668 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Aug 5 22:01:22.604224 kubelet[2668]: I0805 22:01:22.604204 2668 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 22:01:22.604343 kubelet[2668]: I0805 22:01:22.604330 2668 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 22:01:22.606228 kubelet[2668]: I0805 22:01:22.606188 2668 volume_manager.go:291] "Starting 
Kubelet Volume Manager" Aug 5 22:01:22.607033 kubelet[2668]: I0805 22:01:22.607002 2668 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 5 22:01:22.607631 kubelet[2668]: I0805 22:01:22.607603 2668 reconciler_new.go:29] "Reconciler: start to sync state" Aug 5 22:01:22.609209 sudo[2683]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 5 22:01:22.609444 sudo[2683]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Aug 5 22:01:22.609563 kubelet[2668]: E0805 22:01:22.609532 2668 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Aug 5 22:01:22.609607 kubelet[2668]: E0805 22:01:22.609570 2668 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 22:01:22.634157 kubelet[2668]: I0805 22:01:22.634126 2668 server.go:462] "Adding debug handlers to kubelet server" Aug 5 22:01:22.636376 kubelet[2668]: I0805 22:01:22.636349 2668 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 22:01:22.638581 kubelet[2668]: I0805 22:01:22.638562 2668 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 5 22:01:22.638581 kubelet[2668]: I0805 22:01:22.638577 2668 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 22:01:22.638682 kubelet[2668]: I0805 22:01:22.638593 2668 kubelet.go:2303] "Starting kubelet main sync loop" Aug 5 22:01:22.638682 kubelet[2668]: E0805 22:01:22.638637 2668 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 22:01:22.696020 kubelet[2668]: I0805 22:01:22.695988 2668 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 22:01:22.696020 kubelet[2668]: I0805 22:01:22.696014 2668 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 22:01:22.696020 kubelet[2668]: I0805 22:01:22.696030 2668 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:01:22.696808 kubelet[2668]: I0805 22:01:22.696239 2668 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 5 22:01:22.696808 kubelet[2668]: I0805 22:01:22.696267 2668 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 5 22:01:22.696808 kubelet[2668]: I0805 22:01:22.696274 2668 policy_none.go:49] "None policy: Start" Aug 5 22:01:22.696960 kubelet[2668]: I0805 22:01:22.696833 2668 memory_manager.go:169] "Starting memorymanager" policy="None" Aug 5 22:01:22.696960 kubelet[2668]: I0805 22:01:22.696878 2668 state_mem.go:35] "Initializing new in-memory state store" Aug 5 22:01:22.697079 kubelet[2668]: I0805 22:01:22.697049 2668 state_mem.go:75] "Updated machine memory state" Aug 5 22:01:22.698249 kubelet[2668]: I0805 22:01:22.698225 2668 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 22:01:22.698790 kubelet[2668]: I0805 22:01:22.698445 2668 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 22:01:22.711301 kubelet[2668]: I0805 22:01:22.711094 2668 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Aug 5 22:01:22.719124 kubelet[2668]: I0805 22:01:22.719070 2668 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Aug 5 
22:01:22.719224 kubelet[2668]: I0805 22:01:22.719144 2668 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Aug 5 22:01:22.739354 kubelet[2668]: I0805 22:01:22.739320 2668 topology_manager.go:215] "Topology Admit Handler" podUID="d4d155bcbea892eded3ca1a1b3daeeb2" podNamespace="kube-system" podName="kube-apiserver-localhost" Aug 5 22:01:22.739461 kubelet[2668]: I0805 22:01:22.739445 2668 topology_manager.go:215] "Topology Admit Handler" podUID="09d96cdeded1d5a51a9712d8a1a0b54a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Aug 5 22:01:22.739505 kubelet[2668]: I0805 22:01:22.739492 2668 topology_manager.go:215] "Topology Admit Handler" podUID="0cc03c154af91f38c5530287ae9cc549" podNamespace="kube-system" podName="kube-scheduler-localhost" Aug 5 22:01:22.744221 kubelet[2668]: E0805 22:01:22.744033 2668 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 5 22:01:22.745123 kubelet[2668]: E0805 22:01:22.745097 2668 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 5 22:01:22.808667 kubelet[2668]: I0805 22:01:22.808628 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:01:22.808667 kubelet[2668]: I0805 22:01:22.808671 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:01:22.808819 kubelet[2668]: I0805 22:01:22.808694 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4d155bcbea892eded3ca1a1b3daeeb2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d4d155bcbea892eded3ca1a1b3daeeb2\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:01:22.808819 kubelet[2668]: I0805 22:01:22.808713 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:01:22.808819 kubelet[2668]: I0805 22:01:22.808734 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:01:22.808819 kubelet[2668]: I0805 22:01:22.808752 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-kubeconfig\") pod 
\"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:01:22.808819 kubelet[2668]: I0805 22:01:22.808782 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0cc03c154af91f38c5530287ae9cc549-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0cc03c154af91f38c5530287ae9cc549\") " pod="kube-system/kube-scheduler-localhost" Aug 5 22:01:22.808956 kubelet[2668]: I0805 22:01:22.808802 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4d155bcbea892eded3ca1a1b3daeeb2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d4d155bcbea892eded3ca1a1b3daeeb2\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:01:22.808956 kubelet[2668]: I0805 22:01:22.808821 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4d155bcbea892eded3ca1a1b3daeeb2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d4d155bcbea892eded3ca1a1b3daeeb2\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:01:23.045288 kubelet[2668]: E0805 22:01:23.045244 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:23.045755 kubelet[2668]: E0805 22:01:23.045739 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:23.046505 kubelet[2668]: E0805 22:01:23.046488 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:23.063734 sudo[2683]: pam_unix(sudo:session): session closed for user root Aug 5 22:01:23.600605 kubelet[2668]: I0805 22:01:23.600568 2668 apiserver.go:52] "Watching apiserver" Aug 5 22:01:23.608131 kubelet[2668]: I0805 22:01:23.608105 2668 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 22:01:23.655974 kubelet[2668]: E0805 22:01:23.655677 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:23.655974 kubelet[2668]: E0805 22:01:23.655689 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:23.663121 kubelet[2668]: E0805 22:01:23.663024 2668 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 5 22:01:23.663498 kubelet[2668]: E0805 22:01:23.663485 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:23.680226 kubelet[2668]: I0805 22:01:23.680191 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.68014058 podCreationTimestamp="2024-08-05 22:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:01:23.674984071 +0000 UTC m=+1.136919098" watchObservedRunningTime="2024-08-05 22:01:23.68014058 +0000 UTC m=+1.142075607" Aug 5 22:01:23.686681 kubelet[2668]: I0805 22:01:23.686642 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.686606936 podCreationTimestamp="2024-08-05 22:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:01:23.680390161 +0000 UTC m=+1.142325188" watchObservedRunningTime="2024-08-05 22:01:23.686606936 +0000 UTC m=+1.148541963" Aug 5 22:01:23.701893 kubelet[2668]: I0805 22:01:23.699300 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.6992608540000003 podCreationTimestamp="2024-08-05 22:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:01:23.686744031 +0000 UTC m=+1.148679058" watchObservedRunningTime="2024-08-05 22:01:23.699260854 +0000 UTC m=+1.161195881" Aug 5 22:01:24.657718 kubelet[2668]: E0805 22:01:24.657685 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:25.790040 sudo[1768]: pam_unix(sudo:session): session closed for user root Aug 5 22:01:25.791535 sshd[1758]: pam_unix(sshd:session): session closed for user core Aug 5 22:01:25.796357 systemd[1]: sshd@6-10.0.0.149:22-10.0.0.1:32888.service: Deactivated successfully. Aug 5 22:01:25.798283 systemd-logind[1535]: Session 7 logged out. Waiting for processes to exit. Aug 5 22:01:25.798402 systemd[1]: session-7.scope: Deactivated successfully. Aug 5 22:01:25.799308 systemd-logind[1535]: Removed session 7. 
Aug 5 22:01:28.185549 kubelet[2668]: E0805 22:01:28.184984 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:28.665811 kubelet[2668]: E0805 22:01:28.665397 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:30.250318 kubelet[2668]: E0805 22:01:30.247459 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:30.586936 kubelet[2668]: E0805 22:01:30.586823 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:30.669227 kubelet[2668]: E0805 22:01:30.668733 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:30.669227 kubelet[2668]: E0805 22:01:30.669083 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:34.114536 update_engine[1537]: I0805 22:01:34.112107 1537 update_attempter.cc:509] Updating boot flags... Aug 5 22:01:34.136782 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2751) Aug 5 22:01:34.161896 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2755) Aug 5 22:01:36.435719 kubelet[2668]: I0805 22:01:36.435525 2668 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 5 22:01:36.436097 containerd[1552]: time="2024-08-05T22:01:36.435888267Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 5 22:01:36.436956 kubelet[2668]: I0805 22:01:36.436248 2668 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 5 22:01:36.440579 kubelet[2668]: I0805 22:01:36.440494 2668 topology_manager.go:215] "Topology Admit Handler" podUID="fb7cf228-6240-4d0b-8963-0f97efd0015c" podNamespace="kube-system" podName="kube-proxy-lmdgm" Aug 5 22:01:36.440579 kubelet[2668]: I0805 22:01:36.440613 2668 topology_manager.go:215] "Topology Admit Handler" podUID="f653ea32-a506-448a-9bb9-9e5c58285393" podNamespace="kube-system" podName="cilium-dp4fq" Aug 5 22:01:36.496020 kubelet[2668]: I0805 22:01:36.495991 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb7cf228-6240-4d0b-8963-0f97efd0015c-xtables-lock\") pod \"kube-proxy-lmdgm\" (UID: \"fb7cf228-6240-4d0b-8963-0f97efd0015c\") " pod="kube-system/kube-proxy-lmdgm" Aug 5 22:01:36.496020 kubelet[2668]: I0805 22:01:36.496032 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb7cf228-6240-4d0b-8963-0f97efd0015c-lib-modules\") pod \"kube-proxy-lmdgm\" (UID: \"fb7cf228-6240-4d0b-8963-0f97efd0015c\") " pod="kube-system/kube-proxy-lmdgm" Aug 5 22:01:36.496389 kubelet[2668]: I0805 22:01:36.496051 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-etc-cni-netd\") pod \"cilium-dp4fq\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " pod="kube-system/cilium-dp4fq" Aug 5 22:01:36.496389 kubelet[2668]: I0805 22:01:36.496072 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-cni-path\") pod \"cilium-dp4fq\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " pod="kube-system/cilium-dp4fq" Aug 5 22:01:36.496389 kubelet[2668]: I0805 22:01:36.496092 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f653ea32-a506-448a-9bb9-9e5c58285393-cilium-config-path\") pod \"cilium-dp4fq\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " pod="kube-system/cilium-dp4fq" Aug 5 22:01:36.496389 kubelet[2668]: I0805 22:01:36.496111 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-host-proc-sys-net\") pod \"cilium-dp4fq\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " pod="kube-system/cilium-dp4fq" Aug 5 22:01:36.496389 kubelet[2668]: I0805 22:01:36.496136 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fb7cf228-6240-4d0b-8963-0f97efd0015c-kube-proxy\") pod \"kube-proxy-lmdgm\" (UID: \"fb7cf228-6240-4d0b-8963-0f97efd0015c\") " pod="kube-system/kube-proxy-lmdgm" Aug 5 22:01:36.496389 kubelet[2668]: I0805 22:01:36.496155 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-bpf-maps\") pod \"cilium-dp4fq\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " 
pod="kube-system/cilium-dp4fq" Aug 5 22:01:36.496515 kubelet[2668]: I0805 22:01:36.496176 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-cilium-run\") pod \"cilium-dp4fq\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " pod="kube-system/cilium-dp4fq" Aug 5 22:01:36.496515 kubelet[2668]: I0805 22:01:36.496195 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f653ea32-a506-448a-9bb9-9e5c58285393-clustermesh-secrets\") pod \"cilium-dp4fq\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " pod="kube-system/cilium-dp4fq" Aug 5 22:01:36.496515 kubelet[2668]: I0805 22:01:36.496214 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f653ea32-a506-448a-9bb9-9e5c58285393-hubble-tls\") pod \"cilium-dp4fq\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " pod="kube-system/cilium-dp4fq" Aug 5 22:01:36.496515 kubelet[2668]: I0805 22:01:36.496233 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fsc7\" (UniqueName: \"kubernetes.io/projected/f653ea32-a506-448a-9bb9-9e5c58285393-kube-api-access-8fsc7\") pod \"cilium-dp4fq\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " pod="kube-system/cilium-dp4fq" Aug 5 22:01:36.496515 kubelet[2668]: I0805 22:01:36.496251 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-cilium-cgroup\") pod \"cilium-dp4fq\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " pod="kube-system/cilium-dp4fq" Aug 5 22:01:36.496515 kubelet[2668]: I0805 22:01:36.496268 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-xtables-lock\") pod \"cilium-dp4fq\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " pod="kube-system/cilium-dp4fq" Aug 5 22:01:36.496836 kubelet[2668]: I0805 22:01:36.496306 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-host-proc-sys-kernel\") pod \"cilium-dp4fq\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " pod="kube-system/cilium-dp4fq" Aug 5 22:01:36.496836 kubelet[2668]: I0805 22:01:36.496694 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn7hr\" (UniqueName: \"kubernetes.io/projected/fb7cf228-6240-4d0b-8963-0f97efd0015c-kube-api-access-gn7hr\") pod \"kube-proxy-lmdgm\" (UID: \"fb7cf228-6240-4d0b-8963-0f97efd0015c\") " pod="kube-system/kube-proxy-lmdgm" Aug 5 22:01:36.496836 kubelet[2668]: I0805 22:01:36.496767 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-hostproc\") pod \"cilium-dp4fq\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " pod="kube-system/cilium-dp4fq" Aug 5 22:01:36.496836 kubelet[2668]: I0805 22:01:36.496792 2668 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-lib-modules\") pod \"cilium-dp4fq\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " pod="kube-system/cilium-dp4fq" Aug 5 22:01:36.607975 kubelet[2668]: E0805 22:01:36.607905 2668 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 5 22:01:36.607975 kubelet[2668]: E0805 22:01:36.607939 2668 projected.go:198] Error preparing data for projected volume kube-api-access-8fsc7 for pod kube-system/cilium-dp4fq: configmap "kube-root-ca.crt" not found Aug 5 22:01:36.608078 kubelet[2668]: E0805 22:01:36.607997 2668 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f653ea32-a506-448a-9bb9-9e5c58285393-kube-api-access-8fsc7 podName:f653ea32-a506-448a-9bb9-9e5c58285393 nodeName:}" failed. No retries permitted until 2024-08-05 22:01:37.107978765 +0000 UTC m=+14.569913792 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8fsc7" (UniqueName: "kubernetes.io/projected/f653ea32-a506-448a-9bb9-9e5c58285393-kube-api-access-8fsc7") pod "cilium-dp4fq" (UID: "f653ea32-a506-448a-9bb9-9e5c58285393") : configmap "kube-root-ca.crt" not found Aug 5 22:01:36.611609 kubelet[2668]: E0805 22:01:36.611571 2668 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 5 22:01:36.611609 kubelet[2668]: E0805 22:01:36.611597 2668 projected.go:198] Error preparing data for projected volume kube-api-access-gn7hr for pod kube-system/kube-proxy-lmdgm: configmap "kube-root-ca.crt" not found Aug 5 22:01:36.611715 kubelet[2668]: E0805 22:01:36.611631 2668 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fb7cf228-6240-4d0b-8963-0f97efd0015c-kube-api-access-gn7hr podName:fb7cf228-6240-4d0b-8963-0f97efd0015c nodeName:}" failed. No retries permitted until 2024-08-05 22:01:37.11161838 +0000 UTC m=+14.573553407 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gn7hr" (UniqueName: "kubernetes.io/projected/fb7cf228-6240-4d0b-8963-0f97efd0015c-kube-api-access-gn7hr") pod "kube-proxy-lmdgm" (UID: "fb7cf228-6240-4d0b-8963-0f97efd0015c") : configmap "kube-root-ca.crt" not found Aug 5 22:01:37.343928 kubelet[2668]: E0805 22:01:37.343881 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:37.344842 containerd[1552]: time="2024-08-05T22:01:37.344424762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lmdgm,Uid:fb7cf228-6240-4d0b-8963-0f97efd0015c,Namespace:kube-system,Attempt:0,}" Aug 5 22:01:37.348967 kubelet[2668]: E0805 22:01:37.348936 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:37.350279 containerd[1552]: time="2024-08-05T22:01:37.350192471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dp4fq,Uid:f653ea32-a506-448a-9bb9-9e5c58285393,Namespace:kube-system,Attempt:0,}" Aug 5 22:01:37.374174 containerd[1552]: time="2024-08-05T22:01:37.374082827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:01:37.374174 containerd[1552]: time="2024-08-05T22:01:37.374150320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:01:37.374356 containerd[1552]: time="2024-08-05T22:01:37.374169444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:01:37.374356 containerd[1552]: time="2024-08-05T22:01:37.374185567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:01:37.375063 containerd[1552]: time="2024-08-05T22:01:37.374807566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:01:37.375063 containerd[1552]: time="2024-08-05T22:01:37.374850175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:01:37.375063 containerd[1552]: time="2024-08-05T22:01:37.374906185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:01:37.375063 containerd[1552]: time="2024-08-05T22:01:37.374940232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:01:37.407210 containerd[1552]: time="2024-08-05T22:01:37.407168232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dp4fq,Uid:f653ea32-a506-448a-9bb9-9e5c58285393,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdf4dc3ea8312f821a735b55921efd1597e12ea7a796ca484e742a036ba4d63c\"" Aug 5 22:01:37.408027 kubelet[2668]: E0805 22:01:37.408004 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:37.409716 containerd[1552]: time="2024-08-05T22:01:37.409670873Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 5 22:01:37.417240 containerd[1552]: time="2024-08-05T22:01:37.417212004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lmdgm,Uid:fb7cf228-6240-4d0b-8963-0f97efd0015c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ea4f0744c16e29c76326d9582295c2993a99df72dd0b8efa86dc769a27aef1b\"" Aug 5 22:01:37.420426 kubelet[2668]: E0805 22:01:37.419079 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:37.422931 containerd[1552]: time="2024-08-05T22:01:37.422892056Z" level=info msg="CreateContainer within sandbox \"9ea4f0744c16e29c76326d9582295c2993a99df72dd0b8efa86dc769a27aef1b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 5 22:01:37.432793 kubelet[2668]: I0805 22:01:37.430219 2668 topology_manager.go:215] "Topology Admit Handler" podUID="51f38ed6-9515-4a6b-94c2-ffd99f45174c" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-8wz7b" Aug 5 22:01:37.459092 containerd[1552]: time="2024-08-05T22:01:37.458994481Z" level=info msg="CreateContainer within sandbox \"9ea4f0744c16e29c76326d9582295c2993a99df72dd0b8efa86dc769a27aef1b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} 
returns container id \"f8a4b23fa9d65a3b383e37dc8ae57044bf58951253617bee58c2fa48181f7349\"" Aug 5 22:01:37.460667 containerd[1552]: time="2024-08-05T22:01:37.460501531Z" level=info msg="StartContainer for \"f8a4b23fa9d65a3b383e37dc8ae57044bf58951253617bee58c2fa48181f7349\"" Aug 5 22:01:37.503455 kubelet[2668]: I0805 22:01:37.503401 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51f38ed6-9515-4a6b-94c2-ffd99f45174c-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-8wz7b\" (UID: \"51f38ed6-9515-4a6b-94c2-ffd99f45174c\") " pod="kube-system/cilium-operator-6bc8ccdb58-8wz7b" Aug 5 22:01:37.503455 kubelet[2668]: I0805 22:01:37.503450 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw9v6\" (UniqueName: \"kubernetes.io/projected/51f38ed6-9515-4a6b-94c2-ffd99f45174c-kube-api-access-lw9v6\") pod \"cilium-operator-6bc8ccdb58-8wz7b\" (UID: \"51f38ed6-9515-4a6b-94c2-ffd99f45174c\") " pod="kube-system/cilium-operator-6bc8ccdb58-8wz7b" Aug 5 22:01:37.513595 containerd[1552]: time="2024-08-05T22:01:37.512796951Z" level=info msg="StartContainer for \"f8a4b23fa9d65a3b383e37dc8ae57044bf58951253617bee58c2fa48181f7349\" returns successfully" Aug 5 22:01:37.681984 kubelet[2668]: E0805 22:01:37.681393 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:37.691272 kubelet[2668]: I0805 22:01:37.691108 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lmdgm" podStartSLOduration=1.691071685 podCreationTimestamp="2024-08-05 22:01:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:01:37.690154709 +0000 UTC m=+15.152089736" watchObservedRunningTime="2024-08-05 22:01:37.691071685 +0000 UTC m=+15.153006712" Aug 5 22:01:37.753796 kubelet[2668]: E0805 22:01:37.753762 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:37.754252 containerd[1552]: time="2024-08-05T22:01:37.754201069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-8wz7b,Uid:51f38ed6-9515-4a6b-94c2-ffd99f45174c,Namespace:kube-system,Attempt:0,}" Aug 5 22:01:37.773494 containerd[1552]: time="2024-08-05T22:01:37.773395242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:01:37.773494 containerd[1552]: time="2024-08-05T22:01:37.773452573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:01:37.773494 containerd[1552]: time="2024-08-05T22:01:37.773471496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:01:37.774270 containerd[1552]: time="2024-08-05T22:01:37.773485219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:01:37.834396 containerd[1552]: time="2024-08-05T22:01:37.834356249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-8wz7b,Uid:51f38ed6-9515-4a6b-94c2-ffd99f45174c,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc6d9438c117ae9dc9933ee07982d14a54ad2e1cfba29f076213728abc8ca532\"" Aug 5 22:01:37.835198 kubelet[2668]: E0805 22:01:37.835181 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:40.096496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount197093246.mount: Deactivated successfully. Aug 5 22:01:41.367989 containerd[1552]: time="2024-08-05T22:01:41.367937603Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:41.368513 containerd[1552]: time="2024-08-05T22:01:41.368347068Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651498" Aug 5 22:01:41.369310 containerd[1552]: time="2024-08-05T22:01:41.369279458Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:41.370959 containerd[1552]: time="2024-08-05T22:01:41.370923921Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 3.9612132s" Aug 5 22:01:41.371000 containerd[1552]: time="2024-08-05T22:01:41.370968688Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Aug 5 22:01:41.374879 containerd[1552]: time="2024-08-05T22:01:41.374669840Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 5 22:01:41.378181 containerd[1552]: time="2024-08-05T22:01:41.378145436Z" level=info msg="CreateContainer within sandbox \"fdf4dc3ea8312f821a735b55921efd1597e12ea7a796ca484e742a036ba4d63c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 5 22:01:41.402988 containerd[1552]: time="2024-08-05T22:01:41.402942283Z" level=info msg="CreateContainer within sandbox \"fdf4dc3ea8312f821a735b55921efd1597e12ea7a796ca484e742a036ba4d63c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9e53d1d94203b0067a5015e2e09642bbc3f853d4a839596c878ad32a3e09782e\"" Aug 5 22:01:41.406427 containerd[1552]: time="2024-08-05T22:01:41.405364310Z" level=info msg="StartContainer for \"9e53d1d94203b0067a5015e2e09642bbc3f853d4a839596c878ad32a3e09782e\"" Aug 5 22:01:41.464096 containerd[1552]: time="2024-08-05T22:01:41.464042457Z" level=info msg="StartContainer for \"9e53d1d94203b0067a5015e2e09642bbc3f853d4a839596c878ad32a3e09782e\" returns successfully" Aug 5 22:01:41.649471 containerd[1552]: 
time="2024-08-05T22:01:41.649335500Z" level=info msg="shim disconnected" id=9e53d1d94203b0067a5015e2e09642bbc3f853d4a839596c878ad32a3e09782e namespace=k8s.io Aug 5 22:01:41.649471 containerd[1552]: time="2024-08-05T22:01:41.649390029Z" level=warning msg="cleaning up after shim disconnected" id=9e53d1d94203b0067a5015e2e09642bbc3f853d4a839596c878ad32a3e09782e namespace=k8s.io Aug 5 22:01:41.649471 containerd[1552]: time="2024-08-05T22:01:41.649402511Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:01:41.699352 kubelet[2668]: E0805 22:01:41.699307 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:41.703065 containerd[1552]: time="2024-08-05T22:01:41.703014887Z" level=info msg="CreateContainer within sandbox \"fdf4dc3ea8312f821a735b55921efd1597e12ea7a796ca484e742a036ba4d63c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 5 22:01:41.712697 containerd[1552]: time="2024-08-05T22:01:41.712645548Z" level=info msg="CreateContainer within sandbox \"fdf4dc3ea8312f821a735b55921efd1597e12ea7a796ca484e742a036ba4d63c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3bd94a84be931cf8fc875e5b0d67c69d9dac15c14d1afa1c8b2197aadb76d145\"" Aug 5 22:01:41.713205 containerd[1552]: time="2024-08-05T22:01:41.713053213Z" level=info msg="StartContainer for \"3bd94a84be931cf8fc875e5b0d67c69d9dac15c14d1afa1c8b2197aadb76d145\"" Aug 5 22:01:41.755339 containerd[1552]: time="2024-08-05T22:01:41.755295171Z" level=info msg="StartContainer for \"3bd94a84be931cf8fc875e5b0d67c69d9dac15c14d1afa1c8b2197aadb76d145\" returns successfully" Aug 5 22:01:41.763795 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 5 22:01:41.764617 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:01:41.764696 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 5 22:01:41.771195 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 22:01:41.787271 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:01:41.795894 containerd[1552]: time="2024-08-05T22:01:41.795791850Z" level=info msg="shim disconnected" id=3bd94a84be931cf8fc875e5b0d67c69d9dac15c14d1afa1c8b2197aadb76d145 namespace=k8s.io Aug 5 22:01:41.796055 containerd[1552]: time="2024-08-05T22:01:41.795973199Z" level=warning msg="cleaning up after shim disconnected" id=3bd94a84be931cf8fc875e5b0d67c69d9dac15c14d1afa1c8b2197aadb76d145 namespace=k8s.io Aug 5 22:01:41.796055 containerd[1552]: time="2024-08-05T22:01:41.795987161Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:01:42.389009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e53d1d94203b0067a5015e2e09642bbc3f853d4a839596c878ad32a3e09782e-rootfs.mount: Deactivated successfully. 
Aug 5 22:01:42.572458 containerd[1552]: time="2024-08-05T22:01:42.572401727Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:42.572933 containerd[1552]: time="2024-08-05T22:01:42.572861037Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138334" Aug 5 22:01:42.573985 containerd[1552]: time="2024-08-05T22:01:42.573940122Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:01:42.575559 containerd[1552]: time="2024-08-05T22:01:42.575519524Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.200811998s" Aug 5 22:01:42.575595 containerd[1552]: time="2024-08-05T22:01:42.575560930Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Aug 5 22:01:42.577533 containerd[1552]: time="2024-08-05T22:01:42.577500307Z" level=info msg="CreateContainer within sandbox \"cc6d9438c117ae9dc9933ee07982d14a54ad2e1cfba29f076213728abc8ca532\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 5 22:01:42.589639 containerd[1552]: time="2024-08-05T22:01:42.589580277Z" level=info msg="CreateContainer within sandbox \"cc6d9438c117ae9dc9933ee07982d14a54ad2e1cfba29f076213728abc8ca532\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bacc6660a09914d2101092c0bfda428e5f62e20d195a223e3216e11209045b60\"" Aug 5 22:01:42.590783 containerd[1552]: time="2024-08-05T22:01:42.590756377Z" level=info msg="StartContainer for \"bacc6660a09914d2101092c0bfda428e5f62e20d195a223e3216e11209045b60\"" Aug 5 22:01:42.611491 systemd[1]: run-containerd-runc-k8s.io-bacc6660a09914d2101092c0bfda428e5f62e20d195a223e3216e11209045b60-runc.tU7ByI.mount: Deactivated successfully. 
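Note: each image pull above reports both the bytes read ("stop pulling image ...: bytes read=...") and the wall-clock pull time ("Pulled image ... in N s"), so an approximate pull throughput can be derived. The byte counts and durations below are copied from the two pulls recorded in this journal; the MB/s values are simple derived arithmetic, not something containerd prints.

# Bytes read and pull durations reported by containerd above.
pulls = {
    "quay.io/cilium/cilium:v1.12.5":           (157_651_498, 3.9612132),
    "quay.io/cilium/operator-generic:v1.12.5": (17_138_334, 1.200811998),
}

for image, (nbytes, seconds) in pulls.items():
    mb_per_s = nbytes / seconds / 1e6
    print(f"{image}: {nbytes} bytes in {seconds:.3f}s ~ {mb_per_s:.1f} MB/s")
# Roughly 39.8 MB/s for the agent image and 14.3 MB/s for the operator image.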
Aug 5 22:01:42.633342 containerd[1552]: time="2024-08-05T22:01:42.633293409Z" level=info msg="StartContainer for \"bacc6660a09914d2101092c0bfda428e5f62e20d195a223e3216e11209045b60\" returns successfully" Aug 5 22:01:42.700532 kubelet[2668]: E0805 22:01:42.700381 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:42.705104 kubelet[2668]: E0805 22:01:42.704294 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:42.714620 containerd[1552]: time="2024-08-05T22:01:42.713052341Z" level=info msg="CreateContainer within sandbox \"fdf4dc3ea8312f821a735b55921efd1597e12ea7a796ca484e742a036ba4d63c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 5 22:01:42.714751 kubelet[2668]: I0805 22:01:42.713644 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-8wz7b" podStartSLOduration=0.973336825 podCreationTimestamp="2024-08-05 22:01:37 +0000 UTC" firstStartedPulling="2024-08-05 22:01:37.835841534 +0000 UTC m=+15.297776561" lastFinishedPulling="2024-08-05 22:01:42.576034883 +0000 UTC m=+20.037969910" observedRunningTime="2024-08-05 22:01:42.710378171 +0000 UTC m=+20.172313198" watchObservedRunningTime="2024-08-05 22:01:42.713530174 +0000 UTC m=+20.175465201" Aug 5 22:01:42.739727 containerd[1552]: time="2024-08-05T22:01:42.739616688Z" level=info msg="CreateContainer within sandbox \"fdf4dc3ea8312f821a735b55921efd1597e12ea7a796ca484e742a036ba4d63c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9dea1839eb3da474be99adc00ee61e9f6148ac2ac31c6f7bafe429fc69516592\"" Aug 5 22:01:42.740432 containerd[1552]: time="2024-08-05T22:01:42.740394607Z" level=info msg="StartContainer for \"9dea1839eb3da474be99adc00ee61e9f6148ac2ac31c6f7bafe429fc69516592\"" Aug 5 22:01:42.818129 containerd[1552]: time="2024-08-05T22:01:42.817972324Z" level=info msg="StartContainer for \"9dea1839eb3da474be99adc00ee61e9f6148ac2ac31c6f7bafe429fc69516592\" returns successfully" Aug 5 22:01:42.918200 containerd[1552]: time="2024-08-05T22:01:42.918135580Z" level=info msg="shim disconnected" id=9dea1839eb3da474be99adc00ee61e9f6148ac2ac31c6f7bafe429fc69516592 namespace=k8s.io Aug 5 22:01:42.918200 containerd[1552]: time="2024-08-05T22:01:42.918194789Z" level=warning msg="cleaning up after shim disconnected" id=9dea1839eb3da474be99adc00ee61e9f6148ac2ac31c6f7bafe429fc69516592 namespace=k8s.io Aug 5 22:01:42.918200 containerd[1552]: time="2024-08-05T22:01:42.918206030Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:01:43.708027 kubelet[2668]: E0805 22:01:43.707986 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:43.708815 kubelet[2668]: E0805 22:01:43.708799 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:43.711265 containerd[1552]: time="2024-08-05T22:01:43.711228307Z" level=info msg="CreateContainer within sandbox \"fdf4dc3ea8312f821a735b55921efd1597e12ea7a796ca484e742a036ba4d63c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 5 22:01:43.727464 
containerd[1552]: time="2024-08-05T22:01:43.727407680Z" level=info msg="CreateContainer within sandbox \"fdf4dc3ea8312f821a735b55921efd1597e12ea7a796ca484e742a036ba4d63c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"92f914cf69f55157da7946ce993bac112252951c16d1196db0a8f8741ca7d177\"" Aug 5 22:01:43.728019 containerd[1552]: time="2024-08-05T22:01:43.727982364Z" level=info msg="StartContainer for \"92f914cf69f55157da7946ce993bac112252951c16d1196db0a8f8741ca7d177\"" Aug 5 22:01:43.768251 containerd[1552]: time="2024-08-05T22:01:43.768209344Z" level=info msg="StartContainer for \"92f914cf69f55157da7946ce993bac112252951c16d1196db0a8f8741ca7d177\" returns successfully" Aug 5 22:01:43.787630 containerd[1552]: time="2024-08-05T22:01:43.787574344Z" level=info msg="shim disconnected" id=92f914cf69f55157da7946ce993bac112252951c16d1196db0a8f8741ca7d177 namespace=k8s.io Aug 5 22:01:43.788005 containerd[1552]: time="2024-08-05T22:01:43.787817420Z" level=warning msg="cleaning up after shim disconnected" id=92f914cf69f55157da7946ce993bac112252951c16d1196db0a8f8741ca7d177 namespace=k8s.io Aug 5 22:01:43.788005 containerd[1552]: time="2024-08-05T22:01:43.787833262Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:01:44.394154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92f914cf69f55157da7946ce993bac112252951c16d1196db0a8f8741ca7d177-rootfs.mount: Deactivated successfully. Aug 5 22:01:44.712661 kubelet[2668]: E0805 22:01:44.712631 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:44.719760 containerd[1552]: time="2024-08-05T22:01:44.719606498Z" level=info msg="CreateContainer within sandbox \"fdf4dc3ea8312f821a735b55921efd1597e12ea7a796ca484e742a036ba4d63c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 5 22:01:44.734738 containerd[1552]: time="2024-08-05T22:01:44.734688538Z" level=info msg="CreateContainer within sandbox \"fdf4dc3ea8312f821a735b55921efd1597e12ea7a796ca484e742a036ba4d63c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0ce7b71a1929aaadefa7f339843494ad2968178e8c4d4a570a7c0e5ccb04d092\"" Aug 5 22:01:44.735259 containerd[1552]: time="2024-08-05T22:01:44.735230535Z" level=info msg="StartContainer for \"0ce7b71a1929aaadefa7f339843494ad2968178e8c4d4a570a7c0e5ccb04d092\"" Aug 5 22:01:44.791697 containerd[1552]: time="2024-08-05T22:01:44.791625265Z" level=info msg="StartContainer for \"0ce7b71a1929aaadefa7f339843494ad2968178e8c4d4a570a7c0e5ccb04d092\" returns successfully" Aug 5 22:01:44.922895 kubelet[2668]: I0805 22:01:44.922266 2668 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Aug 5 22:01:44.944947 kubelet[2668]: I0805 22:01:44.944401 2668 topology_manager.go:215] "Topology Admit Handler" podUID="dc9fd34c-c17a-411a-9172-18eab2b9a669" podNamespace="kube-system" podName="coredns-5dd5756b68-vkxc5" Aug 5 22:01:44.944947 kubelet[2668]: I0805 22:01:44.944665 2668 topology_manager.go:215] "Topology Admit Handler" podUID="5e3ea45f-b17f-4dfa-a990-3ed04b0978e6" podNamespace="kube-system" podName="coredns-5dd5756b68-5b94x" Aug 5 22:01:45.056046 kubelet[2668]: I0805 22:01:45.055924 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqjlk\" (UniqueName: \"kubernetes.io/projected/dc9fd34c-c17a-411a-9172-18eab2b9a669-kube-api-access-qqjlk\") pod 
\"coredns-5dd5756b68-vkxc5\" (UID: \"dc9fd34c-c17a-411a-9172-18eab2b9a669\") " pod="kube-system/coredns-5dd5756b68-vkxc5" Aug 5 22:01:45.056381 kubelet[2668]: I0805 22:01:45.056227 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc9fd34c-c17a-411a-9172-18eab2b9a669-config-volume\") pod \"coredns-5dd5756b68-vkxc5\" (UID: \"dc9fd34c-c17a-411a-9172-18eab2b9a669\") " pod="kube-system/coredns-5dd5756b68-vkxc5" Aug 5 22:01:45.056381 kubelet[2668]: I0805 22:01:45.056259 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e3ea45f-b17f-4dfa-a990-3ed04b0978e6-config-volume\") pod \"coredns-5dd5756b68-5b94x\" (UID: \"5e3ea45f-b17f-4dfa-a990-3ed04b0978e6\") " pod="kube-system/coredns-5dd5756b68-5b94x" Aug 5 22:01:45.056381 kubelet[2668]: I0805 22:01:45.056324 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfrc6\" (UniqueName: \"kubernetes.io/projected/5e3ea45f-b17f-4dfa-a990-3ed04b0978e6-kube-api-access-zfrc6\") pod \"coredns-5dd5756b68-5b94x\" (UID: \"5e3ea45f-b17f-4dfa-a990-3ed04b0978e6\") " pod="kube-system/coredns-5dd5756b68-5b94x" Aug 5 22:01:45.249229 kubelet[2668]: E0805 22:01:45.249106 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:45.249572 kubelet[2668]: E0805 22:01:45.249443 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:45.250581 containerd[1552]: time="2024-08-05T22:01:45.250204977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-5b94x,Uid:5e3ea45f-b17f-4dfa-a990-3ed04b0978e6,Namespace:kube-system,Attempt:0,}" Aug 5 22:01:45.250954 containerd[1552]: time="2024-08-05T22:01:45.250917153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vkxc5,Uid:dc9fd34c-c17a-411a-9172-18eab2b9a669,Namespace:kube-system,Attempt:0,}" Aug 5 22:01:45.716687 kubelet[2668]: E0805 22:01:45.716553 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:45.729036 kubelet[2668]: I0805 22:01:45.728770 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dp4fq" podStartSLOduration=5.764442638 podCreationTimestamp="2024-08-05 22:01:36 +0000 UTC" firstStartedPulling="2024-08-05 22:01:37.408889283 +0000 UTC m=+14.870824310" lastFinishedPulling="2024-08-05 22:01:41.373181682 +0000 UTC m=+18.835116709" observedRunningTime="2024-08-05 22:01:45.728400112 +0000 UTC m=+23.190335139" watchObservedRunningTime="2024-08-05 22:01:45.728735037 +0000 UTC m=+23.190670064" Aug 5 22:01:46.693897 systemd-networkd[1234]: cilium_host: Link UP Aug 5 22:01:46.694114 systemd-networkd[1234]: cilium_net: Link UP Aug 5 22:01:46.694117 systemd-networkd[1234]: cilium_net: Gained carrier Aug 5 22:01:46.694252 systemd-networkd[1234]: cilium_host: Gained carrier Aug 5 22:01:46.694572 systemd-networkd[1234]: cilium_host: Gained IPv6LL Aug 5 22:01:46.719276 kubelet[2668]: E0805 22:01:46.719251 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:46.782701 systemd-networkd[1234]: cilium_vxlan: Link UP Aug 5 22:01:46.782707 systemd-networkd[1234]: cilium_vxlan: Gained carrier Aug 5 22:01:46.942017 systemd-networkd[1234]: cilium_net: Gained IPv6LL Aug 5 22:01:47.082889 kernel: NET: Registered PF_ALG protocol family Aug 5 22:01:47.670930 systemd-networkd[1234]: lxc_health: Link UP Aug 5 22:01:47.687634 systemd-networkd[1234]: lxc_health: Gained carrier Aug 5 22:01:47.720889 kubelet[2668]: E0805 22:01:47.720671 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:47.882016 systemd-networkd[1234]: lxcb08e8df5e783: Link UP Aug 5 22:01:47.891889 kernel: eth0: renamed from tmp7255b Aug 5 22:01:47.907866 systemd-networkd[1234]: lxcb08e8df5e783: Gained carrier Aug 5 22:01:47.908554 systemd-networkd[1234]: lxc7a8e6c73c081: Link UP Aug 5 22:01:47.916962 kernel: eth0: renamed from tmp9876d Aug 5 22:01:47.923116 systemd-networkd[1234]: lxc7a8e6c73c081: Gained carrier Aug 5 22:01:48.726026 systemd-networkd[1234]: cilium_vxlan: Gained IPv6LL Aug 5 22:01:49.302004 systemd-networkd[1234]: lxcb08e8df5e783: Gained IPv6LL Aug 5 22:01:49.351560 kubelet[2668]: E0805 22:01:49.351437 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:49.431014 systemd-networkd[1234]: lxc_health: Gained IPv6LL Aug 5 22:01:49.724249 kubelet[2668]: E0805 22:01:49.724224 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:49.813974 systemd-networkd[1234]: lxc7a8e6c73c081: Gained IPv6LL Aug 5 22:01:50.725973 kubelet[2668]: E0805 22:01:50.725941 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:51.377570 containerd[1552]: time="2024-08-05T22:01:51.377486859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:01:51.377570 containerd[1552]: time="2024-08-05T22:01:51.377546746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:01:51.377570 containerd[1552]: time="2024-08-05T22:01:51.377560587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:01:51.377570 containerd[1552]: time="2024-08-05T22:01:51.377570228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:01:51.378909 containerd[1552]: time="2024-08-05T22:01:51.378264743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:01:51.378909 containerd[1552]: time="2024-08-05T22:01:51.378304707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:01:51.378909 containerd[1552]: time="2024-08-05T22:01:51.378321909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:01:51.378909 containerd[1552]: time="2024-08-05T22:01:51.378334711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:01:51.402007 systemd-resolved[1446]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:01:51.404292 systemd-resolved[1446]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:01:51.422208 containerd[1552]: time="2024-08-05T22:01:51.422173871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vkxc5,Uid:dc9fd34c-c17a-411a-9172-18eab2b9a669,Namespace:kube-system,Attempt:0,} returns sandbox id \"7255bb2e49450ce566eb64f192db8c01c91d4b4a640a49368f3e898799d1afa3\"" Aug 5 22:01:51.422804 kubelet[2668]: E0805 22:01:51.422731 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:51.425061 containerd[1552]: time="2024-08-05T22:01:51.424996615Z" level=info msg="CreateContainer within sandbox \"7255bb2e49450ce566eb64f192db8c01c91d4b4a640a49368f3e898799d1afa3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:01:51.425960 containerd[1552]: time="2024-08-05T22:01:51.425881150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-5b94x,Uid:5e3ea45f-b17f-4dfa-a990-3ed04b0978e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9876d49644d221c6c6f41c2a69880f44f3b76e6e104c2399d6203aea5a8b6d1a\"" Aug 5 22:01:51.426427 kubelet[2668]: E0805 22:01:51.426404 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:51.430243 containerd[1552]: time="2024-08-05T22:01:51.430202856Z" level=info msg="CreateContainer within sandbox \"9876d49644d221c6c6f41c2a69880f44f3b76e6e104c2399d6203aea5a8b6d1a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:01:51.445295 containerd[1552]: time="2024-08-05T22:01:51.445253716Z" level=info msg="CreateContainer within sandbox \"9876d49644d221c6c6f41c2a69880f44f3b76e6e104c2399d6203aea5a8b6d1a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fd4ef66ff1a485fd5f4a716d1ccdbab154130f834ce369f8672c82dd92f57922\"" Aug 5 22:01:51.446515 containerd[1552]: time="2024-08-05T22:01:51.445881544Z" level=info msg="StartContainer for \"fd4ef66ff1a485fd5f4a716d1ccdbab154130f834ce369f8672c82dd92f57922\"" Aug 5 22:01:51.450463 containerd[1552]: time="2024-08-05T22:01:51.450430074Z" level=info msg="CreateContainer within sandbox \"7255bb2e49450ce566eb64f192db8c01c91d4b4a640a49368f3e898799d1afa3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3a423262d850a70b91424d5101a5e3a092af07e93020d5adac4278e0a9e9b35d\"" Aug 5 22:01:51.450822 containerd[1552]: time="2024-08-05T22:01:51.450792713Z" level=info msg="StartContainer for \"3a423262d850a70b91424d5101a5e3a092af07e93020d5adac4278e0a9e9b35d\"" Aug 5 22:01:51.502652 containerd[1552]: time="2024-08-05T22:01:51.502543605Z" level=info msg="StartContainer for 
\"3a423262d850a70b91424d5101a5e3a092af07e93020d5adac4278e0a9e9b35d\" returns successfully" Aug 5 22:01:51.502652 containerd[1552]: time="2024-08-05T22:01:51.502607292Z" level=info msg="StartContainer for \"fd4ef66ff1a485fd5f4a716d1ccdbab154130f834ce369f8672c82dd92f57922\" returns successfully" Aug 5 22:01:51.731841 kubelet[2668]: E0805 22:01:51.731798 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:51.737723 kubelet[2668]: E0805 22:01:51.737686 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:51.755556 kubelet[2668]: I0805 22:01:51.755511 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-5b94x" podStartSLOduration=14.75547672 podCreationTimestamp="2024-08-05 22:01:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:01:51.741419567 +0000 UTC m=+29.203354554" watchObservedRunningTime="2024-08-05 22:01:51.75547672 +0000 UTC m=+29.217411747" Aug 5 22:01:51.756170 kubelet[2668]: I0805 22:01:51.756133 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vkxc5" podStartSLOduration=14.756108068 podCreationTimestamp="2024-08-05 22:01:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:01:51.754026164 +0000 UTC m=+29.215961231" watchObservedRunningTime="2024-08-05 22:01:51.756108068 +0000 UTC m=+29.218043055" Aug 5 22:01:52.740229 kubelet[2668]: E0805 22:01:52.739904 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:52.740229 kubelet[2668]: E0805 22:01:52.739923 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:53.741085 kubelet[2668]: E0805 22:01:53.740704 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:53.741085 kubelet[2668]: E0805 22:01:53.740781 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:01:58.964158 systemd[1]: Started sshd@7-10.0.0.149:22-10.0.0.1:56336.service - OpenSSH per-connection server daemon (10.0.0.1:56336). Aug 5 22:01:58.997271 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 56336 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:01:58.998669 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:01:59.002930 systemd-logind[1535]: New session 8 of user core. Aug 5 22:01:59.010158 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 5 22:01:59.142286 sshd[4078]: pam_unix(sshd:session): session closed for user core Aug 5 22:01:59.145143 systemd[1]: sshd@7-10.0.0.149:22-10.0.0.1:56336.service: Deactivated successfully. 
Aug 5 22:01:59.151829 systemd-logind[1535]: Session 8 logged out. Waiting for processes to exit. Aug 5 22:01:59.153377 systemd[1]: session-8.scope: Deactivated successfully. Aug 5 22:01:59.154405 systemd-logind[1535]: Removed session 8. Aug 5 22:02:04.159092 systemd[1]: Started sshd@8-10.0.0.149:22-10.0.0.1:43536.service - OpenSSH per-connection server daemon (10.0.0.1:43536). Aug 5 22:02:04.187343 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 43536 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:02:04.188574 sshd[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:02:04.191999 systemd-logind[1535]: New session 9 of user core. Aug 5 22:02:04.202139 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 5 22:02:04.311435 sshd[4095]: pam_unix(sshd:session): session closed for user core Aug 5 22:02:04.314487 systemd[1]: sshd@8-10.0.0.149:22-10.0.0.1:43536.service: Deactivated successfully. Aug 5 22:02:04.317052 systemd-logind[1535]: Session 9 logged out. Waiting for processes to exit. Aug 5 22:02:04.317071 systemd[1]: session-9.scope: Deactivated successfully. Aug 5 22:02:04.318891 systemd-logind[1535]: Removed session 9. Aug 5 22:02:09.326080 systemd[1]: Started sshd@9-10.0.0.149:22-10.0.0.1:43550.service - OpenSSH per-connection server daemon (10.0.0.1:43550). Aug 5 22:02:09.355750 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 43550 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:02:09.357106 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:02:09.362216 systemd-logind[1535]: New session 10 of user core. Aug 5 22:02:09.378232 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 5 22:02:09.485928 sshd[4113]: pam_unix(sshd:session): session closed for user core Aug 5 22:02:09.489540 systemd[1]: sshd@9-10.0.0.149:22-10.0.0.1:43550.service: Deactivated successfully. Aug 5 22:02:09.491624 systemd-logind[1535]: Session 10 logged out. Waiting for processes to exit. Aug 5 22:02:09.492180 systemd[1]: session-10.scope: Deactivated successfully. Aug 5 22:02:09.493091 systemd-logind[1535]: Removed session 10. Aug 5 22:02:14.498099 systemd[1]: Started sshd@10-10.0.0.149:22-10.0.0.1:51386.service - OpenSSH per-connection server daemon (10.0.0.1:51386). Aug 5 22:02:14.527143 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 51386 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:02:14.528371 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:02:14.532283 systemd-logind[1535]: New session 11 of user core. Aug 5 22:02:14.543091 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 5 22:02:14.652171 sshd[4129]: pam_unix(sshd:session): session closed for user core Aug 5 22:02:14.655346 systemd[1]: sshd@10-10.0.0.149:22-10.0.0.1:51386.service: Deactivated successfully. Aug 5 22:02:14.658129 systemd[1]: session-11.scope: Deactivated successfully. Aug 5 22:02:14.659098 systemd-logind[1535]: Session 11 logged out. Waiting for processes to exit. Aug 5 22:02:14.668152 systemd[1]: Started sshd@11-10.0.0.149:22-10.0.0.1:51388.service - OpenSSH per-connection server daemon (10.0.0.1:51388). Aug 5 22:02:14.669382 systemd-logind[1535]: Removed session 11. 
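Note: from here on the sshd/systemd-logind entries repeat a single per-connection pattern: an sshd@... unit is started for the incoming connection, the public key for user core is accepted, pam_unix opens the session, logind creates session N, and on disconnect the session is closed, the unit deactivated, and the session removed. Session length can be recovered by pairing the pam_unix "session opened"/"session closed" lines that share an sshd PID. The helper below is a hypothetical illustration (not a shipped tool); its sample entries are abbreviated from session 8 above, and the year is assumed because these short journal timestamps omit it.

import re
from datetime import datetime

# Abbreviated sshd entries for session 8, copied from the journal above.
SAMPLE = [
    "Aug 5 22:01:58.998669 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)",
    "Aug 5 22:01:59.142286 sshd[4078]: pam_unix(sshd:session): session closed for user core",
]

LINE_RE = re.compile(
    r"^(?P<ts>\w{3} +\d+ [\d:.]+) sshd\[(?P<pid>\d+)\]: "
    r"pam_unix\(sshd:session\): session (?P<event>opened|closed)"
)

def parse_ts(ts: str, year: int = 2024) -> datetime:
    # The short journal timestamps carry no year; 2024 is assumed here from
    # the ISO timestamps elsewhere in this log.
    month, day, clock = ts.split()
    return datetime.strptime(f"{year} {month} {day} {clock}", "%Y %b %d %H:%M:%S.%f")

opened = {}
for line in SAMPLE:
    m = LINE_RE.match(line)
    if not m:
        continue
    if m["event"] == "opened":
        opened[m["pid"]] = parse_ts(m["ts"])
    else:
        delta = parse_ts(m["ts"]) - opened.pop(m["pid"])
        print(f"sshd[{m['pid']}]: session lasted {delta.total_seconds():.3f}s")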
Aug 5 22:02:14.699576 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 51388 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:02:14.700728 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:02:14.704922 systemd-logind[1535]: New session 12 of user core. Aug 5 22:02:14.719142 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 5 22:02:15.406452 sshd[4146]: pam_unix(sshd:session): session closed for user core Aug 5 22:02:15.419408 systemd[1]: Started sshd@12-10.0.0.149:22-10.0.0.1:51398.service - OpenSSH per-connection server daemon (10.0.0.1:51398). Aug 5 22:02:15.422110 systemd[1]: sshd@11-10.0.0.149:22-10.0.0.1:51388.service: Deactivated successfully. Aug 5 22:02:15.429358 systemd-logind[1535]: Session 12 logged out. Waiting for processes to exit. Aug 5 22:02:15.437388 systemd[1]: session-12.scope: Deactivated successfully. Aug 5 22:02:15.438745 systemd-logind[1535]: Removed session 12. Aug 5 22:02:15.467768 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 51398 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:02:15.468265 sshd[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:02:15.476552 systemd-logind[1535]: New session 13 of user core. Aug 5 22:02:15.482098 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 5 22:02:15.594813 sshd[4156]: pam_unix(sshd:session): session closed for user core Aug 5 22:02:15.598456 systemd[1]: sshd@12-10.0.0.149:22-10.0.0.1:51398.service: Deactivated successfully. Aug 5 22:02:15.600570 systemd-logind[1535]: Session 13 logged out. Waiting for processes to exit. Aug 5 22:02:15.601040 systemd[1]: session-13.scope: Deactivated successfully. Aug 5 22:02:15.602312 systemd-logind[1535]: Removed session 13. Aug 5 22:02:20.608066 systemd[1]: Started sshd@13-10.0.0.149:22-10.0.0.1:51402.service - OpenSSH per-connection server daemon (10.0.0.1:51402). Aug 5 22:02:20.637443 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 51402 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:02:20.638589 sshd[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:02:20.644877 systemd-logind[1535]: New session 14 of user core. Aug 5 22:02:20.663079 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 5 22:02:20.766775 sshd[4175]: pam_unix(sshd:session): session closed for user core Aug 5 22:02:20.787111 systemd[1]: Started sshd@14-10.0.0.149:22-10.0.0.1:51406.service - OpenSSH per-connection server daemon (10.0.0.1:51406). Aug 5 22:02:20.787505 systemd[1]: sshd@13-10.0.0.149:22-10.0.0.1:51402.service: Deactivated successfully. Aug 5 22:02:20.790715 systemd[1]: session-14.scope: Deactivated successfully. Aug 5 22:02:20.792599 systemd-logind[1535]: Session 14 logged out. Waiting for processes to exit. Aug 5 22:02:20.793624 systemd-logind[1535]: Removed session 14. Aug 5 22:02:20.815176 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 51406 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:02:20.816297 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:02:20.820062 systemd-logind[1535]: New session 15 of user core. Aug 5 22:02:20.830074 systemd[1]: Started session-15.scope - Session 15 of User core. 
Aug 5 22:02:21.022122 sshd[4187]: pam_unix(sshd:session): session closed for user core Aug 5 22:02:21.030082 systemd[1]: Started sshd@15-10.0.0.149:22-10.0.0.1:51414.service - OpenSSH per-connection server daemon (10.0.0.1:51414). Aug 5 22:02:21.030438 systemd[1]: sshd@14-10.0.0.149:22-10.0.0.1:51406.service: Deactivated successfully. Aug 5 22:02:21.033908 systemd-logind[1535]: Session 15 logged out. Waiting for processes to exit. Aug 5 22:02:21.034036 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 22:02:21.035275 systemd-logind[1535]: Removed session 15. Aug 5 22:02:21.061745 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 51414 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:02:21.062914 sshd[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:02:21.066795 systemd-logind[1535]: New session 16 of user core. Aug 5 22:02:21.076057 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 5 22:02:21.811277 sshd[4200]: pam_unix(sshd:session): session closed for user core Aug 5 22:02:21.820362 systemd[1]: Started sshd@16-10.0.0.149:22-10.0.0.1:51426.service - OpenSSH per-connection server daemon (10.0.0.1:51426). Aug 5 22:02:21.821022 systemd[1]: sshd@15-10.0.0.149:22-10.0.0.1:51414.service: Deactivated successfully. Aug 5 22:02:21.828705 systemd[1]: session-16.scope: Deactivated successfully. Aug 5 22:02:21.829561 systemd-logind[1535]: Session 16 logged out. Waiting for processes to exit. Aug 5 22:02:21.832714 systemd-logind[1535]: Removed session 16. Aug 5 22:02:21.862015 sshd[4219]: Accepted publickey for core from 10.0.0.1 port 51426 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:02:21.863304 sshd[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:02:21.867136 systemd-logind[1535]: New session 17 of user core. Aug 5 22:02:21.885108 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 5 22:02:22.154059 sshd[4219]: pam_unix(sshd:session): session closed for user core Aug 5 22:02:22.166140 systemd[1]: Started sshd@17-10.0.0.149:22-10.0.0.1:37298.service - OpenSSH per-connection server daemon (10.0.0.1:37298). Aug 5 22:02:22.166938 systemd[1]: sshd@16-10.0.0.149:22-10.0.0.1:51426.service: Deactivated successfully. Aug 5 22:02:22.168822 systemd[1]: session-17.scope: Deactivated successfully. Aug 5 22:02:22.169625 systemd-logind[1535]: Session 17 logged out. Waiting for processes to exit. Aug 5 22:02:22.171263 systemd-logind[1535]: Removed session 17. Aug 5 22:02:22.198406 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 37298 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:02:22.199613 sshd[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:02:22.203821 systemd-logind[1535]: New session 18 of user core. Aug 5 22:02:22.217156 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 5 22:02:22.325677 sshd[4234]: pam_unix(sshd:session): session closed for user core Aug 5 22:02:22.328757 systemd[1]: sshd@17-10.0.0.149:22-10.0.0.1:37298.service: Deactivated successfully. Aug 5 22:02:22.330730 systemd-logind[1535]: Session 18 logged out. Waiting for processes to exit. Aug 5 22:02:22.330786 systemd[1]: session-18.scope: Deactivated successfully. Aug 5 22:02:22.331800 systemd-logind[1535]: Removed session 18. Aug 5 22:02:27.340132 systemd[1]: Started sshd@18-10.0.0.149:22-10.0.0.1:37310.service - OpenSSH per-connection server daemon (10.0.0.1:37310). 
Aug 5 22:02:27.373718 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 37310 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:02:27.375089 sshd[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:02:27.380248 systemd-logind[1535]: New session 19 of user core. Aug 5 22:02:27.397132 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 5 22:02:27.518831 sshd[4257]: pam_unix(sshd:session): session closed for user core Aug 5 22:02:27.521705 systemd[1]: sshd@18-10.0.0.149:22-10.0.0.1:37310.service: Deactivated successfully. Aug 5 22:02:27.524550 systemd[1]: session-19.scope: Deactivated successfully. Aug 5 22:02:27.525113 systemd-logind[1535]: Session 19 logged out. Waiting for processes to exit. Aug 5 22:02:27.526271 systemd-logind[1535]: Removed session 19. Aug 5 22:02:29.640523 kubelet[2668]: E0805 22:02:29.640475 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:02:32.532116 systemd[1]: Started sshd@19-10.0.0.149:22-10.0.0.1:47422.service - OpenSSH per-connection server daemon (10.0.0.1:47422). Aug 5 22:02:32.560316 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 47422 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:02:32.561551 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:02:32.565048 systemd-logind[1535]: New session 20 of user core. Aug 5 22:02:32.573124 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 5 22:02:32.678474 sshd[4273]: pam_unix(sshd:session): session closed for user core Aug 5 22:02:32.681851 systemd[1]: sshd@19-10.0.0.149:22-10.0.0.1:47422.service: Deactivated successfully. Aug 5 22:02:32.687503 systemd[1]: session-20.scope: Deactivated successfully. Aug 5 22:02:32.687581 systemd-logind[1535]: Session 20 logged out. Waiting for processes to exit. Aug 5 22:02:32.689027 systemd-logind[1535]: Removed session 20. Aug 5 22:02:33.639715 kubelet[2668]: E0805 22:02:33.639640 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:02:37.694161 systemd[1]: Started sshd@20-10.0.0.149:22-10.0.0.1:47428.service - OpenSSH per-connection server daemon (10.0.0.1:47428). Aug 5 22:02:37.724718 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 47428 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:02:37.726445 sshd[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:02:37.731099 systemd-logind[1535]: New session 21 of user core. Aug 5 22:02:37.743181 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 5 22:02:37.850205 sshd[4290]: pam_unix(sshd:session): session closed for user core Aug 5 22:02:37.861143 systemd[1]: Started sshd@21-10.0.0.149:22-10.0.0.1:47430.service - OpenSSH per-connection server daemon (10.0.0.1:47430). Aug 5 22:02:37.861533 systemd[1]: sshd@20-10.0.0.149:22-10.0.0.1:47428.service: Deactivated successfully. Aug 5 22:02:37.864625 systemd[1]: session-21.scope: Deactivated successfully. Aug 5 22:02:37.864761 systemd-logind[1535]: Session 21 logged out. Waiting for processes to exit. Aug 5 22:02:37.866423 systemd-logind[1535]: Removed session 21. 
Aug 5 22:02:37.890788 sshd[4302]: Accepted publickey for core from 10.0.0.1 port 47430 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:02:37.892152 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:02:37.896361 systemd-logind[1535]: New session 22 of user core. Aug 5 22:02:37.908228 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 5 22:02:39.668279 containerd[1552]: time="2024-08-05T22:02:39.668209970Z" level=info msg="StopContainer for \"bacc6660a09914d2101092c0bfda428e5f62e20d195a223e3216e11209045b60\" with timeout 30 (s)" Aug 5 22:02:39.691225 containerd[1552]: time="2024-08-05T22:02:39.690214663Z" level=info msg="Stop container \"bacc6660a09914d2101092c0bfda428e5f62e20d195a223e3216e11209045b60\" with signal terminated" Aug 5 22:02:39.707380 containerd[1552]: time="2024-08-05T22:02:39.707323594Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 22:02:39.717429 containerd[1552]: time="2024-08-05T22:02:39.717311637Z" level=info msg="StopContainer for \"0ce7b71a1929aaadefa7f339843494ad2968178e8c4d4a570a7c0e5ccb04d092\" with timeout 2 (s)" Aug 5 22:02:39.717758 containerd[1552]: time="2024-08-05T22:02:39.717727750Z" level=info msg="Stop container \"0ce7b71a1929aaadefa7f339843494ad2968178e8c4d4a570a7c0e5ccb04d092\" with signal terminated" Aug 5 22:02:39.724381 systemd-networkd[1234]: lxc_health: Link DOWN Aug 5 22:02:39.724388 systemd-networkd[1234]: lxc_health: Lost carrier Aug 5 22:02:39.737573 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bacc6660a09914d2101092c0bfda428e5f62e20d195a223e3216e11209045b60-rootfs.mount: Deactivated successfully. Aug 5 22:02:39.750631 containerd[1552]: time="2024-08-05T22:02:39.750461035Z" level=info msg="shim disconnected" id=bacc6660a09914d2101092c0bfda428e5f62e20d195a223e3216e11209045b60 namespace=k8s.io Aug 5 22:02:39.750631 containerd[1552]: time="2024-08-05T22:02:39.750519474Z" level=warning msg="cleaning up after shim disconnected" id=bacc6660a09914d2101092c0bfda428e5f62e20d195a223e3216e11209045b60 namespace=k8s.io Aug 5 22:02:39.750631 containerd[1552]: time="2024-08-05T22:02:39.750529954Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:02:39.770192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ce7b71a1929aaadefa7f339843494ad2968178e8c4d4a570a7c0e5ccb04d092-rootfs.mount: Deactivated successfully. 
Aug 5 22:02:39.775274 containerd[1552]: time="2024-08-05T22:02:39.775235085Z" level=info msg="StopContainer for \"bacc6660a09914d2101092c0bfda428e5f62e20d195a223e3216e11209045b60\" returns successfully" Aug 5 22:02:39.775614 containerd[1552]: time="2024-08-05T22:02:39.775542800Z" level=info msg="shim disconnected" id=0ce7b71a1929aaadefa7f339843494ad2968178e8c4d4a570a7c0e5ccb04d092 namespace=k8s.io Aug 5 22:02:39.775614 containerd[1552]: time="2024-08-05T22:02:39.775585560Z" level=warning msg="cleaning up after shim disconnected" id=0ce7b71a1929aaadefa7f339843494ad2968178e8c4d4a570a7c0e5ccb04d092 namespace=k8s.io Aug 5 22:02:39.775614 containerd[1552]: time="2024-08-05T22:02:39.775596600Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:02:39.775930 containerd[1552]: time="2024-08-05T22:02:39.775907115Z" level=info msg="StopPodSandbox for \"cc6d9438c117ae9dc9933ee07982d14a54ad2e1cfba29f076213728abc8ca532\"" Aug 5 22:02:39.778666 containerd[1552]: time="2024-08-05T22:02:39.775951434Z" level=info msg="Container to stop \"bacc6660a09914d2101092c0bfda428e5f62e20d195a223e3216e11209045b60\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 22:02:39.781671 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc6d9438c117ae9dc9933ee07982d14a54ad2e1cfba29f076213728abc8ca532-shm.mount: Deactivated successfully. Aug 5 22:02:39.792436 containerd[1552]: time="2024-08-05T22:02:39.792325656Z" level=info msg="StopContainer for \"0ce7b71a1929aaadefa7f339843494ad2968178e8c4d4a570a7c0e5ccb04d092\" returns successfully" Aug 5 22:02:39.793035 containerd[1552]: time="2024-08-05T22:02:39.792980646Z" level=info msg="StopPodSandbox for \"fdf4dc3ea8312f821a735b55921efd1597e12ea7a796ca484e742a036ba4d63c\"" Aug 5 22:02:39.793112 containerd[1552]: time="2024-08-05T22:02:39.793021805Z" level=info msg="Container to stop \"9e53d1d94203b0067a5015e2e09642bbc3f853d4a839596c878ad32a3e09782e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 22:02:39.793112 containerd[1552]: time="2024-08-05T22:02:39.793056685Z" level=info msg="Container to stop \"3bd94a84be931cf8fc875e5b0d67c69d9dac15c14d1afa1c8b2197aadb76d145\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 22:02:39.793112 containerd[1552]: time="2024-08-05T22:02:39.793066125Z" level=info msg="Container to stop \"9dea1839eb3da474be99adc00ee61e9f6148ac2ac31c6f7bafe429fc69516592\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 22:02:39.793112 containerd[1552]: time="2024-08-05T22:02:39.793075285Z" level=info msg="Container to stop \"0ce7b71a1929aaadefa7f339843494ad2968178e8c4d4a570a7c0e5ccb04d092\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 22:02:39.793112 containerd[1552]: time="2024-08-05T22:02:39.793084284Z" level=info msg="Container to stop \"92f914cf69f55157da7946ce993bac112252951c16d1196db0a8f8741ca7d177\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 22:02:39.794921 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fdf4dc3ea8312f821a735b55921efd1597e12ea7a796ca484e742a036ba4d63c-shm.mount: Deactivated successfully. 
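Note: the teardown above asks containerd to stop the cilium-operator container with a 30 s grace period and the cilium-agent container with a 2 s grace period, sends SIGTERM ("with signal terminated"), and both shims disconnect long before either timeout would force a harder kill. Below is a quick worked check of how fast the operator container actually exited, using two timestamps copied from the entries above.

from datetime import datetime

# containerd timestamps copied from the entries above (RFC 3339, UTC).
stop_requested  = "2024-08-05T22:02:39.668209970Z"   # StopContainer for bacc6660... (timeout 30 s)
shim_disconnect = "2024-08-05T22:02:39.750461035Z"   # shim disconnected id=bacc6660...

def parse(ts: str) -> datetime:
    # strptime's %f accepts at most six fractional digits, so trim the
    # nanosecond part and drop the trailing "Z".
    return datetime.strptime(ts[:26], "%Y-%m-%dT%H:%M:%S.%f")

elapsed = (parse(shim_disconnect) - parse(stop_requested)).total_seconds()
print(f"cilium-operator exited {elapsed * 1000:.0f} ms after StopContainer "
      f"(well inside the 30 s grace period)")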
Aug 5 22:02:39.815445 containerd[1552]: time="2024-08-05T22:02:39.815240496Z" level=info msg="shim disconnected" id=cc6d9438c117ae9dc9933ee07982d14a54ad2e1cfba29f076213728abc8ca532 namespace=k8s.io Aug 5 22:02:39.815445 containerd[1552]: time="2024-08-05T22:02:39.815299575Z" level=warning msg="cleaning up after shim disconnected" id=cc6d9438c117ae9dc9933ee07982d14a54ad2e1cfba29f076213728abc8ca532 namespace=k8s.io Aug 5 22:02:39.815445 containerd[1552]: time="2024-08-05T22:02:39.815307975Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:02:39.827978 containerd[1552]: time="2024-08-05T22:02:39.827896337Z" level=info msg="shim disconnected" id=fdf4dc3ea8312f821a735b55921efd1597e12ea7a796ca484e742a036ba4d63c namespace=k8s.io Aug 5 22:02:39.827978 containerd[1552]: time="2024-08-05T22:02:39.827954736Z" level=warning msg="cleaning up after shim disconnected" id=fdf4dc3ea8312f821a735b55921efd1597e12ea7a796ca484e742a036ba4d63c namespace=k8s.io Aug 5 22:02:39.827978 containerd[1552]: time="2024-08-05T22:02:39.827964535Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:02:39.838261 containerd[1552]: time="2024-08-05T22:02:39.838173375Z" level=info msg="TearDown network for sandbox \"cc6d9438c117ae9dc9933ee07982d14a54ad2e1cfba29f076213728abc8ca532\" successfully" Aug 5 22:02:39.838261 containerd[1552]: time="2024-08-05T22:02:39.838210974Z" level=info msg="StopPodSandbox for \"cc6d9438c117ae9dc9933ee07982d14a54ad2e1cfba29f076213728abc8ca532\" returns successfully" Aug 5 22:02:39.842247 containerd[1552]: time="2024-08-05T22:02:39.841932756Z" level=info msg="TearDown network for sandbox \"fdf4dc3ea8312f821a735b55921efd1597e12ea7a796ca484e742a036ba4d63c\" successfully" Aug 5 22:02:39.842247 containerd[1552]: time="2024-08-05T22:02:39.841970515Z" level=info msg="StopPodSandbox for \"fdf4dc3ea8312f821a735b55921efd1597e12ea7a796ca484e742a036ba4d63c\" returns successfully" Aug 5 22:02:39.860167 kubelet[2668]: I0805 22:02:39.860133 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-host-proc-sys-kernel\") pod \"f653ea32-a506-448a-9bb9-9e5c58285393\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " Aug 5 22:02:39.860167 kubelet[2668]: I0805 22:02:39.860175 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-lib-modules\") pod \"f653ea32-a506-448a-9bb9-9e5c58285393\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " Aug 5 22:02:39.860585 kubelet[2668]: I0805 22:02:39.860200 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw9v6\" (UniqueName: \"kubernetes.io/projected/51f38ed6-9515-4a6b-94c2-ffd99f45174c-kube-api-access-lw9v6\") pod \"51f38ed6-9515-4a6b-94c2-ffd99f45174c\" (UID: \"51f38ed6-9515-4a6b-94c2-ffd99f45174c\") " Aug 5 22:02:39.860585 kubelet[2668]: I0805 22:02:39.860222 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-host-proc-sys-net\") pod \"f653ea32-a506-448a-9bb9-9e5c58285393\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " Aug 5 22:02:39.860585 kubelet[2668]: I0805 22:02:39.860239 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-xtables-lock\") pod \"f653ea32-a506-448a-9bb9-9e5c58285393\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " Aug 5 22:02:39.860585 kubelet[2668]: I0805 22:02:39.860232 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f653ea32-a506-448a-9bb9-9e5c58285393" (UID: "f653ea32-a506-448a-9bb9-9e5c58285393"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:02:39.860585 kubelet[2668]: I0805 22:02:39.860257 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-bpf-maps\") pod \"f653ea32-a506-448a-9bb9-9e5c58285393\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " Aug 5 22:02:39.860585 kubelet[2668]: I0805 22:02:39.860274 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-cilium-run\") pod \"f653ea32-a506-448a-9bb9-9e5c58285393\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " Aug 5 22:02:39.860717 kubelet[2668]: I0805 22:02:39.860290 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-hostproc\") pod \"f653ea32-a506-448a-9bb9-9e5c58285393\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " Aug 5 22:02:39.860717 kubelet[2668]: I0805 22:02:39.860287 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f653ea32-a506-448a-9bb9-9e5c58285393" (UID: "f653ea32-a506-448a-9bb9-9e5c58285393"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:02:39.860717 kubelet[2668]: I0805 22:02:39.860306 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f653ea32-a506-448a-9bb9-9e5c58285393" (UID: "f653ea32-a506-448a-9bb9-9e5c58285393"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:02:39.860717 kubelet[2668]: I0805 22:02:39.860310 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f653ea32-a506-448a-9bb9-9e5c58285393-cilium-config-path\") pod \"f653ea32-a506-448a-9bb9-9e5c58285393\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " Aug 5 22:02:39.860717 kubelet[2668]: I0805 22:02:39.860349 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fsc7\" (UniqueName: \"kubernetes.io/projected/f653ea32-a506-448a-9bb9-9e5c58285393-kube-api-access-8fsc7\") pod \"f653ea32-a506-448a-9bb9-9e5c58285393\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " Aug 5 22:02:39.860848 kubelet[2668]: I0805 22:02:39.860371 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-cilium-cgroup\") pod \"f653ea32-a506-448a-9bb9-9e5c58285393\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " Aug 5 22:02:39.860848 kubelet[2668]: I0805 22:02:39.860393 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51f38ed6-9515-4a6b-94c2-ffd99f45174c-cilium-config-path\") pod \"51f38ed6-9515-4a6b-94c2-ffd99f45174c\" (UID: \"51f38ed6-9515-4a6b-94c2-ffd99f45174c\") " Aug 5 22:02:39.860848 kubelet[2668]: I0805 22:02:39.860411 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-etc-cni-netd\") pod \"f653ea32-a506-448a-9bb9-9e5c58285393\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " Aug 5 22:02:39.860848 kubelet[2668]: I0805 22:02:39.860432 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f653ea32-a506-448a-9bb9-9e5c58285393-clustermesh-secrets\") pod \"f653ea32-a506-448a-9bb9-9e5c58285393\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " Aug 5 22:02:39.860848 kubelet[2668]: I0805 22:02:39.860449 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-cni-path\") pod \"f653ea32-a506-448a-9bb9-9e5c58285393\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " Aug 5 22:02:39.860848 kubelet[2668]: I0805 22:02:39.860467 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f653ea32-a506-448a-9bb9-9e5c58285393-hubble-tls\") pod \"f653ea32-a506-448a-9bb9-9e5c58285393\" (UID: \"f653ea32-a506-448a-9bb9-9e5c58285393\") " Aug 5 22:02:39.860993 kubelet[2668]: I0805 22:02:39.860501 2668 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 5 22:02:39.860993 kubelet[2668]: I0805 22:02:39.860513 2668 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 5 22:02:39.860993 kubelet[2668]: I0805 22:02:39.860525 2668 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 5 22:02:39.862213 kubelet[2668]: I0805 22:02:39.861102 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f653ea32-a506-448a-9bb9-9e5c58285393" (UID: "f653ea32-a506-448a-9bb9-9e5c58285393"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:02:39.862213 kubelet[2668]: I0805 22:02:39.861145 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f653ea32-a506-448a-9bb9-9e5c58285393" (UID: "f653ea32-a506-448a-9bb9-9e5c58285393"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:02:39.862213 kubelet[2668]: I0805 22:02:39.861167 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f653ea32-a506-448a-9bb9-9e5c58285393" (UID: "f653ea32-a506-448a-9bb9-9e5c58285393"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:02:39.862213 kubelet[2668]: I0805 22:02:39.861993 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f653ea32-a506-448a-9bb9-9e5c58285393" (UID: "f653ea32-a506-448a-9bb9-9e5c58285393"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:02:39.862434 kubelet[2668]: I0805 22:02:39.862226 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-cni-path" (OuterVolumeSpecName: "cni-path") pod "f653ea32-a506-448a-9bb9-9e5c58285393" (UID: "f653ea32-a506-448a-9bb9-9e5c58285393"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:02:39.862514 kubelet[2668]: I0805 22:02:39.862488 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-hostproc" (OuterVolumeSpecName: "hostproc") pod "f653ea32-a506-448a-9bb9-9e5c58285393" (UID: "f653ea32-a506-448a-9bb9-9e5c58285393"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:02:39.862808 kubelet[2668]: I0805 22:02:39.862784 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f653ea32-a506-448a-9bb9-9e5c58285393-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f653ea32-a506-448a-9bb9-9e5c58285393" (UID: "f653ea32-a506-448a-9bb9-9e5c58285393"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 5 22:02:39.862998 kubelet[2668]: I0805 22:02:39.862974 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f653ea32-a506-448a-9bb9-9e5c58285393" (UID: "f653ea32-a506-448a-9bb9-9e5c58285393"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:02:39.863607 kubelet[2668]: I0805 22:02:39.863566 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f653ea32-a506-448a-9bb9-9e5c58285393-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f653ea32-a506-448a-9bb9-9e5c58285393" (UID: "f653ea32-a506-448a-9bb9-9e5c58285393"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 5 22:02:39.866308 kubelet[2668]: I0805 22:02:39.866261 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51f38ed6-9515-4a6b-94c2-ffd99f45174c-kube-api-access-lw9v6" (OuterVolumeSpecName: "kube-api-access-lw9v6") pod "51f38ed6-9515-4a6b-94c2-ffd99f45174c" (UID: "51f38ed6-9515-4a6b-94c2-ffd99f45174c"). InnerVolumeSpecName "kube-api-access-lw9v6". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 5 22:02:39.866930 kubelet[2668]: I0805 22:02:39.866849 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f653ea32-a506-448a-9bb9-9e5c58285393-kube-api-access-8fsc7" (OuterVolumeSpecName: "kube-api-access-8fsc7") pod "f653ea32-a506-448a-9bb9-9e5c58285393" (UID: "f653ea32-a506-448a-9bb9-9e5c58285393"). InnerVolumeSpecName "kube-api-access-8fsc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 5 22:02:39.867319 kubelet[2668]: I0805 22:02:39.867296 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51f38ed6-9515-4a6b-94c2-ffd99f45174c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "51f38ed6-9515-4a6b-94c2-ffd99f45174c" (UID: "51f38ed6-9515-4a6b-94c2-ffd99f45174c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 5 22:02:39.867737 kubelet[2668]: I0805 22:02:39.867708 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f653ea32-a506-448a-9bb9-9e5c58285393-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f653ea32-a506-448a-9bb9-9e5c58285393" (UID: "f653ea32-a506-448a-9bb9-9e5c58285393"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 5 22:02:39.961130 kubelet[2668]: I0805 22:02:39.961015 2668 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lw9v6\" (UniqueName: \"kubernetes.io/projected/51f38ed6-9515-4a6b-94c2-ffd99f45174c-kube-api-access-lw9v6\") on node \"localhost\" DevicePath \"\"" Aug 5 22:02:39.961130 kubelet[2668]: I0805 22:02:39.961055 2668 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 5 22:02:39.961130 kubelet[2668]: I0805 22:02:39.961068 2668 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 5 22:02:39.961130 kubelet[2668]: I0805 22:02:39.961079 2668 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 5 22:02:39.961130 kubelet[2668]: I0805 22:02:39.961088 2668 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 5 22:02:39.961130 kubelet[2668]: I0805 22:02:39.961097 2668 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f653ea32-a506-448a-9bb9-9e5c58285393-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 5 22:02:39.961130 kubelet[2668]: I0805 22:02:39.961108 2668 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8fsc7\" (UniqueName: \"kubernetes.io/projected/f653ea32-a506-448a-9bb9-9e5c58285393-kube-api-access-8fsc7\") on node \"localhost\" DevicePath \"\"" Aug 5 22:02:39.961130 kubelet[2668]: I0805 22:02:39.961116 2668 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 5 22:02:39.962015 kubelet[2668]: I0805 22:02:39.961873 2668 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51f38ed6-9515-4a6b-94c2-ffd99f45174c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 5 22:02:39.962015 kubelet[2668]: I0805 22:02:39.961898 2668 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 5 22:02:39.962015 kubelet[2668]: I0805 22:02:39.961908 2668 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f653ea32-a506-448a-9bb9-9e5c58285393-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 5 22:02:39.962015 kubelet[2668]: I0805 22:02:39.961917 2668 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f653ea32-a506-448a-9bb9-9e5c58285393-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 5 22:02:39.962015 kubelet[2668]: I0805 22:02:39.961927 2668 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f653ea32-a506-448a-9bb9-9e5c58285393-hubble-tls\") on node \"localhost\" DevicePath \"\"" 
Aug 5 22:02:40.694626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc6d9438c117ae9dc9933ee07982d14a54ad2e1cfba29f076213728abc8ca532-rootfs.mount: Deactivated successfully. Aug 5 22:02:40.694791 systemd[1]: var-lib-kubelet-pods-51f38ed6\x2d9515\x2d4a6b\x2d94c2\x2dffd99f45174c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlw9v6.mount: Deactivated successfully. Aug 5 22:02:40.694899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdf4dc3ea8312f821a735b55921efd1597e12ea7a796ca484e742a036ba4d63c-rootfs.mount: Deactivated successfully. Aug 5 22:02:40.694986 systemd[1]: var-lib-kubelet-pods-f653ea32\x2da506\x2d448a\x2d9bb9\x2d9e5c58285393-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8fsc7.mount: Deactivated successfully. Aug 5 22:02:40.695069 systemd[1]: var-lib-kubelet-pods-f653ea32\x2da506\x2d448a\x2d9bb9\x2d9e5c58285393-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 5 22:02:40.695152 systemd[1]: var-lib-kubelet-pods-f653ea32\x2da506\x2d448a\x2d9bb9\x2d9e5c58285393-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 5 22:02:40.837273 kubelet[2668]: I0805 22:02:40.836913 2668 scope.go:117] "RemoveContainer" containerID="bacc6660a09914d2101092c0bfda428e5f62e20d195a223e3216e11209045b60" Aug 5 22:02:40.839138 containerd[1552]: time="2024-08-05T22:02:40.839094462Z" level=info msg="RemoveContainer for \"bacc6660a09914d2101092c0bfda428e5f62e20d195a223e3216e11209045b60\"" Aug 5 22:02:40.845108 containerd[1552]: time="2024-08-05T22:02:40.845070058Z" level=info msg="RemoveContainer for \"bacc6660a09914d2101092c0bfda428e5f62e20d195a223e3216e11209045b60\" returns successfully" Aug 5 22:02:40.845364 kubelet[2668]: I0805 22:02:40.845307 2668 scope.go:117] "RemoveContainer" containerID="0ce7b71a1929aaadefa7f339843494ad2968178e8c4d4a570a7c0e5ccb04d092" Aug 5 22:02:40.847781 containerd[1552]: time="2024-08-05T22:02:40.847422026Z" level=info msg="RemoveContainer for \"0ce7b71a1929aaadefa7f339843494ad2968178e8c4d4a570a7c0e5ccb04d092\"" Aug 5 22:02:40.849928 containerd[1552]: time="2024-08-05T22:02:40.849896231Z" level=info msg="RemoveContainer for \"0ce7b71a1929aaadefa7f339843494ad2968178e8c4d4a570a7c0e5ccb04d092\" returns successfully" Aug 5 22:02:40.850179 kubelet[2668]: I0805 22:02:40.850033 2668 scope.go:117] "RemoveContainer" containerID="92f914cf69f55157da7946ce993bac112252951c16d1196db0a8f8741ca7d177" Aug 5 22:02:40.851063 containerd[1552]: time="2024-08-05T22:02:40.851029535Z" level=info msg="RemoveContainer for \"92f914cf69f55157da7946ce993bac112252951c16d1196db0a8f8741ca7d177\"" Aug 5 22:02:40.855263 containerd[1552]: time="2024-08-05T22:02:40.855214236Z" level=info msg="RemoveContainer for \"92f914cf69f55157da7946ce993bac112252951c16d1196db0a8f8741ca7d177\" returns successfully" Aug 5 22:02:40.855645 kubelet[2668]: I0805 22:02:40.855617 2668 scope.go:117] "RemoveContainer" containerID="9dea1839eb3da474be99adc00ee61e9f6148ac2ac31c6f7bafe429fc69516592" Aug 5 22:02:40.858199 containerd[1552]: time="2024-08-05T22:02:40.857959518Z" level=info msg="RemoveContainer for \"9dea1839eb3da474be99adc00ee61e9f6148ac2ac31c6f7bafe429fc69516592\"" Aug 5 22:02:40.860220 containerd[1552]: time="2024-08-05T22:02:40.860191207Z" level=info msg="RemoveContainer for \"9dea1839eb3da474be99adc00ee61e9f6148ac2ac31c6f7bafe429fc69516592\" returns successfully" Aug 5 22:02:40.860460 kubelet[2668]: I0805 22:02:40.860436 2668 scope.go:117] "RemoveContainer" 
containerID="3bd94a84be931cf8fc875e5b0d67c69d9dac15c14d1afa1c8b2197aadb76d145" Aug 5 22:02:40.861594 containerd[1552]: time="2024-08-05T22:02:40.861371870Z" level=info msg="RemoveContainer for \"3bd94a84be931cf8fc875e5b0d67c69d9dac15c14d1afa1c8b2197aadb76d145\"" Aug 5 22:02:40.863454 containerd[1552]: time="2024-08-05T22:02:40.863372562Z" level=info msg="RemoveContainer for \"3bd94a84be931cf8fc875e5b0d67c69d9dac15c14d1afa1c8b2197aadb76d145\" returns successfully" Aug 5 22:02:40.863574 kubelet[2668]: I0805 22:02:40.863562 2668 scope.go:117] "RemoveContainer" containerID="9e53d1d94203b0067a5015e2e09642bbc3f853d4a839596c878ad32a3e09782e" Aug 5 22:02:40.864499 containerd[1552]: time="2024-08-05T22:02:40.864407587Z" level=info msg="RemoveContainer for \"9e53d1d94203b0067a5015e2e09642bbc3f853d4a839596c878ad32a3e09782e\"" Aug 5 22:02:40.866665 containerd[1552]: time="2024-08-05T22:02:40.866629476Z" level=info msg="RemoveContainer for \"9e53d1d94203b0067a5015e2e09642bbc3f853d4a839596c878ad32a3e09782e\" returns successfully" Aug 5 22:02:41.618456 sshd[4302]: pam_unix(sshd:session): session closed for user core Aug 5 22:02:41.627143 systemd[1]: Started sshd@22-10.0.0.149:22-10.0.0.1:47446.service - OpenSSH per-connection server daemon (10.0.0.1:47446). Aug 5 22:02:41.627565 systemd[1]: sshd@21-10.0.0.149:22-10.0.0.1:47430.service: Deactivated successfully. Aug 5 22:02:41.630800 systemd-logind[1535]: Session 22 logged out. Waiting for processes to exit. Aug 5 22:02:41.631602 systemd[1]: session-22.scope: Deactivated successfully. Aug 5 22:02:41.633342 systemd-logind[1535]: Removed session 22. Aug 5 22:02:41.657147 sshd[4470]: Accepted publickey for core from 10.0.0.1 port 47446 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:02:41.658565 sshd[4470]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:02:41.662443 systemd-logind[1535]: New session 23 of user core. Aug 5 22:02:41.671127 systemd[1]: Started session-23.scope - Session 23 of User core. 
Aug 5 22:02:42.148248 sshd[4470]: pam_unix(sshd:session): session closed for user core Aug 5 22:02:42.161119 kubelet[2668]: I0805 22:02:42.160758 2668 topology_manager.go:215] "Topology Admit Handler" podUID="b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf" podNamespace="kube-system" podName="cilium-qfjgw" Aug 5 22:02:42.161119 kubelet[2668]: E0805 22:02:42.160815 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f653ea32-a506-448a-9bb9-9e5c58285393" containerName="apply-sysctl-overwrites" Aug 5 22:02:42.161119 kubelet[2668]: E0805 22:02:42.160826 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f653ea32-a506-448a-9bb9-9e5c58285393" containerName="mount-bpf-fs" Aug 5 22:02:42.161119 kubelet[2668]: E0805 22:02:42.160834 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f653ea32-a506-448a-9bb9-9e5c58285393" containerName="clean-cilium-state" Aug 5 22:02:42.161119 kubelet[2668]: E0805 22:02:42.160841 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f653ea32-a506-448a-9bb9-9e5c58285393" containerName="mount-cgroup" Aug 5 22:02:42.161119 kubelet[2668]: E0805 22:02:42.160848 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f653ea32-a506-448a-9bb9-9e5c58285393" containerName="cilium-agent" Aug 5 22:02:42.161119 kubelet[2668]: E0805 22:02:42.160878 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="51f38ed6-9515-4a6b-94c2-ffd99f45174c" containerName="cilium-operator" Aug 5 22:02:42.161119 kubelet[2668]: I0805 22:02:42.160903 2668 memory_manager.go:346] "RemoveStaleState removing state" podUID="f653ea32-a506-448a-9bb9-9e5c58285393" containerName="cilium-agent" Aug 5 22:02:42.161119 kubelet[2668]: I0805 22:02:42.160911 2668 memory_manager.go:346] "RemoveStaleState removing state" podUID="51f38ed6-9515-4a6b-94c2-ffd99f45174c" containerName="cilium-operator" Aug 5 22:02:42.164883 systemd[1]: Started sshd@23-10.0.0.149:22-10.0.0.1:55778.service - OpenSSH per-connection server daemon (10.0.0.1:55778). Aug 5 22:02:42.165300 systemd[1]: sshd@22-10.0.0.149:22-10.0.0.1:47446.service: Deactivated successfully. Aug 5 22:02:42.184141 systemd[1]: session-23.scope: Deactivated successfully. Aug 5 22:02:42.191348 systemd-logind[1535]: Session 23 logged out. Waiting for processes to exit. Aug 5 22:02:42.192534 systemd-logind[1535]: Removed session 23. Aug 5 22:02:42.220174 sshd[4484]: Accepted publickey for core from 10.0.0.1 port 55778 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:02:42.221642 sshd[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:02:42.225667 systemd-logind[1535]: New session 24 of user core. Aug 5 22:02:42.237193 systemd[1]: Started session-24.scope - Session 24 of User core. 
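Interleaved with the pod admission entries is the sshd/systemd-logind session churn (session 22 closed, sessions 23 and 24 opened in turn). As a small side illustration with the same stdin, one-entry-per-line assumption, the sketch below tracks "New session" against "Removed session" lines and lists any session that was opened but never cleaned up.

```python
import re
import sys

# systemd-logind messages as seen above, e.g.
#   "systemd-logind[1535]: New session 24 of user core."
#   "systemd-logind[1535]: Removed session 23."
NEW = re.compile(r"systemd-logind\[\d+\]: New session (\d+) of user (\S+)\.")
REMOVED = re.compile(r"systemd-logind\[\d+\]: Removed session (\d+)\.")

def open_sessions(lines):
    """Sessions opened by systemd-logind that were never removed."""
    active = {}
    for line in lines:
        m = NEW.search(line)
        if m:
            active[m.group(1)] = m.group(2)
            continue
        m = REMOVED.search(line)
        if m:
            active.pop(m.group(1), None)
    return active

if __name__ == "__main__":
    for sid, user in sorted(open_sessions(sys.stdin).items(), key=lambda kv: int(kv[0])):
        print(f"session {sid} ({user}) still open")
```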
Aug 5 22:02:42.274405 kubelet[2668]: I0805 22:02:42.274366 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf-bpf-maps\") pod \"cilium-qfjgw\" (UID: \"b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf\") " pod="kube-system/cilium-qfjgw" Aug 5 22:02:42.274405 kubelet[2668]: I0805 22:02:42.274416 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf-etc-cni-netd\") pod \"cilium-qfjgw\" (UID: \"b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf\") " pod="kube-system/cilium-qfjgw" Aug 5 22:02:42.274835 kubelet[2668]: I0805 22:02:42.274442 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf-lib-modules\") pod \"cilium-qfjgw\" (UID: \"b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf\") " pod="kube-system/cilium-qfjgw" Aug 5 22:02:42.274835 kubelet[2668]: I0805 22:02:42.274505 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf-host-proc-sys-net\") pod \"cilium-qfjgw\" (UID: \"b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf\") " pod="kube-system/cilium-qfjgw" Aug 5 22:02:42.274835 kubelet[2668]: I0805 22:02:42.274541 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf-hostproc\") pod \"cilium-qfjgw\" (UID: \"b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf\") " pod="kube-system/cilium-qfjgw" Aug 5 22:02:42.274835 kubelet[2668]: I0805 22:02:42.274564 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf-xtables-lock\") pod \"cilium-qfjgw\" (UID: \"b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf\") " pod="kube-system/cilium-qfjgw" Aug 5 22:02:42.274835 kubelet[2668]: I0805 22:02:42.274620 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf-cilium-run\") pod \"cilium-qfjgw\" (UID: \"b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf\") " pod="kube-system/cilium-qfjgw" Aug 5 22:02:42.274835 kubelet[2668]: I0805 22:02:42.274644 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxg64\" (UniqueName: \"kubernetes.io/projected/b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf-kube-api-access-fxg64\") pod \"cilium-qfjgw\" (UID: \"b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf\") " pod="kube-system/cilium-qfjgw" Aug 5 22:02:42.274997 kubelet[2668]: I0805 22:02:42.274679 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf-cilium-cgroup\") pod \"cilium-qfjgw\" (UID: \"b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf\") " pod="kube-system/cilium-qfjgw" Aug 5 22:02:42.274997 kubelet[2668]: I0805 22:02:42.274720 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf-clustermesh-secrets\") pod \"cilium-qfjgw\" (UID: \"b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf\") " pod="kube-system/cilium-qfjgw" Aug 5 22:02:42.274997 kubelet[2668]: I0805 22:02:42.274749 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf-host-proc-sys-kernel\") pod \"cilium-qfjgw\" (UID: \"b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf\") " pod="kube-system/cilium-qfjgw" Aug 5 22:02:42.274997 kubelet[2668]: I0805 22:02:42.274796 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf-cni-path\") pod \"cilium-qfjgw\" (UID: \"b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf\") " pod="kube-system/cilium-qfjgw" Aug 5 22:02:42.274997 kubelet[2668]: I0805 22:02:42.274820 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf-cilium-ipsec-secrets\") pod \"cilium-qfjgw\" (UID: \"b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf\") " pod="kube-system/cilium-qfjgw" Aug 5 22:02:42.274997 kubelet[2668]: I0805 22:02:42.274876 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf-hubble-tls\") pod \"cilium-qfjgw\" (UID: \"b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf\") " pod="kube-system/cilium-qfjgw" Aug 5 22:02:42.275128 kubelet[2668]: I0805 22:02:42.274898 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf-cilium-config-path\") pod \"cilium-qfjgw\" (UID: \"b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf\") " pod="kube-system/cilium-qfjgw" Aug 5 22:02:42.287349 sshd[4484]: pam_unix(sshd:session): session closed for user core Aug 5 22:02:42.301120 systemd[1]: Started sshd@24-10.0.0.149:22-10.0.0.1:55786.service - OpenSSH per-connection server daemon (10.0.0.1:55786). Aug 5 22:02:42.301524 systemd[1]: sshd@23-10.0.0.149:22-10.0.0.1:55778.service: Deactivated successfully. Aug 5 22:02:42.304720 systemd[1]: session-24.scope: Deactivated successfully. Aug 5 22:02:42.306410 systemd-logind[1535]: Session 24 logged out. Waiting for processes to exit. Aug 5 22:02:42.307307 systemd-logind[1535]: Removed session 24. Aug 5 22:02:42.330093 sshd[4493]: Accepted publickey for core from 10.0.0.1 port 55786 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 22:02:42.331500 sshd[4493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:02:42.335340 systemd-logind[1535]: New session 25 of user core. Aug 5 22:02:42.343129 systemd[1]: Started session-25.scope - Session 25 of User core. 
Aug 5 22:02:42.474703 kubelet[2668]: E0805 22:02:42.474670 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:02:42.475349 containerd[1552]: time="2024-08-05T22:02:42.475258984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qfjgw,Uid:b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf,Namespace:kube-system,Attempt:0,}" Aug 5 22:02:42.495483 containerd[1552]: time="2024-08-05T22:02:42.495252409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:02:42.495483 containerd[1552]: time="2024-08-05T22:02:42.495315648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:02:42.495483 containerd[1552]: time="2024-08-05T22:02:42.495343448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:02:42.495483 containerd[1552]: time="2024-08-05T22:02:42.495360088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:02:42.529912 containerd[1552]: time="2024-08-05T22:02:42.529872637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qfjgw,Uid:b81dcdd3-c2b0-4b7d-b36d-302dca0f04cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d539ceb6c8a2b891870e9a761caba631a919227b69193eb28ac3066fd7c6622\"" Aug 5 22:02:42.530717 kubelet[2668]: E0805 22:02:42.530696 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:02:42.536511 containerd[1552]: time="2024-08-05T22:02:42.536470526Z" level=info msg="CreateContainer within sandbox \"3d539ceb6c8a2b891870e9a761caba631a919227b69193eb28ac3066fd7c6622\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 5 22:02:42.546264 containerd[1552]: time="2024-08-05T22:02:42.546143622Z" level=info msg="CreateContainer within sandbox \"3d539ceb6c8a2b891870e9a761caba631a919227b69193eb28ac3066fd7c6622\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ebec9bd13dd4f0eabae60dcc75903867d677fb019a87aef1bbf554ad64801820\"" Aug 5 22:02:42.547727 containerd[1552]: time="2024-08-05T22:02:42.546927294Z" level=info msg="StartContainer for \"ebec9bd13dd4f0eabae60dcc75903867d677fb019a87aef1bbf554ad64801820\"" Aug 5 22:02:42.600617 containerd[1552]: time="2024-08-05T22:02:42.600546958Z" level=info msg="StartContainer for \"ebec9bd13dd4f0eabae60dcc75903867d677fb019a87aef1bbf554ad64801820\" returns successfully" Aug 5 22:02:42.641889 containerd[1552]: time="2024-08-05T22:02:42.641766355Z" level=info msg="shim disconnected" id=ebec9bd13dd4f0eabae60dcc75903867d677fb019a87aef1bbf554ad64801820 namespace=k8s.io Aug 5 22:02:42.641889 containerd[1552]: time="2024-08-05T22:02:42.641816675Z" level=warning msg="cleaning up after shim disconnected" id=ebec9bd13dd4f0eabae60dcc75903867d677fb019a87aef1bbf554ad64801820 namespace=k8s.io Aug 5 22:02:42.641889 containerd[1552]: time="2024-08-05T22:02:42.641825675Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:02:42.642558 kubelet[2668]: I0805 22:02:42.642366 2668 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="51f38ed6-9515-4a6b-94c2-ffd99f45174c" 
path="/var/lib/kubelet/pods/51f38ed6-9515-4a6b-94c2-ffd99f45174c/volumes" Aug 5 22:02:42.642808 kubelet[2668]: I0805 22:02:42.642791 2668 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f653ea32-a506-448a-9bb9-9e5c58285393" path="/var/lib/kubelet/pods/f653ea32-a506-448a-9bb9-9e5c58285393/volumes" Aug 5 22:02:42.719879 kubelet[2668]: E0805 22:02:42.719801 2668 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 5 22:02:42.850638 kubelet[2668]: E0805 22:02:42.850485 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:02:42.854101 containerd[1552]: time="2024-08-05T22:02:42.854044235Z" level=info msg="CreateContainer within sandbox \"3d539ceb6c8a2b891870e9a761caba631a919227b69193eb28ac3066fd7c6622\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 5 22:02:42.867736 containerd[1552]: time="2024-08-05T22:02:42.866919857Z" level=info msg="CreateContainer within sandbox \"3d539ceb6c8a2b891870e9a761caba631a919227b69193eb28ac3066fd7c6622\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6be2811792f1bf532fed20977c3fd9f5a98fc0a3e814ec7684b7ace156cf16e5\"" Aug 5 22:02:42.868913 containerd[1552]: time="2024-08-05T22:02:42.868034485Z" level=info msg="StartContainer for \"6be2811792f1bf532fed20977c3fd9f5a98fc0a3e814ec7684b7ace156cf16e5\"" Aug 5 22:02:42.913588 containerd[1552]: time="2024-08-05T22:02:42.913408278Z" level=info msg="StartContainer for \"6be2811792f1bf532fed20977c3fd9f5a98fc0a3e814ec7684b7ace156cf16e5\" returns successfully" Aug 5 22:02:42.942314 containerd[1552]: time="2024-08-05T22:02:42.942245608Z" level=info msg="shim disconnected" id=6be2811792f1bf532fed20977c3fd9f5a98fc0a3e814ec7684b7ace156cf16e5 namespace=k8s.io Aug 5 22:02:42.942314 containerd[1552]: time="2024-08-05T22:02:42.942303168Z" level=warning msg="cleaning up after shim disconnected" id=6be2811792f1bf532fed20977c3fd9f5a98fc0a3e814ec7684b7ace156cf16e5 namespace=k8s.io Aug 5 22:02:42.942314 containerd[1552]: time="2024-08-05T22:02:42.942323127Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:02:43.853711 kubelet[2668]: E0805 22:02:43.853301 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:02:43.864905 containerd[1552]: time="2024-08-05T22:02:43.862895867Z" level=info msg="CreateContainer within sandbox \"3d539ceb6c8a2b891870e9a761caba631a919227b69193eb28ac3066fd7c6622\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 5 22:02:43.877569 containerd[1552]: time="2024-08-05T22:02:43.877461013Z" level=info msg="CreateContainer within sandbox \"3d539ceb6c8a2b891870e9a761caba631a919227b69193eb28ac3066fd7c6622\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"22ca0141707d1b5c04db1fbf2ac29ea9971a6cfbc6a499699d8b314908fcdef0\"" Aug 5 22:02:43.879916 containerd[1552]: time="2024-08-05T22:02:43.878312165Z" level=info msg="StartContainer for \"22ca0141707d1b5c04db1fbf2ac29ea9971a6cfbc6a499699d8b314908fcdef0\"" Aug 5 22:02:43.938948 containerd[1552]: time="2024-08-05T22:02:43.938908089Z" level=info msg="StartContainer for 
\"22ca0141707d1b5c04db1fbf2ac29ea9971a6cfbc6a499699d8b314908fcdef0\" returns successfully" Aug 5 22:02:43.962308 containerd[1552]: time="2024-08-05T22:02:43.962243555Z" level=info msg="shim disconnected" id=22ca0141707d1b5c04db1fbf2ac29ea9971a6cfbc6a499699d8b314908fcdef0 namespace=k8s.io Aug 5 22:02:43.962528 containerd[1552]: time="2024-08-05T22:02:43.962512312Z" level=warning msg="cleaning up after shim disconnected" id=22ca0141707d1b5c04db1fbf2ac29ea9971a6cfbc6a499699d8b314908fcdef0 namespace=k8s.io Aug 5 22:02:43.962587 containerd[1552]: time="2024-08-05T22:02:43.962574552Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:02:44.381326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22ca0141707d1b5c04db1fbf2ac29ea9971a6cfbc6a499699d8b314908fcdef0-rootfs.mount: Deactivated successfully. Aug 5 22:02:44.640026 kubelet[2668]: E0805 22:02:44.639561 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:02:44.738113 kubelet[2668]: I0805 22:02:44.738085 2668 setters.go:552] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-08-05T22:02:44Z","lastTransitionTime":"2024-08-05T22:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 5 22:02:44.857194 kubelet[2668]: E0805 22:02:44.857164 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:02:44.861184 containerd[1552]: time="2024-08-05T22:02:44.861141766Z" level=info msg="CreateContainer within sandbox \"3d539ceb6c8a2b891870e9a761caba631a919227b69193eb28ac3066fd7c6622\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 5 22:02:44.875358 containerd[1552]: time="2024-08-05T22:02:44.875272858Z" level=info msg="CreateContainer within sandbox \"3d539ceb6c8a2b891870e9a761caba631a919227b69193eb28ac3066fd7c6622\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f475cf9688b5c69db3c957b10fc747b8894abde5e84a9ac784901da3f625c5a7\"" Aug 5 22:02:44.882970 containerd[1552]: time="2024-08-05T22:02:44.880648497Z" level=info msg="StartContainer for \"f475cf9688b5c69db3c957b10fc747b8894abde5e84a9ac784901da3f625c5a7\"" Aug 5 22:02:44.925892 containerd[1552]: time="2024-08-05T22:02:44.925611232Z" level=info msg="StartContainer for \"f475cf9688b5c69db3c957b10fc747b8894abde5e84a9ac784901da3f625c5a7\" returns successfully" Aug 5 22:02:44.944678 containerd[1552]: time="2024-08-05T22:02:44.944474208Z" level=info msg="shim disconnected" id=f475cf9688b5c69db3c957b10fc747b8894abde5e84a9ac784901da3f625c5a7 namespace=k8s.io Aug 5 22:02:44.944678 containerd[1552]: time="2024-08-05T22:02:44.944525567Z" level=warning msg="cleaning up after shim disconnected" id=f475cf9688b5c69db3c957b10fc747b8894abde5e84a9ac784901da3f625c5a7 namespace=k8s.io Aug 5 22:02:44.944678 containerd[1552]: time="2024-08-05T22:02:44.944533807Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:02:44.955716 containerd[1552]: time="2024-08-05T22:02:44.954649010Z" level=warning msg="cleanup warnings time=\"2024-08-05T22:02:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" 
runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 5 22:02:45.381354 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f475cf9688b5c69db3c957b10fc747b8894abde5e84a9ac784901da3f625c5a7-rootfs.mount: Deactivated successfully. Aug 5 22:02:45.862041 kubelet[2668]: E0805 22:02:45.861770 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:02:45.866829 containerd[1552]: time="2024-08-05T22:02:45.866784408Z" level=info msg="CreateContainer within sandbox \"3d539ceb6c8a2b891870e9a761caba631a919227b69193eb28ac3066fd7c6622\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 5 22:02:45.897341 containerd[1552]: time="2024-08-05T22:02:45.897278979Z" level=info msg="CreateContainer within sandbox \"3d539ceb6c8a2b891870e9a761caba631a919227b69193eb28ac3066fd7c6622\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a65af382abe443fa014f11e9f42fdf6f266e4dfcf03edc66f35781f2fd7d64fa\"" Aug 5 22:02:45.898907 containerd[1552]: time="2024-08-05T22:02:45.898043335Z" level=info msg="StartContainer for \"a65af382abe443fa014f11e9f42fdf6f266e4dfcf03edc66f35781f2fd7d64fa\"" Aug 5 22:02:45.965409 containerd[1552]: time="2024-08-05T22:02:45.965344118Z" level=info msg="StartContainer for \"a65af382abe443fa014f11e9f42fdf6f266e4dfcf03edc66f35781f2fd7d64fa\" returns successfully" Aug 5 22:02:46.202927 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Aug 5 22:02:46.867565 kubelet[2668]: E0805 22:02:46.867518 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:02:48.476470 kubelet[2668]: E0805 22:02:48.476365 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:02:48.992728 systemd-networkd[1234]: lxc_health: Link UP Aug 5 22:02:49.002580 systemd-networkd[1234]: lxc_health: Gained carrier Aug 5 22:02:50.478442 kubelet[2668]: E0805 22:02:50.477442 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:02:50.488457 systemd-networkd[1234]: lxc_health: Gained IPv6LL Aug 5 22:02:50.495580 kubelet[2668]: I0805 22:02:50.495498 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-qfjgw" podStartSLOduration=8.495458237 podCreationTimestamp="2024-08-05 22:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:02:46.881132533 +0000 UTC m=+84.343067560" watchObservedRunningTime="2024-08-05 22:02:50.495458237 +0000 UTC m=+87.957393264" Aug 5 22:02:50.873621 kubelet[2668]: E0805 22:02:50.873516 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:02:51.875572 kubelet[2668]: E0805 22:02:51.875539 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:02:55.069236 sshd[4493]: pam_unix(sshd:session): session closed for user core Aug 
5 22:02:55.072223 systemd[1]: sshd@24-10.0.0.149:22-10.0.0.1:55786.service: Deactivated successfully. Aug 5 22:02:55.077469 systemd[1]: session-25.scope: Deactivated successfully. Aug 5 22:02:55.079042 systemd-logind[1535]: Session 25 logged out. Waiting for processes to exit. Aug 5 22:02:55.080710 systemd-logind[1535]: Removed session 25.
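Taken together, the containerd messages in this section record the new sandbox's containers being created and started in sequence: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state and finally cilium-agent, with each init step followed by its shim cleanup. As a closing illustration, the sketch below reconstructs that start order from the CreateContainer/StartContainer return messages, under the same one-entry-per-line, stdin assumptions as the earlier snippets.

```python
import re
import sys

# Patterns inferred from the containerd messages above; \" is literal in the journal.
CREATED = re.compile(
    r'CreateContainer within sandbox \\"[0-9a-f]{64}\\" for '
    r'&ContainerMetadata\{Name:([^,]+),Attempt:\d+,\} returns container id \\"([0-9a-f]{64})\\"'
)
STARTED = re.compile(r'StartContainer for \\"([0-9a-f]{64})\\" returns successfully')

def start_order(lines):
    """List container names in the order their StartContainer call returned."""
    names = {}   # container id -> ContainerMetadata name
    order = []
    for line in lines:
        m = CREATED.search(line)
        if m:
            names[m.group(2)] = m.group(1)
            continue
        m = STARTED.search(line)
        if m:
            order.append(names.get(m.group(1), m.group(1)[:12]))
    return order

if __name__ == "__main__":
    print(" -> ".join(start_order(sys.stdin)))
```

Run over a one-entry-per-line export of this excerpt, it should print mount-cgroup -> apply-sysctl-overwrites -> mount-bpf-fs -> clean-cilium-state -> cilium-agent, matching the order in which the log shows each container starting.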