Aug 5 21:47:22.873937 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 5 21:47:22.873958 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Aug 5 20:24:20 -00 2024
Aug 5 21:47:22.873968 kernel: KASLR enabled
Aug 5 21:47:22.873976 kernel: efi: EFI v2.7 by EDK II
Aug 5 21:47:22.873982 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb8fd018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Aug 5 21:47:22.873988 kernel: random: crng init done
Aug 5 21:47:22.873995 kernel: ACPI: Early table checksum verification disabled
Aug 5 21:47:22.874001 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Aug 5 21:47:22.874007 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 5 21:47:22.874014 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:47:22.874020 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:47:22.874026 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:47:22.874032 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:47:22.874038 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:47:22.874046 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:47:22.874053 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:47:22.874060 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:47:22.874078 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:47:22.874087 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 5 21:47:22.874093 kernel: NUMA: Failed to initialise from firmware
Aug 5 21:47:22.874100 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 5 21:47:22.874106 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Aug 5 21:47:22.874112 kernel: Zone ranges:
Aug 5 21:47:22.874118 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Aug 5 21:47:22.874124 kernel: DMA32 empty
Aug 5 21:47:22.874132 kernel: Normal empty
Aug 5 21:47:22.874139 kernel: Movable zone start for each node
Aug 5 21:47:22.874145 kernel: Early memory node ranges
Aug 5 21:47:22.874151 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Aug 5 21:47:22.874157 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Aug 5 21:47:22.874164 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Aug 5 21:47:22.874170 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Aug 5 21:47:22.874176 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Aug 5 21:47:22.874182 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Aug 5 21:47:22.874189 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Aug 5 21:47:22.874195 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 5 21:47:22.874201 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 5 21:47:22.874209 kernel: psci: probing for conduit method from ACPI.
Aug 5 21:47:22.874215 kernel: psci: PSCIv1.1 detected in firmware.
Aug 5 21:47:22.874221 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 5 21:47:22.874230 kernel: psci: Trusted OS migration not required
Aug 5 21:47:22.874237 kernel: psci: SMC Calling Convention v1.1
Aug 5 21:47:22.874244 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 5 21:47:22.874255 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Aug 5 21:47:22.874262 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Aug 5 21:47:22.874269 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 5 21:47:22.874275 kernel: Detected PIPT I-cache on CPU0
Aug 5 21:47:22.874282 kernel: CPU features: detected: GIC system register CPU interface
Aug 5 21:47:22.874289 kernel: CPU features: detected: Hardware dirty bit management
Aug 5 21:47:22.874296 kernel: CPU features: detected: Spectre-v4
Aug 5 21:47:22.874302 kernel: CPU features: detected: Spectre-BHB
Aug 5 21:47:22.874309 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 5 21:47:22.874316 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 5 21:47:22.874324 kernel: CPU features: detected: ARM erratum 1418040
Aug 5 21:47:22.874330 kernel: alternatives: applying boot alternatives
Aug 5 21:47:22.874338 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=bb6c4f94d40caa6d83ad7b7b3f8907e11ce677871c150228b9a5377ddab3341e
Aug 5 21:47:22.874345 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 5 21:47:22.874352 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 5 21:47:22.874359 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 5 21:47:22.874365 kernel: Fallback order for Node 0: 0
Aug 5 21:47:22.874372 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Aug 5 21:47:22.874379 kernel: Policy zone: DMA
Aug 5 21:47:22.874385 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 5 21:47:22.874392 kernel: software IO TLB: area num 4.
Aug 5 21:47:22.874400 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Aug 5 21:47:22.874407 kernel: Memory: 2386852K/2572288K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 185436K reserved, 0K cma-reserved)
Aug 5 21:47:22.874414 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 5 21:47:22.874420 kernel: trace event string verifier disabled
Aug 5 21:47:22.874427 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 5 21:47:22.874434 kernel: rcu: RCU event tracing is enabled.
Aug 5 21:47:22.874441 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 5 21:47:22.874448 kernel: Trampoline variant of Tasks RCU enabled.
Aug 5 21:47:22.874454 kernel: Tracing variant of Tasks RCU enabled.
Aug 5 21:47:22.874461 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 5 21:47:22.874468 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 5 21:47:22.874475 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 5 21:47:22.874483 kernel: GICv3: 256 SPIs implemented
Aug 5 21:47:22.874489 kernel: GICv3: 0 Extended SPIs implemented
Aug 5 21:47:22.874496 kernel: Root IRQ handler: gic_handle_irq
Aug 5 21:47:22.874503 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 5 21:47:22.874509 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 5 21:47:22.874516 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 5 21:47:22.874523 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Aug 5 21:47:22.874530 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Aug 5 21:47:22.874536 kernel: GICv3: using LPI property table @0x00000000400f0000
Aug 5 21:47:22.874543 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Aug 5 21:47:22.874550 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 5 21:47:22.874558 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 21:47:22.874564 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 5 21:47:22.874571 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 5 21:47:22.874578 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 5 21:47:22.874585 kernel: arm-pv: using stolen time PV
Aug 5 21:47:22.874592 kernel: Console: colour dummy device 80x25
Aug 5 21:47:22.874599 kernel: ACPI: Core revision 20230628
Aug 5 21:47:22.874606 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 5 21:47:22.874613 kernel: pid_max: default: 32768 minimum: 301
Aug 5 21:47:22.874620 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Aug 5 21:47:22.874628 kernel: SELinux: Initializing.
Aug 5 21:47:22.874634 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 21:47:22.874642 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 21:47:22.874649 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 21:47:22.874656 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 21:47:22.874663 kernel: rcu: Hierarchical SRCU implementation.
Aug 5 21:47:22.874670 kernel: rcu: Max phase no-delay instances is 400.
Aug 5 21:47:22.874677 kernel: Platform MSI: ITS@0x8080000 domain created
Aug 5 21:47:22.874683 kernel: PCI/MSI: ITS@0x8080000 domain created
Aug 5 21:47:22.874692 kernel: Remapping and enabling EFI services.
Aug 5 21:47:22.874699 kernel: smp: Bringing up secondary CPUs ...
Aug 5 21:47:22.874706 kernel: Detected PIPT I-cache on CPU1
Aug 5 21:47:22.874713 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 5 21:47:22.874720 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Aug 5 21:47:22.874726 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 21:47:22.874733 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 5 21:47:22.874740 kernel: Detected PIPT I-cache on CPU2
Aug 5 21:47:22.874747 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 5 21:47:22.874754 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Aug 5 21:47:22.874762 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 21:47:22.874769 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 5 21:47:22.874781 kernel: Detected PIPT I-cache on CPU3
Aug 5 21:47:22.874789 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 5 21:47:22.874796 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Aug 5 21:47:22.874807 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 21:47:22.874814 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 5 21:47:22.874821 kernel: smp: Brought up 1 node, 4 CPUs
Aug 5 21:47:22.874829 kernel: SMP: Total of 4 processors activated.
Aug 5 21:47:22.874837 kernel: CPU features: detected: 32-bit EL0 Support
Aug 5 21:47:22.874845 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 5 21:47:22.874852 kernel: CPU features: detected: Common not Private translations
Aug 5 21:47:22.874860 kernel: CPU features: detected: CRC32 instructions
Aug 5 21:47:22.874873 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 5 21:47:22.874880 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 5 21:47:22.874887 kernel: CPU features: detected: LSE atomic instructions
Aug 5 21:47:22.874895 kernel: CPU features: detected: Privileged Access Never
Aug 5 21:47:22.874904 kernel: CPU features: detected: RAS Extension Support
Aug 5 21:47:22.874911 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 5 21:47:22.874919 kernel: CPU: All CPU(s) started at EL1
Aug 5 21:47:22.874926 kernel: alternatives: applying system-wide alternatives
Aug 5 21:47:22.874933 kernel: devtmpfs: initialized
Aug 5 21:47:22.874941 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 5 21:47:22.874948 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 5 21:47:22.874955 kernel: pinctrl core: initialized pinctrl subsystem
Aug 5 21:47:22.874963 kernel: SMBIOS 3.0.0 present.
Aug 5 21:47:22.874971 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Aug 5 21:47:22.874979 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 5 21:47:22.874986 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 5 21:47:22.874993 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 5 21:47:22.875000 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 5 21:47:22.875008 kernel: audit: initializing netlink subsys (disabled)
Aug 5 21:47:22.875015 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Aug 5 21:47:22.875022 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 5 21:47:22.875029 kernel: cpuidle: using governor menu
Aug 5 21:47:22.875038 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 5 21:47:22.875045 kernel: ASID allocator initialised with 32768 entries
Aug 5 21:47:22.875052 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 5 21:47:22.875059 kernel: Serial: AMBA PL011 UART driver
Aug 5 21:47:22.875072 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 5 21:47:22.875081 kernel: Modules: 0 pages in range for non-PLT usage
Aug 5 21:47:22.875088 kernel: Modules: 509120 pages in range for PLT usage
Aug 5 21:47:22.875095 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 5 21:47:22.875103 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 5 21:47:22.875112 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 5 21:47:22.875119 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 5 21:47:22.875126 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 5 21:47:22.875133 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 5 21:47:22.875140 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 5 21:47:22.875148 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 5 21:47:22.875155 kernel: ACPI: Added _OSI(Module Device)
Aug 5 21:47:22.875162 kernel: ACPI: Added _OSI(Processor Device)
Aug 5 21:47:22.875169 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Aug 5 21:47:22.875178 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 5 21:47:22.875185 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 5 21:47:22.875192 kernel: ACPI: Interpreter enabled
Aug 5 21:47:22.875199 kernel: ACPI: Using GIC for interrupt routing
Aug 5 21:47:22.875206 kernel: ACPI: MCFG table detected, 1 entries
Aug 5 21:47:22.875213 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 5 21:47:22.875221 kernel: printk: console [ttyAMA0] enabled
Aug 5 21:47:22.875228 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 5 21:47:22.875358 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 5 21:47:22.875434 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 5 21:47:22.875499 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 5 21:47:22.875560 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 5 21:47:22.875622 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 5 21:47:22.875631 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 5 21:47:22.875639 kernel: PCI host bridge to bus 0000:00
Aug 5 21:47:22.875706 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 5 21:47:22.875783 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 5 21:47:22.875843 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 5 21:47:22.875911 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 5 21:47:22.875992 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Aug 5 21:47:22.876106 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Aug 5 21:47:22.876180 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Aug 5 21:47:22.876249 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Aug 5 21:47:22.876318 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 5 21:47:22.876384 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 5 21:47:22.876448 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Aug 5 21:47:22.876511 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Aug 5 21:47:22.876570 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 5 21:47:22.876628 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 5 21:47:22.876687 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 5 21:47:22.876697 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 5 21:47:22.876707 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 5 21:47:22.876714 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 5 21:47:22.876722 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 5 21:47:22.876730 kernel: iommu: Default domain type: Translated
Aug 5 21:47:22.876737 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 5 21:47:22.876744 kernel: efivars: Registered efivars operations
Aug 5 21:47:22.876752 kernel: vgaarb: loaded
Aug 5 21:47:22.876761 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 5 21:47:22.876769 kernel: VFS: Disk quotas dquot_6.6.0
Aug 5 21:47:22.876777 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 5 21:47:22.876784 kernel: pnp: PnP ACPI init
Aug 5 21:47:22.876873 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 5 21:47:22.876888 kernel: pnp: PnP ACPI: found 1 devices
Aug 5 21:47:22.876896 kernel: NET: Registered PF_INET protocol family
Aug 5 21:47:22.876905 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 5 21:47:22.876919 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 5 21:47:22.876926 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 5 21:47:22.876934 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 5 21:47:22.876942 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 5 21:47:22.876950 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 5 21:47:22.876957 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 21:47:22.876965 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 21:47:22.876974 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 5 21:47:22.876982 kernel: PCI: CLS 0 bytes, default 64
Aug 5 21:47:22.876991 kernel: kvm [1]: HYP mode not available
Aug 5 21:47:22.876999 kernel: Initialise system trusted keyrings
Aug 5 21:47:22.877021 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 5 21:47:22.877032 kernel: Key type asymmetric registered
Aug 5 21:47:22.877044 kernel: Asymmetric key parser 'x509' registered
Aug 5 21:47:22.877051 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 5 21:47:22.877058 kernel: io scheduler mq-deadline registered
Aug 5 21:47:22.877065 kernel: io scheduler kyber registered
Aug 5 21:47:22.877088 kernel: io scheduler bfq registered
Aug 5 21:47:22.877099 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 5 21:47:22.877106 kernel: ACPI: button: Power Button [PWRB]
Aug 5 21:47:22.877114 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 5 21:47:22.877186 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 5 21:47:22.877197 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 5 21:47:22.877204 kernel: thunder_xcv, ver 1.0
Aug 5 21:47:22.877211 kernel: thunder_bgx, ver 1.0
Aug 5 21:47:22.877219 kernel: nicpf, ver 1.0
Aug 5 21:47:22.877226 kernel: nicvf, ver 1.0
Aug 5 21:47:22.877314 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 5 21:47:22.877378 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-08-05T21:47:22 UTC (1722894442)
Aug 5 21:47:22.877388 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 5 21:47:22.877396 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Aug 5 21:47:22.877403 kernel: watchdog: Delayed init of the lockup detector failed: -19
Aug 5 21:47:22.877411 kernel: watchdog: Hard watchdog permanently disabled
Aug 5 21:47:22.877418 kernel: NET: Registered PF_INET6 protocol family
Aug 5 21:47:22.877426 kernel: Segment Routing with IPv6
Aug 5 21:47:22.877436 kernel: In-situ OAM (IOAM) with IPv6
Aug 5 21:47:22.877443 kernel: NET: Registered PF_PACKET protocol family
Aug 5 21:47:22.877451 kernel: Key type dns_resolver registered
Aug 5 21:47:22.877459 kernel: registered taskstats version 1
Aug 5 21:47:22.877466 kernel: Loading compiled-in X.509 certificates
Aug 5 21:47:22.877474 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: 7b6de7a842f23ac7c1bb6bedfb9546933daaea09'
Aug 5 21:47:22.877481 kernel: Key type .fscrypt registered
Aug 5 21:47:22.877488 kernel: Key type fscrypt-provisioning registered
Aug 5 21:47:22.877496 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 5 21:47:22.877505 kernel: ima: Allocated hash algorithm: sha1
Aug 5 21:47:22.877512 kernel: ima: No architecture policies found
Aug 5 21:47:22.877520 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 5 21:47:22.877527 kernel: clk: Disabling unused clocks
Aug 5 21:47:22.877535 kernel: Freeing unused kernel memory: 39040K
Aug 5 21:47:22.877542 kernel: Run /init as init process
Aug 5 21:47:22.877549 kernel: with arguments:
Aug 5 21:47:22.877557 kernel: /init
Aug 5 21:47:22.877564 kernel: with environment:
Aug 5 21:47:22.877573 kernel: HOME=/
Aug 5 21:47:22.877580 kernel: TERM=linux
Aug 5 21:47:22.877587 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 5 21:47:22.877597 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 21:47:22.877606 systemd[1]: Detected virtualization kvm.
Aug 5 21:47:22.877614 systemd[1]: Detected architecture arm64.
Aug 5 21:47:22.877622 systemd[1]: Running in initrd.
Aug 5 21:47:22.877629 systemd[1]: No hostname configured, using default hostname.
Aug 5 21:47:22.877639 systemd[1]: Hostname set to .
Aug 5 21:47:22.877647 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 21:47:22.877655 systemd[1]: Queued start job for default target initrd.target.
Aug 5 21:47:22.877663 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 21:47:22.877671 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 21:47:22.877679 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 5 21:47:22.877687 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 21:47:22.877697 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 5 21:47:22.877705 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 5 21:47:22.877714 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 5 21:47:22.877722 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 5 21:47:22.877730 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 21:47:22.877738 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 21:47:22.877746 systemd[1]: Reached target paths.target - Path Units.
Aug 5 21:47:22.877755 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 21:47:22.877763 systemd[1]: Reached target swap.target - Swaps.
Aug 5 21:47:22.877771 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 21:47:22.877779 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 21:47:22.877787 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 21:47:22.877795 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 5 21:47:22.877803 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 5 21:47:22.877811 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 21:47:22.877819 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 21:47:22.877828 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 21:47:22.877836 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 21:47:22.877844 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 5 21:47:22.877852 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 21:47:22.877860 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 5 21:47:22.877875 systemd[1]: Starting systemd-fsck-usr.service...
Aug 5 21:47:22.877883 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 21:47:22.877890 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 21:47:22.877898 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 21:47:22.877908 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 5 21:47:22.877916 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 21:47:22.877924 systemd[1]: Finished systemd-fsck-usr.service.
Aug 5 21:47:22.877933 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 21:47:22.877942 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:47:22.877950 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 21:47:22.877975 systemd-journald[237]: Collecting audit messages is disabled.
Aug 5 21:47:22.877994 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 21:47:22.878004 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 21:47:22.878012 systemd-journald[237]: Journal started
Aug 5 21:47:22.878031 systemd-journald[237]: Runtime Journal (/run/log/journal/0c27f5eece7e4c1cb6c8abc37d3f61f5) is 5.9M, max 47.3M, 41.4M free.
Aug 5 21:47:22.862844 systemd-modules-load[238]: Inserted module 'overlay'
Aug 5 21:47:22.880899 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 21:47:22.880925 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 5 21:47:22.882818 systemd-modules-load[238]: Inserted module 'br_netfilter'
Aug 5 21:47:22.884136 kernel: Bridge firewalling registered
Aug 5 21:47:22.884138 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 21:47:22.886422 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 21:47:22.889362 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 21:47:22.891556 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 21:47:22.892843 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 21:47:22.895442 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 5 21:47:22.900477 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 21:47:22.902661 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 21:47:22.905005 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 21:47:22.908371 dracut-cmdline[270]: dracut-dracut-053
Aug 5 21:47:22.910980 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=bb6c4f94d40caa6d83ad7b7b3f8907e11ce677871c150228b9a5377ddab3341e
Aug 5 21:47:22.935996 systemd-resolved[279]: Positive Trust Anchors:
Aug 5 21:47:22.936014 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 21:47:22.936044 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 21:47:22.940578 systemd-resolved[279]: Defaulting to hostname 'linux'.
Aug 5 21:47:22.941588 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 21:47:22.944351 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 21:47:22.983099 kernel: SCSI subsystem initialized
Aug 5 21:47:22.988088 kernel: Loading iSCSI transport class v2.0-870.
Aug 5 21:47:22.995093 kernel: iscsi: registered transport (tcp)
Aug 5 21:47:23.008118 kernel: iscsi: registered transport (qla4xxx)
Aug 5 21:47:23.008160 kernel: QLogic iSCSI HBA Driver
Aug 5 21:47:23.052146 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 5 21:47:23.060232 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 5 21:47:23.076693 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 5 21:47:23.077530 kernel: device-mapper: uevent: version 1.0.3
Aug 5 21:47:23.077541 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 5 21:47:23.127090 kernel: raid6: neonx8 gen() 15684 MB/s
Aug 5 21:47:23.144096 kernel: raid6: neonx4 gen() 15612 MB/s
Aug 5 21:47:23.161097 kernel: raid6: neonx2 gen() 13211 MB/s
Aug 5 21:47:23.178097 kernel: raid6: neonx1 gen() 10429 MB/s
Aug 5 21:47:23.195091 kernel: raid6: int64x8 gen() 6936 MB/s
Aug 5 21:47:23.212093 kernel: raid6: int64x4 gen() 7322 MB/s
Aug 5 21:47:23.229091 kernel: raid6: int64x2 gen() 6127 MB/s
Aug 5 21:47:23.246091 kernel: raid6: int64x1 gen() 5058 MB/s
Aug 5 21:47:23.246122 kernel: raid6: using algorithm neonx8 gen() 15684 MB/s
Aug 5 21:47:23.263098 kernel: raid6: .... xor() 11904 MB/s, rmw enabled
Aug 5 21:47:23.263124 kernel: raid6: using neon recovery algorithm
Aug 5 21:47:23.268220 kernel: xor: measuring software checksum speed
Aug 5 21:47:23.268237 kernel: 8regs : 19873 MB/sec
Aug 5 21:47:23.269084 kernel: 32regs : 19720 MB/sec
Aug 5 21:47:23.270233 kernel: arm64_neon : 27234 MB/sec
Aug 5 21:47:23.270251 kernel: xor: using function: arm64_neon (27234 MB/sec)
Aug 5 21:47:23.321094 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 5 21:47:23.331859 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 21:47:23.342226 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 21:47:23.353605 systemd-udevd[457]: Using default interface naming scheme 'v255'.
Aug 5 21:47:23.356772 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 21:47:23.359866 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 5 21:47:23.373152 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Aug 5 21:47:23.398032 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 21:47:23.408209 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 21:47:23.447188 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 21:47:23.455202 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 5 21:47:23.466730 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 5 21:47:23.468372 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 21:47:23.471458 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 21:47:23.472829 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 21:47:23.481596 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 5 21:47:23.487101 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Aug 5 21:47:23.496417 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 5 21:47:23.496512 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 5 21:47:23.496524 kernel: GPT:9289727 != 19775487
Aug 5 21:47:23.496545 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 5 21:47:23.496555 kernel: GPT:9289727 != 19775487
Aug 5 21:47:23.496564 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 5 21:47:23.496576 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 21:47:23.490306 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 21:47:23.494371 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 21:47:23.494470 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 21:47:23.497599 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 21:47:23.498725 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 21:47:23.498847 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:47:23.500556 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 21:47:23.516133 kernel: BTRFS: device fsid 8a9ab799-ab52-4671-9234-72d7c6e57b99 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (501)
Aug 5 21:47:23.516172 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (518)
Aug 5 21:47:23.515317 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 21:47:23.527735 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:47:23.536246 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 5 21:47:23.540532 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 5 21:47:23.544841 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 5 21:47:23.548432 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 5 21:47:23.549379 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 5 21:47:23.558219 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 5 21:47:23.559844 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 21:47:23.565897 disk-uuid[547]: Primary Header is updated.
Aug 5 21:47:23.565897 disk-uuid[547]: Secondary Entries is updated.
Aug 5 21:47:23.565897 disk-uuid[547]: Secondary Header is updated.
Aug 5 21:47:23.570100 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 21:47:23.579214 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 21:47:23.585086 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 21:47:23.588090 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 21:47:24.590086 disk-uuid[551]: The operation has completed successfully.
Aug 5 21:47:24.591008 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 21:47:24.608531 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 5 21:47:24.608631 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 5 21:47:24.625244 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 5 21:47:24.628080 sh[573]: Success
Aug 5 21:47:24.645097 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Aug 5 21:47:24.672490 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 5 21:47:24.680331 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 5 21:47:24.681883 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 5 21:47:24.693092 kernel: BTRFS info (device dm-0): first mount of filesystem 8a9ab799-ab52-4671-9234-72d7c6e57b99
Aug 5 21:47:24.693133 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 5 21:47:24.693144 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 5 21:47:24.693153 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 5 21:47:24.694082 kernel: BTRFS info (device dm-0): using free space tree
Aug 5 21:47:24.697347 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 5 21:47:24.698492 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 5 21:47:24.699195 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 5 21:47:24.701736 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 5 21:47:24.711670 kernel: BTRFS info (device vda6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:47:24.711710 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 5 21:47:24.712239 kernel: BTRFS info (device vda6): using free space tree
Aug 5 21:47:24.714127 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 21:47:24.722838 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 5 21:47:24.724183 kernel: BTRFS info (device vda6): last unmount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:47:24.730836 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 5 21:47:24.737240 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 5 21:47:24.794107 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 21:47:24.802230 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 21:47:24.830362 systemd-networkd[761]: lo: Link UP
Aug 5 21:47:24.830374 systemd-networkd[761]: lo: Gained carrier
Aug 5 21:47:24.831062 systemd-networkd[761]: Enumeration completed
Aug 5 21:47:24.831185 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 21:47:24.831500 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 21:47:24.831503 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 21:47:24.832272 systemd-networkd[761]: eth0: Link UP
Aug 5 21:47:24.832276 systemd-networkd[761]: eth0: Gained carrier
Aug 5 21:47:24.832283 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 21:47:24.833294 systemd[1]: Reached target network.target - Network.
Aug 5 21:47:24.840923 ignition[673]: Ignition 2.19.0
Aug 5 21:47:24.840929 ignition[673]: Stage: fetch-offline
Aug 5 21:47:24.840965 ignition[673]: no configs at "/usr/lib/ignition/base.d"
Aug 5 21:47:24.840974 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:47:24.841049 ignition[673]: parsed url from cmdline: ""
Aug 5 21:47:24.841052 ignition[673]: no config URL provided
Aug 5 21:47:24.841057 ignition[673]: reading system config file "/usr/lib/ignition/user.ign"
Aug 5 21:47:24.841064 ignition[673]: no config at "/usr/lib/ignition/user.ign"
Aug 5 21:47:24.841104 ignition[673]: op(1): [started] loading QEMU firmware config module
Aug 5 21:47:24.841108 ignition[673]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 5 21:47:24.849141 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 5 21:47:24.848275 ignition[673]: op(1): [finished] loading QEMU firmware config module
Aug 5 21:47:24.885313 ignition[673]: parsing config with SHA512: 68d1d43428f78f9e8d43b99c83e03b780f5eba028cf59e78f09927d35ad3c59fee90408564cb124cb5fcd8bb14fe2adad87aa37de5f5419c112cf91d2886fae1
Aug 5 21:47:24.889544 unknown[673]: fetched base config from "system"
Aug 5 21:47:24.889554 unknown[673]: fetched user config from "qemu"
Aug 5 21:47:24.890014 ignition[673]: fetch-offline: fetch-offline passed
Aug 5 21:47:24.890090 ignition[673]: Ignition finished successfully
Aug 5 21:47:24.892726 systemd-resolved[279]: Detected conflict on linux IN A 10.0.0.80
Aug 5 21:47:24.892741 systemd-resolved[279]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
Aug 5 21:47:24.892933 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 21:47:24.894901 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 5 21:47:24.910317 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 5 21:47:24.920515 ignition[773]: Ignition 2.19.0
Aug 5 21:47:24.920524 ignition[773]: Stage: kargs
Aug 5 21:47:24.920669 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Aug 5 21:47:24.920677 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:47:24.921544 ignition[773]: kargs: kargs passed
Aug 5 21:47:24.923726 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 5 21:47:24.921587 ignition[773]: Ignition finished successfully
Aug 5 21:47:24.938252 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 5 21:47:24.948358 ignition[782]: Ignition 2.19.0
Aug 5 21:47:24.948366 ignition[782]: Stage: disks
Aug 5 21:47:24.948521 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Aug 5 21:47:24.948530 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:47:24.949406 ignition[782]: disks: disks passed
Aug 5 21:47:24.951239 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 5 21:47:24.949450 ignition[782]: Ignition finished successfully
Aug 5 21:47:24.952388 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 5 21:47:24.953676 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 5 21:47:24.955470 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 21:47:24.956955 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 21:47:24.958685 systemd[1]: Reached target basic.target - Basic System.
Aug 5 21:47:24.967244 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 5 21:47:24.976649 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 5 21:47:24.980315 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 5 21:47:24.982291 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 5 21:47:25.024923 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 5 21:47:25.026363 kernel: EXT4-fs (vda9): mounted filesystem ec701988-3dff-4e7d-a2a2-79d78965de5d r/w with ordered data mode. Quota mode: none.
Aug 5 21:47:25.026094 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 5 21:47:25.036156 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 21:47:25.037793 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 5 21:47:25.038980 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 5 21:47:25.039060 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 5 21:47:25.039137 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 21:47:25.046974 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (802)
Aug 5 21:47:25.045050 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 5 21:47:25.050755 kernel: BTRFS info (device vda6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:47:25.050774 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 5 21:47:25.050784 kernel: BTRFS info (device vda6): using free space tree
Aug 5 21:47:25.046719 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 5 21:47:25.053572 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 21:47:25.053745 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 21:47:25.095127 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Aug 5 21:47:25.098093 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Aug 5 21:47:25.101803 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Aug 5 21:47:25.104725 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 5 21:47:25.169950 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 5 21:47:25.185214 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 5 21:47:25.187454 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 5 21:47:25.192085 kernel: BTRFS info (device vda6): last unmount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:47:25.204724 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 5 21:47:25.210689 ignition[915]: INFO : Ignition 2.19.0
Aug 5 21:47:25.210689 ignition[915]: INFO : Stage: mount
Aug 5 21:47:25.210689 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 21:47:25.210689 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:47:25.210689 ignition[915]: INFO : mount: mount passed
Aug 5 21:47:25.210689 ignition[915]: INFO : Ignition finished successfully
Aug 5 21:47:25.212047 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 5 21:47:25.224183 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 5 21:47:25.691305 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 5 21:47:25.702248 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 21:47:25.708148 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (928)
Aug 5 21:47:25.708177 kernel: BTRFS info (device vda6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:47:25.708192 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 5 21:47:25.709229 kernel: BTRFS info (device vda6): using free space tree
Aug 5 21:47:25.711082 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 21:47:25.712227 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 21:47:25.727374 ignition[945]: INFO : Ignition 2.19.0
Aug 5 21:47:25.727374 ignition[945]: INFO : Stage: files
Aug 5 21:47:25.728871 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 21:47:25.728871 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:47:25.728871 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
Aug 5 21:47:25.731543 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 5 21:47:25.731543 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 5 21:47:25.731543 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 5 21:47:25.731543 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 5 21:47:25.731543 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 5 21:47:25.731422 unknown[945]: wrote ssh authorized keys file for user: core
Aug 5 21:47:25.737435 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 5 21:47:25.737435 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Aug 5 21:47:25.972406 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 5 21:47:26.014579 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 5 21:47:26.016426 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 5 21:47:26.016426 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Aug 5 21:47:26.143406 systemd-networkd[761]: eth0: Gained IPv6LL
Aug 5 21:47:26.342482 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 5 21:47:26.491170 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 5 21:47:26.493008 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 5 21:47:26.493008 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 5 21:47:26.493008 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 21:47:26.493008 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 21:47:26.493008 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 21:47:26.493008 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 21:47:26.493008 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 21:47:26.493008 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 21:47:26.493008 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 21:47:26.493008 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 21:47:26.493008 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Aug 5 21:47:26.493008 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Aug 5 21:47:26.493008 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Aug 5 21:47:26.493008 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Aug 5 21:47:26.712060 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 5 21:47:27.014293 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Aug 5 21:47:27.014293 ignition[945]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Aug 5 21:47:27.017553 ignition[945]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 21:47:27.017553 ignition[945]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 21:47:27.017553 ignition[945]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Aug 5 21:47:27.017553 ignition[945]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Aug 5 21:47:27.017553 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 5 21:47:27.017553 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 5 21:47:27.017553 ignition[945]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Aug 5 21:47:27.017553 ignition[945]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Aug 5 21:47:27.043355 ignition[945]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 5 21:47:27.047015 ignition[945]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 5 21:47:27.049575 ignition[945]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 5 21:47:27.049575 ignition[945]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Aug 5 21:47:27.049575 ignition[945]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Aug 5 21:47:27.049575 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 21:47:27.049575 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 21:47:27.049575 ignition[945]: INFO : files: files passed
Aug 5 21:47:27.049575 ignition[945]: INFO : Ignition finished successfully
Aug 5 21:47:27.050087 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 5 21:47:27.061219 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 5 21:47:27.064262 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 5 21:47:27.065832 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 5 21:47:27.065925 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 5 21:47:27.071822 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 5 21:47:27.074239 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 21:47:27.074239 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 21:47:27.077248 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 21:47:27.078147 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 21:47:27.079724 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 5 21:47:27.092252 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 5 21:47:27.112890 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 5 21:47:27.113042 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 5 21:47:27.115204 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 5 21:47:27.116755 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 5 21:47:27.118307 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 5 21:47:27.119027 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 5 21:47:27.134380 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 5 21:47:27.141205 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 5 21:47:27.150162 systemd[1]: Stopped target network.target - Network.
Aug 5 21:47:27.151018 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 5 21:47:27.152665 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 21:47:27.154485 systemd[1]: Stopped target timers.target - Timer Units.
Aug 5 21:47:27.156090 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 5 21:47:27.156204 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 5 21:47:27.158579 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 5 21:47:27.160448 systemd[1]: Stopped target basic.target - Basic System.
Aug 5 21:47:27.161951 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 5 21:47:27.163468 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 21:47:27.165295 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 5 21:47:27.167138 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 5 21:47:27.168908 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 21:47:27.170629 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 5 21:47:27.172350 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 5 21:47:27.173948 systemd[1]: Stopped target swap.target - Swaps.
Aug 5 21:47:27.175383 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 5 21:47:27.175494 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 21:47:27.177689 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 5 21:47:27.179547 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 21:47:27.181193 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 5 21:47:27.183076 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 21:47:27.184276 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 5 21:47:27.184391 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 5 21:47:27.186903 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 5 21:47:27.187018 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 21:47:27.188936 systemd[1]: Stopped target paths.target - Path Units.
Aug 5 21:47:27.190381 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 5 21:47:27.191128 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 21:47:27.192362 systemd[1]: Stopped target slices.target - Slice Units.
Aug 5 21:47:27.193797 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 5 21:47:27.195398 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 5 21:47:27.195481 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 21:47:27.197379 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 5 21:47:27.197457 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 21:47:27.198926 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 5 21:47:27.199029 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 21:47:27.200674 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 5 21:47:27.200770 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 5 21:47:27.213247 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 5 21:47:27.214729 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 5 21:47:27.215859 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 5 21:47:27.217503 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 5 21:47:27.219192 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 5 21:47:27.219317 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 21:47:27.221220 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 5 21:47:27.221315 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 21:47:27.224120 systemd-networkd[761]: eth0: DHCPv6 lease lost
Aug 5 21:47:27.226823 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 5 21:47:27.226921 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 5 21:47:27.230125 ignition[1000]: INFO : Ignition 2.19.0
Aug 5 21:47:27.230125 ignition[1000]: INFO : Stage: umount
Aug 5 21:47:27.232684 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 21:47:27.232684 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:47:27.230356 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 5 21:47:27.235963 ignition[1000]: INFO : umount: umount passed
Aug 5 21:47:27.235963 ignition[1000]: INFO : Ignition finished successfully
Aug 5 21:47:27.230523 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 5 21:47:27.235185 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 5 21:47:27.235791 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 5 21:47:27.235897 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 5 21:47:27.237374 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 5 21:47:27.237460 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 5 21:47:27.239707 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 5 21:47:27.239751 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 21:47:27.241569 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 5 21:47:27.241622 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 5 21:47:27.245416 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 5 21:47:27.245462 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 5 21:47:27.247003 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 5 21:47:27.247042 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 5 21:47:27.248988 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 5 21:47:27.249031 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 5 21:47:27.262171 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 5 21:47:27.262987 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 5 21:47:27.263047 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 21:47:27.264955 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 5 21:47:27.264999 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 5 21:47:27.266735 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 5 21:47:27.266775 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 5 21:47:27.268701 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 5 21:47:27.268742 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 21:47:27.270650 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 21:47:27.279474 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 5 21:47:27.279571 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 5 21:47:27.297803 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 5 21:47:27.297977 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 21:47:27.300250 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 5 21:47:27.300292 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 5 21:47:27.302010 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 5 21:47:27.302039 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 21:47:27.304006 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 5 21:47:27.304054 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 21:47:27.306350 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 5 21:47:27.306391 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 5 21:47:27.308598 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 21:47:27.308641 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 21:47:27.325258 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 5 21:47:27.326228 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 5 21:47:27.326281 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 21:47:27.328238 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 5 21:47:27.328281 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 21:47:27.330170 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 5 21:47:27.330213 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 21:47:27.332192 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 21:47:27.332233 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:47:27.334263 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 5 21:47:27.334350 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 5 21:47:27.335890 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 5 21:47:27.335971 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 5 21:47:27.338208 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 5 21:47:27.339162 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 5 21:47:27.339219 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 5 21:47:27.341407 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 5 21:47:27.350142 systemd[1]: Switching root.
Aug 5 21:47:27.387912 systemd-journald[237]: Journal stopped
Aug 5 21:47:28.048711 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Aug 5 21:47:28.048764 kernel: SELinux: policy capability network_peer_controls=1
Aug 5 21:47:28.048776 kernel: SELinux: policy capability open_perms=1
Aug 5 21:47:28.048786 kernel: SELinux: policy capability extended_socket_class=1
Aug 5 21:47:28.048796 kernel: SELinux: policy capability always_check_network=0
Aug 5 21:47:28.048805 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 5 21:47:28.048815 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 5 21:47:28.048828 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 5 21:47:28.048841 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 5 21:47:28.048863 kernel: audit: type=1403 audit(1722894447.539:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 5 21:47:28.048876 systemd[1]: Successfully loaded SELinux policy in 31.446ms.
Aug 5 21:47:28.048896 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.215ms.
Aug 5 21:47:28.048908 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 21:47:28.048919 systemd[1]: Detected virtualization kvm.
Aug 5 21:47:28.048929 systemd[1]: Detected architecture arm64.
Aug 5 21:47:28.048941 systemd[1]: Detected first boot.
Aug 5 21:47:28.048951 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 21:47:28.048964 zram_generator::config[1043]: No configuration found.
Aug 5 21:47:28.048976 systemd[1]: Populated /etc with preset unit settings.
Aug 5 21:47:28.048986 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 5 21:47:28.049000 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 5 21:47:28.049010 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 5 21:47:28.049022 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 5 21:47:28.049032 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 5 21:47:28.049042 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 5 21:47:28.049054 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 5 21:47:28.049065 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 5 21:47:28.049087 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 5 21:47:28.049099 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 5 21:47:28.049109 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 5 21:47:28.049124 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 21:47:28.049135 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 21:47:28.049146 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 5 21:47:28.049156 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 5 21:47:28.049174 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 5 21:47:28.049185 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 21:47:28.049197 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Aug 5 21:47:28.049208 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 21:47:28.049220 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 5 21:47:28.049230 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 5 21:47:28.049255 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 5 21:47:28.049267 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 5 21:47:28.049278 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 21:47:28.049289 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 21:47:28.049299 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 21:47:28.049310 systemd[1]: Reached target swap.target - Swaps.
Aug 5 21:47:28.049321 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 5 21:47:28.049331 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 5 21:47:28.049343 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 21:47:28.049354 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 21:47:28.049365 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 21:47:28.049377 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 5 21:47:28.049388 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 5 21:47:28.049399 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 5 21:47:28.049410 systemd[1]: Mounting media.mount - External Media Directory...
Aug 5 21:47:28.049420 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 5 21:47:28.049431 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 5 21:47:28.049442 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 5 21:47:28.049453 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 5 21:47:28.049465 systemd[1]: Reached target machines.target - Containers.
Aug 5 21:47:28.049476 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 5 21:47:28.049487 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 21:47:28.049497 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 21:47:28.049508 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 5 21:47:28.049519 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 21:47:28.049529 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 21:47:28.049539 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 21:47:28.049550 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 5 21:47:28.049562 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 21:47:28.049572 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 5 21:47:28.049583 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 5 21:47:28.049593 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 5 21:47:28.049603 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 5 21:47:28.049613 kernel: fuse: init (API version 7.39)
Aug 5 21:47:28.049623 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 5 21:47:28.049633 kernel: loop: module loaded
Aug 5 21:47:28.049645 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 21:47:28.049655 kernel: ACPI: bus type drm_connector registered
Aug 5 21:47:28.049666 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 21:47:28.049677 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 5 21:47:28.049688 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 5 21:47:28.049713 systemd-journald[1109]: Collecting audit messages is disabled.
Aug 5 21:47:28.049734 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 21:47:28.049745 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 5 21:47:28.049757 systemd-journald[1109]: Journal started
Aug 5 21:47:28.049777 systemd-journald[1109]: Runtime Journal (/run/log/journal/0c27f5eece7e4c1cb6c8abc37d3f61f5) is 5.9M, max 47.3M, 41.4M free.
Aug 5 21:47:28.049810 systemd[1]: Stopped verity-setup.service.
Aug 5 21:47:27.874905 systemd[1]: Queued start job for default target multi-user.target.
Aug 5 21:47:27.892891 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 5 21:47:27.893260 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 5 21:47:28.053684 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 21:47:28.054295 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 5 21:47:28.055196 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 5 21:47:28.056090 systemd[1]: Mounted media.mount - External Media Directory.
Aug 5 21:47:28.056890 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 5 21:47:28.057826 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 5 21:47:28.058749 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 5 21:47:28.059739 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 5 21:47:28.060862 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 21:47:28.062146 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 5 21:47:28.062292 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 5 21:47:28.063614 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 21:47:28.063759 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 21:47:28.066415 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 21:47:28.066569 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 21:47:28.067928 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 21:47:28.068058 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 21:47:28.069335 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 5 21:47:28.069480 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 5 21:47:28.070781 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 21:47:28.070917 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 21:47:28.074103 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 21:47:28.075132 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 5 21:47:28.076582 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 5 21:47:28.088804 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 5 21:47:28.102205 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 5 21:47:28.104149 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 5 21:47:28.105174 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 5 21:47:28.105213 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 21:47:28.107013 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 5 21:47:28.108951 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 5 21:47:28.110966 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 5 21:47:28.111891 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 21:47:28.113223 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 5 21:47:28.116235 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 5 21:47:28.117176 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 21:47:28.118863 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 5 21:47:28.119995 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 21:47:28.121798 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 21:47:28.124111 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 5 21:47:28.129302 systemd-journald[1109]: Time spent on flushing to /var/log/journal/0c27f5eece7e4c1cb6c8abc37d3f61f5 is 25.689ms for 862 entries.
Aug 5 21:47:28.129302 systemd-journald[1109]: System Journal (/var/log/journal/0c27f5eece7e4c1cb6c8abc37d3f61f5) is 8.0M, max 195.6M, 187.6M free.
Aug 5 21:47:28.171498 systemd-journald[1109]: Received client request to flush runtime journal.
Aug 5 21:47:28.171558 kernel: loop0: detected capacity change from 0 to 113712
Aug 5 21:47:28.171579 kernel: block loop0: the capability attribute has been deprecated.
Aug 5 21:47:28.171658 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 5 21:47:28.128241 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 21:47:28.132449 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 21:47:28.142842 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 5 21:47:28.143970 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 5 21:47:28.147107 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 5 21:47:28.148584 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 5 21:47:28.154173 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 5 21:47:28.166290 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 5 21:47:28.168122 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 5 21:47:28.172443 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 21:47:28.175943 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 5 21:47:28.176108 kernel: loop1: detected capacity change from 0 to 59688
Aug 5 21:47:28.179864 systemd-tmpfiles[1154]: ACLs are not supported, ignoring.
Aug 5 21:47:28.179884 systemd-tmpfiles[1154]: ACLs are not supported, ignoring.
Aug 5 21:47:28.183617 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 5 21:47:28.186597 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 5 21:47:28.187796 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 5 21:47:28.189257 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 21:47:28.196320 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 5 21:47:28.207096 kernel: loop2: detected capacity change from 0 to 194512
Aug 5 21:47:28.220006 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 5 21:47:28.227295 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 21:47:28.237687 kernel: loop3: detected capacity change from 0 to 113712
Aug 5 21:47:28.242494 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Aug 5 21:47:28.242511 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Aug 5 21:47:28.247358 kernel: loop4: detected capacity change from 0 to 59688
Aug 5 21:47:28.247110 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 21:47:28.253131 kernel: loop5: detected capacity change from 0 to 194512
Aug 5 21:47:28.258390 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug 5 21:47:28.258771 (sd-merge)[1181]: Merged extensions into '/usr'.
Aug 5 21:47:28.261993 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 5 21:47:28.262005 systemd[1]: Reloading...
Aug 5 21:47:28.318143 zram_generator::config[1207]: No configuration found.
Aug 5 21:47:28.396638 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 5 21:47:28.416129 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 21:47:28.453367 systemd[1]: Reloading finished in 190 ms.
Aug 5 21:47:28.484103 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 5 21:47:28.485467 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 5 21:47:28.503404 systemd[1]: Starting ensure-sysext.service...
Aug 5 21:47:28.505400 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 21:47:28.512061 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)...
Aug 5 21:47:28.512089 systemd[1]: Reloading...
Aug 5 21:47:28.530638 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 5 21:47:28.530910 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 5 21:47:28.531554 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 5 21:47:28.531765 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Aug 5 21:47:28.531809 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Aug 5 21:47:28.533897 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 21:47:28.533908 systemd-tmpfiles[1243]: Skipping /boot
Aug 5 21:47:28.541583 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 21:47:28.541601 systemd-tmpfiles[1243]: Skipping /boot
Aug 5 21:47:28.548088 zram_generator::config[1268]: No configuration found.
Aug 5 21:47:28.636400 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 21:47:28.672997 systemd[1]: Reloading finished in 160 ms.
Aug 5 21:47:28.688841 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 5 21:47:28.702494 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 21:47:28.709712 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 21:47:28.712026 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 5 21:47:28.714197 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 5 21:47:28.719393 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 21:47:28.724021 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 21:47:28.727241 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 5 21:47:28.730023 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 21:47:28.732899 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 21:47:28.738595 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 21:47:28.741956 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 21:47:28.742907 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 21:47:28.745396 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 21:47:28.745528 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 21:47:28.748170 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 21:47:28.748306 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 21:47:28.750171 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 5 21:47:28.752579 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 21:47:28.752700 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 21:47:28.756205 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 21:47:28.756408 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 21:47:28.767483 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 5 21:47:28.768837 systemd-udevd[1313]: Using default interface naming scheme 'v255'.
Aug 5 21:47:28.771101 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 5 21:47:28.772726 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 5 21:47:28.774527 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 5 21:47:28.778065 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 5 21:47:28.781897 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 21:47:28.782616 augenrules[1334]: No rules
Aug 5 21:47:28.787275 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 21:47:28.789307 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 21:47:28.793709 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 21:47:28.794762 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 21:47:28.794894 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 5 21:47:28.795478 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 21:47:28.797454 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 5 21:47:28.799625 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 21:47:28.799749 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 21:47:28.801411 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 21:47:28.801580 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 21:47:28.803433 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 21:47:28.803577 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 21:47:28.812408 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 5 21:47:28.814052 systemd[1]: Finished ensure-sysext.service.
Aug 5 21:47:28.821063 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 21:47:28.827086 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 21:47:28.832274 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 21:47:28.837212 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 21:47:28.839163 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 21:47:28.840306 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 21:47:28.842097 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1349)
Aug 5 21:47:28.842461 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 21:47:28.844898 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 5 21:47:28.845852 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 5 21:47:28.846325 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 21:47:28.846464 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 21:47:28.847841 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 21:47:28.847986 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 21:47:28.853633 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Aug 5 21:47:28.853901 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 21:47:28.854379 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 21:47:28.858257 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 21:47:28.864458 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 21:47:28.864621 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 21:47:28.866024 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 21:47:28.867182 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1361)
Aug 5 21:47:28.914313 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 5 21:47:28.914505 systemd-resolved[1309]: Positive Trust Anchors:
Aug 5 21:47:28.915619 systemd[1]: Reached target time-set.target - System Time Set.
Aug 5 21:47:28.918642 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 21:47:28.918682 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 21:47:28.921986 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 5 21:47:28.927431 systemd-resolved[1309]: Defaulting to hostname 'linux'.
Aug 5 21:47:28.928218 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 5 21:47:28.929356 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 21:47:28.930386 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 21:47:28.932484 systemd-networkd[1378]: lo: Link UP
Aug 5 21:47:28.932494 systemd-networkd[1378]: lo: Gained carrier
Aug 5 21:47:28.936767 systemd-networkd[1378]: Enumeration completed
Aug 5 21:47:28.936872 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 21:47:28.937401 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 21:47:28.937412 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 21:47:28.938188 systemd[1]: Reached target network.target - Network.
Aug 5 21:47:28.938345 systemd-networkd[1378]: eth0: Link UP
Aug 5 21:47:28.938353 systemd-networkd[1378]: eth0: Gained carrier
Aug 5 21:47:28.938366 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 21:47:28.948286 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 5 21:47:28.949635 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 5 21:47:28.955154 systemd-networkd[1378]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 5 21:47:28.955719 systemd-timesyncd[1381]: Network configuration changed, trying to establish connection.
Aug 5 21:47:28.956371 systemd-timesyncd[1381]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 5 21:47:28.956421 systemd-timesyncd[1381]: Initial clock synchronization to Mon 2024-08-05 21:47:28.610971 UTC.
Aug 5 21:47:28.956839 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 21:47:28.967148 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 5 21:47:28.978266 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 5 21:47:29.000478 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 21:47:29.001177 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:47:29.030409 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 5 21:47:29.031749 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 21:47:29.034175 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 21:47:29.035207 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 5 21:47:29.036323 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 5 21:47:29.037589 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 5 21:47:29.038769 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 5 21:47:29.039907 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 5 21:47:29.041040 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 5 21:47:29.041084 systemd[1]: Reached target paths.target - Path Units.
Aug 5 21:47:29.041878 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 21:47:29.043239 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 5 21:47:29.045356 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 5 21:47:29.057860 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 5 21:47:29.059891 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 5 21:47:29.061332 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 5 21:47:29.062418 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 21:47:29.063278 systemd[1]: Reached target basic.target - Basic System.
Aug 5 21:47:29.064122 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 5 21:47:29.064153 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 5 21:47:29.064999 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 5 21:47:29.066772 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 21:47:29.066856 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 5 21:47:29.069549 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 5 21:47:29.071241 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 5 21:47:29.072452 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 5 21:47:29.076319 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 5 21:47:29.082205 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 5 21:47:29.084242 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 5 21:47:29.086133 jq[1413]: false
Aug 5 21:47:29.086698 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 5 21:47:29.091533 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 5 21:47:29.098445 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 5 21:47:29.098804 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 5 21:47:29.099604 systemd[1]: Starting update-engine.service - Update Engine...
Aug 5 21:47:29.102952 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 5 21:47:29.103880 extend-filesystems[1414]: Found loop3
Aug 5 21:47:29.103880 extend-filesystems[1414]: Found loop4
Aug 5 21:47:29.103880 extend-filesystems[1414]: Found loop5
Aug 5 21:47:29.103880 extend-filesystems[1414]: Found vda
Aug 5 21:47:29.103880 extend-filesystems[1414]: Found vda1
Aug 5 21:47:29.103880 extend-filesystems[1414]: Found vda2
Aug 5 21:47:29.103880 extend-filesystems[1414]: Found vda3
Aug 5 21:47:29.103880 extend-filesystems[1414]: Found usr
Aug 5 21:47:29.103880 extend-filesystems[1414]: Found vda4
Aug 5 21:47:29.103880 extend-filesystems[1414]: Found vda6
Aug 5 21:47:29.122742 extend-filesystems[1414]: Found vda7
Aug 5 21:47:29.122742 extend-filesystems[1414]: Found vda9
Aug 5 21:47:29.122742 extend-filesystems[1414]: Checking size of /dev/vda9
Aug 5 21:47:29.116840 dbus-daemon[1412]: [system] SELinux support is enabled
Aug 5 21:47:29.105491 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 5 21:47:29.108348 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 5 21:47:29.108492 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 5 21:47:29.137458 jq[1426]: true
Aug 5 21:47:29.110342 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 5 21:47:29.110468 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 5 21:47:29.117223 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 5 21:47:29.132166 systemd[1]: motdgen.service: Deactivated successfully.
Aug 5 21:47:29.132325 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 5 21:47:29.134597 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 5 21:47:29.134649 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 5 21:47:29.136676 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 5 21:47:29.136704 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 5 21:47:29.140215 jq[1440]: true
Aug 5 21:47:29.145273 extend-filesystems[1414]: Resized partition /dev/vda9
Aug 5 21:47:29.152739 extend-filesystems[1450]: resize2fs 1.47.0 (5-Feb-2023)
Aug 5 21:47:29.157192 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 5 21:47:29.157235 tar[1431]: linux-arm64/helm
Aug 5 21:47:29.156866 (ntainerd)[1442]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 5 21:47:29.163697 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1361)
Aug 5 21:47:29.166680 update_engine[1424]: I0805 21:47:29.166484 1424 main.cc:92] Flatcar Update Engine starting
Aug 5 21:47:29.168213 systemd[1]: Started update-engine.service - Update Engine.
Aug 5 21:47:29.168621 update_engine[1424]: I0805 21:47:29.168201 1424 update_check_scheduler.cc:74] Next update check in 8m33s
Aug 5 21:47:29.178332 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 5 21:47:29.181666 systemd-logind[1422]: Watching system buttons on /dev/input/event0 (Power Button)
Aug 5 21:47:29.183763 systemd-logind[1422]: New seat seat0.
Aug 5 21:47:29.185819 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 5 21:47:29.191852 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 5 21:47:29.210667 extend-filesystems[1450]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 5 21:47:29.210667 extend-filesystems[1450]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 5 21:47:29.210667 extend-filesystems[1450]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 5 21:47:29.214562 extend-filesystems[1414]: Resized filesystem in /dev/vda9
Aug 5 21:47:29.213513 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 5 21:47:29.215343 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 5 21:47:29.217624 bash[1467]: Updated "/home/core/.ssh/authorized_keys"
Aug 5 21:47:29.221482 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 5 21:47:29.223215 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug 5 21:47:29.226579 locksmithd[1460]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 5 21:47:29.356507 containerd[1442]: time="2024-08-05T21:47:29.356408120Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18
Aug 5 21:47:29.383823 containerd[1442]: time="2024-08-05T21:47:29.383774685Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 5 21:47:29.383823 containerd[1442]: time="2024-08-05T21:47:29.383818814Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 5 21:47:29.385414 containerd[1442]: time="2024-08-05T21:47:29.385272974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 5 21:47:29.385414 containerd[1442]: time="2024-08-05T21:47:29.385311209Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 5 21:47:29.385529 containerd[1442]: time="2024-08-05T21:47:29.385506557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 21:47:29.385573 containerd[1442]: time="2024-08-05T21:47:29.385530248Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 5 21:47:29.385618 containerd[1442]: time="2024-08-05T21:47:29.385603886Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 5 21:47:29.385666 containerd[1442]: time="2024-08-05T21:47:29.385652647Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 21:47:29.385688 containerd[1442]: time="2024-08-05T21:47:29.385666387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 5 21:47:29.385823 containerd[1442]: time="2024-08-05T21:47:29.385723759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 5 21:47:29.385916 containerd[1442]: time="2024-08-05T21:47:29.385896065Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 5 21:47:29.385939 containerd[1442]: time="2024-08-05T21:47:29.385919986Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 5 21:47:29.385939 containerd[1442]: time="2024-08-05T21:47:29.385929708Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 5 21:47:29.386047 containerd[1442]: time="2024-08-05T21:47:29.386017392Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 21:47:29.386047 containerd[1442]: time="2024-08-05T21:47:29.386042997Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 5 21:47:29.386124 containerd[1442]: time="2024-08-05T21:47:29.386107603Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 5 21:47:29.386240 containerd[1442]: time="2024-08-05T21:47:29.386125017Z" level=info msg="metadata content store policy set" policy=shared
Aug 5 21:47:29.389531 containerd[1442]: time="2024-08-05T21:47:29.389501536Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 5 21:47:29.389586 containerd[1442]: time="2024-08-05T21:47:29.389537743Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 5 21:47:29.389586 containerd[1442]: time="2024-08-05T21:47:29.389549761Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 5 21:47:29.391327 containerd[1442]: time="2024-08-05T21:47:29.391309113Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 5 21:47:29.391327 containerd[1442]: time="2024-08-05T21:47:29.391326795Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 5 21:47:29.391402 containerd[1442]: time="2024-08-05T21:47:29.391336402Z" level=info msg="NRI interface is disabled by configuration."
Aug 5 21:47:29.391402 containerd[1442]: time="2024-08-05T21:47:29.391347233Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 5 21:47:29.391543 containerd[1442]: time="2024-08-05T21:47:29.391461212Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 5 21:47:29.391543 containerd[1442]: time="2024-08-05T21:47:29.391488003Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 5 21:47:29.391543 containerd[1442]: time="2024-08-05T21:47:29.391500212Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 5 21:47:29.391543 containerd[1442]: time="2024-08-05T21:47:29.391513455Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 5 21:47:29.391543 containerd[1442]: time="2024-08-05T21:47:29.391526085Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 5 21:47:29.391543 containerd[1442]: time="2024-08-05T21:47:29.391540935Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 5 21:47:29.391707 containerd[1442]: time="2024-08-05T21:47:29.391553030Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 5 21:47:29.391707 containerd[1442]: time="2024-08-05T21:47:29.391564397Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 5 21:47:29.391707 containerd[1442]: time="2024-08-05T21:47:29.391579630Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 5 21:47:29.391707 containerd[1442]: time="2024-08-05T21:47:29.391591380Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 5 21:47:29.391707 containerd[1442]: time="2024-08-05T21:47:29.391602785Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 5 21:47:29.391707 containerd[1442]: time="2024-08-05T21:47:29.391612966Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 5 21:47:29.391707 containerd[1442]: time="2024-08-05T21:47:29.391697856Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 5 21:47:29.391971 containerd[1442]: time="2024-08-05T21:47:29.391953293Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 5 21:47:29.392011 containerd[1442]: time="2024-08-05T21:47:29.391981883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 5 21:47:29.392011 containerd[1442]: time="2024-08-05T21:47:29.392000599Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 5 21:47:29.392056 containerd[1442]: time="2024-08-05T21:47:29.392021917Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 5 21:47:29.392718 containerd[1442]: time="2024-08-05T21:47:29.392666365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 5 21:47:29.392718 containerd[1442]: time="2024-08-05T21:47:29.392695989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 5 21:47:29.392718 containerd[1442]: time="2024-08-05T21:47:29.392708772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 5 21:47:29.392718 containerd[1442]: time="2024-08-05T21:47:29.392719756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 5 21:47:29.392802 containerd[1442]: time="2024-08-05T21:47:29.392731813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 5 21:47:29.392802 containerd[1442]: time="2024-08-05T21:47:29.392744979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 5 21:47:29.392802 containerd[1442]: time="2024-08-05T21:47:29.392755887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 5 21:47:29.392802 containerd[1442]: time="2024-08-05T21:47:29.392766488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 5 21:47:29.392802 containerd[1442]: time="2024-08-05T21:47:29.392783367Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 5 21:47:29.393005 containerd[1442]: time="2024-08-05T21:47:29.392912922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 5 21:47:29.393005 containerd[1442]: time="2024-08-05T21:47:29.392929342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 5 21:47:29.393005 containerd[1442]: time="2024-08-05T21:47:29.392941092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 5 21:47:29.393005 containerd[1442]: time="2024-08-05T21:47:29.392952421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 5 21:47:29.393005 containerd[1442]: time="2024-08-05T21:47:29.392963558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 5 21:47:29.393005 containerd[1442]: time="2024-08-05T21:47:29.392978217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 5 21:47:29.393005 containerd[1442]: time="2024-08-05T21:47:29.392990120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 5 21:47:29.393005 containerd[1442]: time="2024-08-05T21:47:29.393000186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 5 21:47:29.393433 containerd[1442]: time="2024-08-05T21:47:29.393376146Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 5 21:47:29.393433 containerd[1442]: time="2024-08-05T21:47:29.393433938Z" level=info msg="Connect containerd service"
Aug 5 21:47:29.393566 containerd[1442]: time="2024-08-05T21:47:29.393460538Z" level=info msg="using legacy CRI server"
Aug 5 21:47:29.393566 containerd[1442]: time="2024-08-05T21:47:29.393467007Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 5 21:47:29.393600 containerd[1442]: time="2024-08-05T21:47:29.393594113Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 5 21:47:29.394185 containerd[1442]: time="2024-08-05T21:47:29.394155966Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 5 21:47:29.394247 containerd[1442]: time="2024-08-05T21:47:29.394205033Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 5 21:47:29.394247 containerd[1442]: time="2024-08-05T21:47:29.394220955Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 5 21:47:29.394247 containerd[1442]: time="2024-08-05T21:47:29.394231021Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 5 21:47:29.394247 containerd[1442]: time="2024-08-05T21:47:29.394241661Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 5 21:47:29.394846 containerd[1442]: time="2024-08-05T21:47:29.394565530Z" level=info msg="Start subscribing containerd event"
Aug 5 21:47:29.394846 containerd[1442]: time="2024-08-05T21:47:29.394689077Z" level=info msg="Start recovering state"
Aug 5 21:47:29.394846 containerd[1442]: time="2024-08-05T21:47:29.394760418Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 5 21:47:29.395021 containerd[1442]: time="2024-08-05T21:47:29.395005330Z" level=info msg="Start event monitor"
Aug 5 21:47:29.395174 containerd[1442]: time="2024-08-05T21:47:29.395157658Z" level=info msg="Start snapshots syncer"
Aug 5 21:47:29.395328 containerd[1442]: time="2024-08-05T21:47:29.395238224Z" level=info msg="Start cni network conf syncer for default"
Aug 5 21:47:29.395328 containerd[1442]: time="2024-08-05T21:47:29.395250624Z" level=info msg="Start streaming server"
Aug 5 21:47:29.395602 containerd[1442]: time="2024-08-05T21:47:29.395036293Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 5 21:47:29.395881 systemd[1]: Started containerd.service - containerd container runtime.
Aug 5 21:47:29.398477 containerd[1442]: time="2024-08-05T21:47:29.396989730Z" level=info msg="containerd successfully booted in 0.041529s"
Aug 5 21:47:29.503626 tar[1431]: linux-arm64/LICENSE
Aug 5 21:47:29.503773 tar[1431]: linux-arm64/README.md
Aug 5 21:47:29.514221 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 5 21:47:30.030739 sshd_keygen[1428]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 5 21:47:30.048055 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 5 21:47:30.056302 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 5 21:47:30.061046 systemd[1]: issuegen.service: Deactivated successfully.
Aug 5 21:47:30.062175 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 5 21:47:30.064443 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 5 21:47:30.075995 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 5 21:47:30.086307 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 5 21:47:30.087987 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Aug 5 21:47:30.088944 systemd[1]: Reached target getty.target - Login Prompts.
Aug 5 21:47:30.751300 systemd-networkd[1378]: eth0: Gained IPv6LL
Aug 5 21:47:30.753731 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 5 21:47:30.755484 systemd[1]: Reached target network-online.target - Network is Online.
Aug 5 21:47:30.766376 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Aug 5 21:47:30.768441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:47:30.770257 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 5 21:47:30.783481 systemd[1]: coreos-metadata.service: Deactivated successfully.
Aug 5 21:47:30.783975 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Aug 5 21:47:30.786047 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 5 21:47:30.789589 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 5 21:47:31.223312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:47:31.224708 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 5 21:47:31.227224 systemd[1]: Startup finished in 524ms (kernel) + 4.838s (initrd) + 3.721s (userspace) = 9.084s.
Aug 5 21:47:31.227992 (kubelet)[1526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 21:47:31.670430 kubelet[1526]: E0805 21:47:31.670300 1526 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 21:47:31.673536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 21:47:31.673667 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 21:47:34.874855 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 5 21:47:34.875969 systemd[1]: Started sshd@0-10.0.0.80:22-10.0.0.1:48018.service - OpenSSH per-connection server daemon (10.0.0.1:48018).
Aug 5 21:47:34.922720 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 48018 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:47:34.924366 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:47:34.933665 systemd-logind[1422]: New session 1 of user core.
Aug 5 21:47:34.934549 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 5 21:47:34.941251 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 5 21:47:34.951098 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 5 21:47:34.953013 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 5 21:47:34.958765 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:47:35.029631 systemd[1544]: Queued start job for default target default.target.
Aug 5 21:47:35.039860 systemd[1544]: Created slice app.slice - User Application Slice.
Aug 5 21:47:35.039887 systemd[1544]: Reached target paths.target - Paths.
Aug 5 21:47:35.039899 systemd[1544]: Reached target timers.target - Timers.
Aug 5 21:47:35.040956 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 5 21:47:35.051087 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 5 21:47:35.051140 systemd[1544]: Reached target sockets.target - Sockets.
Aug 5 21:47:35.051152 systemd[1544]: Reached target basic.target - Basic System.
Aug 5 21:47:35.051184 systemd[1544]: Reached target default.target - Main User Target.
Aug 5 21:47:35.051207 systemd[1544]: Startup finished in 87ms.
Aug 5 21:47:35.051401 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 5 21:47:35.052557 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 5 21:47:35.113839 systemd[1]: Started sshd@1-10.0.0.80:22-10.0.0.1:48028.service - OpenSSH per-connection server daemon (10.0.0.1:48028).
Aug 5 21:47:35.150491 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 48028 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:47:35.151672 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:47:35.155147 systemd-logind[1422]: New session 2 of user core.
Aug 5 21:47:35.164220 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 5 21:47:35.213826 sshd[1555]: pam_unix(sshd:session): session closed for user core
Aug 5 21:47:35.224241 systemd[1]: sshd@1-10.0.0.80:22-10.0.0.1:48028.service: Deactivated successfully.
Aug 5 21:47:35.225784 systemd[1]: session-2.scope: Deactivated successfully.
Aug 5 21:47:35.228280 systemd-logind[1422]: Session 2 logged out. Waiting for processes to exit.
Aug 5 21:47:35.229847 systemd[1]: Started sshd@2-10.0.0.80:22-10.0.0.1:48032.service - OpenSSH per-connection server daemon (10.0.0.1:48032).
Aug 5 21:47:35.230873 systemd-logind[1422]: Removed session 2.
Aug 5 21:47:35.266933 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 48032 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:47:35.268036 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:47:35.271804 systemd-logind[1422]: New session 3 of user core.
Aug 5 21:47:35.284249 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 5 21:47:35.330691 sshd[1562]: pam_unix(sshd:session): session closed for user core
Aug 5 21:47:35.339334 systemd[1]: sshd@2-10.0.0.80:22-10.0.0.1:48032.service: Deactivated successfully.
Aug 5 21:47:35.342299 systemd[1]: session-3.scope: Deactivated successfully.
Aug 5 21:47:35.343487 systemd-logind[1422]: Session 3 logged out. Waiting for processes to exit.
Aug 5 21:47:35.344526 systemd[1]: Started sshd@3-10.0.0.80:22-10.0.0.1:48038.service - OpenSSH per-connection server daemon (10.0.0.1:48038).
Aug 5 21:47:35.345273 systemd-logind[1422]: Removed session 3.
Aug 5 21:47:35.382102 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 48038 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:47:35.383228 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:47:35.387085 systemd-logind[1422]: New session 4 of user core.
Aug 5 21:47:35.397220 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 5 21:47:35.448021 sshd[1569]: pam_unix(sshd:session): session closed for user core
Aug 5 21:47:35.457324 systemd[1]: sshd@3-10.0.0.80:22-10.0.0.1:48038.service: Deactivated successfully.
Aug 5 21:47:35.460270 systemd[1]: session-4.scope: Deactivated successfully.
Aug 5 21:47:35.461462 systemd-logind[1422]: Session 4 logged out. Waiting for processes to exit.
Aug 5 21:47:35.473299 systemd[1]: Started sshd@4-10.0.0.80:22-10.0.0.1:48040.service - OpenSSH per-connection server daemon (10.0.0.1:48040).
Aug 5 21:47:35.474018 systemd-logind[1422]: Removed session 4.
Aug 5 21:47:35.507351 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 48040 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:47:35.508468 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:47:35.512160 systemd-logind[1422]: New session 5 of user core.
Aug 5 21:47:35.525307 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 5 21:47:35.584961 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 5 21:47:35.585225 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 21:47:35.601729 sudo[1581]: pam_unix(sudo:session): session closed for user root
Aug 5 21:47:35.603402 sshd[1576]: pam_unix(sshd:session): session closed for user core
Aug 5 21:47:35.614431 systemd[1]: sshd@4-10.0.0.80:22-10.0.0.1:48040.service: Deactivated successfully.
Aug 5 21:47:35.617396 systemd[1]: session-5.scope: Deactivated successfully.
Aug 5 21:47:35.618669 systemd-logind[1422]: Session 5 logged out. Waiting for processes to exit.
Aug 5 21:47:35.630335 systemd[1]: Started sshd@5-10.0.0.80:22-10.0.0.1:48052.service - OpenSSH per-connection server daemon (10.0.0.1:48052).
Aug 5 21:47:35.631090 systemd-logind[1422]: Removed session 5.
Aug 5 21:47:35.665198 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 48052 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:47:35.666498 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:47:35.670056 systemd-logind[1422]: New session 6 of user core.
Aug 5 21:47:35.686229 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 5 21:47:35.735865 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 5 21:47:35.736149 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 21:47:35.739557 sudo[1590]: pam_unix(sudo:session): session closed for user root
Aug 5 21:47:35.744046 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 5 21:47:35.744319 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 21:47:35.760368 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 5 21:47:35.761549 auditctl[1593]: No rules
Aug 5 21:47:35.762398 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 5 21:47:35.762607 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 5 21:47:35.764237 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 21:47:35.786906 augenrules[1611]: No rules
Aug 5 21:47:35.789154 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 5 21:47:35.790522 sudo[1589]: pam_unix(sudo:session): session closed for user root
Aug 5 21:47:35.792265 sshd[1586]: pam_unix(sshd:session): session closed for user core
Aug 5 21:47:35.804344 systemd[1]: sshd@5-10.0.0.80:22-10.0.0.1:48052.service: Deactivated successfully.
Aug 5 21:47:35.805789 systemd[1]: session-6.scope: Deactivated successfully.
Aug 5 21:47:35.807012 systemd-logind[1422]: Session 6 logged out. Waiting for processes to exit.
Aug 5 21:47:35.808179 systemd[1]: Started sshd@6-10.0.0.80:22-10.0.0.1:48060.service - OpenSSH per-connection server daemon (10.0.0.1:48060).
Aug 5 21:47:35.808890 systemd-logind[1422]: Removed session 6.
Aug 5 21:47:35.845889 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 48060 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:47:35.847064 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:47:35.851086 systemd-logind[1422]: New session 7 of user core.
Aug 5 21:47:35.860210 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 5 21:47:35.909912 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 5 21:47:35.910175 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 21:47:36.017395 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 5 21:47:36.017474 (dockerd)[1633]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 5 21:47:36.240180 dockerd[1633]: time="2024-08-05T21:47:36.240117366Z" level=info msg="Starting up"
Aug 5 21:47:36.325107 dockerd[1633]: time="2024-08-05T21:47:36.325000501Z" level=info msg="Loading containers: start."
Aug 5 21:47:36.412102 kernel: Initializing XFRM netlink socket
Aug 5 21:47:36.469946 systemd-networkd[1378]: docker0: Link UP
Aug 5 21:47:36.489391 dockerd[1633]: time="2024-08-05T21:47:36.489349617Z" level=info msg="Loading containers: done."
Aug 5 21:47:36.546869 dockerd[1633]: time="2024-08-05T21:47:36.546809329Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 5 21:47:36.547054 dockerd[1633]: time="2024-08-05T21:47:36.547024066Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Aug 5 21:47:36.547183 dockerd[1633]: time="2024-08-05T21:47:36.547156778Z" level=info msg="Daemon has completed initialization"
Aug 5 21:47:36.571124 dockerd[1633]: time="2024-08-05T21:47:36.570956366Z" level=info msg="API listen on /run/docker.sock"
Aug 5 21:47:36.571684 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 5 21:47:37.179359 containerd[1442]: time="2024-08-05T21:47:37.179294745Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.7\""
Aug 5 21:47:37.824342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3800001693.mount: Deactivated successfully.
Aug 5 21:47:39.430894 containerd[1442]: time="2024-08-05T21:47:39.430828530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:47:39.431415 containerd[1442]: time="2024-08-05T21:47:39.431372680Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.7: active requests=0, bytes read=32285113"
Aug 5 21:47:39.432422 containerd[1442]: time="2024-08-05T21:47:39.432375640Z" level=info msg="ImageCreate event name:\"sha256:09da0e2c1634057a9cb3d1ab3187c1e87431acaae308ee0504a9f637fc1b1165\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:47:39.435497 containerd[1442]: time="2024-08-05T21:47:39.435460211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7b104771c13b9e3537846c3f6949000785e1fbc66d07f123ebcea22c8eb918b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:47:39.436610 containerd[1442]: time="2024-08-05T21:47:39.436568956Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.7\" with image id \"sha256:09da0e2c1634057a9cb3d1ab3187c1e87431acaae308ee0504a9f637fc1b1165\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7b104771c13b9e3537846c3f6949000785e1fbc66d07f123ebcea22c8eb918b3\", size \"32281911\" in 2.257199031s"
Aug 5 21:47:39.436610 containerd[1442]: time="2024-08-05T21:47:39.436608937Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.7\" returns image reference \"sha256:09da0e2c1634057a9cb3d1ab3187c1e87431acaae308ee0504a9f637fc1b1165\""
Aug 5 21:47:39.456056 containerd[1442]: time="2024-08-05T21:47:39.455975849Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.7\""
Aug 5 21:47:41.697319 containerd[1442]: time="2024-08-05T21:47:41.697257974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:47:41.698013 containerd[1442]: time="2024-08-05T21:47:41.697986861Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.7: active requests=0, bytes read=29362253"
Aug 5 21:47:41.698660 containerd[1442]: time="2024-08-05T21:47:41.698616618Z" level=info msg="ImageCreate event name:\"sha256:42d71ec0804ba94e173cb2bf05d873aad38ec4db300c158498d54f2b8c8368d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:47:41.701393 containerd[1442]: time="2024-08-05T21:47:41.701341361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e3356f078f7ce72984385d4ca5e726a8cb05ce355d6b158f41aa9b5dbaff9b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:47:41.702545 containerd[1442]: time="2024-08-05T21:47:41.702469982Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.7\" with image id \"sha256:42d71ec0804ba94e173cb2bf05d873aad38ec4db300c158498d54f2b8c8368d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e3356f078f7ce72984385d4ca5e726a8cb05ce355d6b158f41aa9b5dbaff9b19\", size \"30849518\" in 2.246460655s"
Aug 5 21:47:41.702545 containerd[1442]: time="2024-08-05T21:47:41.702504083Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.7\" returns image reference \"sha256:42d71ec0804ba94e173cb2bf05d873aad38ec4db300c158498d54f2b8c8368d1\""
Aug 5 21:47:41.722058 containerd[1442]: time="2024-08-05T21:47:41.722007313Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.7\""
Aug 5 21:47:41.923946 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 5 21:47:41.932285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:47:42.015961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:47:42.019391 (kubelet)[1852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 21:47:42.058176 kubelet[1852]: E0805 21:47:42.058105 1852 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 21:47:42.062582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 21:47:42.062728 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 21:47:42.993749 containerd[1442]: time="2024-08-05T21:47:42.993695843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:47:42.994138 containerd[1442]: time="2024-08-05T21:47:42.994116379Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.7: active requests=0, bytes read=15751351"
Aug 5 21:47:42.995747 containerd[1442]: time="2024-08-05T21:47:42.995720086Z" level=info msg="ImageCreate event name:\"sha256:aa0debff447ecc9a9254154628d35be75d6ddcf6f680bc2672e176729f16ac03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:47:42.999020 containerd[1442]: time="2024-08-05T21:47:42.998966798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c6203fbc102cc80a7d934946b7eacb7491480a65db56db203cb3035deecaaa39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:47:43.000136 containerd[1442]: time="2024-08-05T21:47:43.000106422Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.7\" with image id \"sha256:aa0debff447ecc9a9254154628d35be75d6ddcf6f680bc2672e176729f16ac03\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c6203fbc102cc80a7d934946b7eacb7491480a65db56db203cb3035deecaaa39\", size \"17238634\" in 1.278062816s"
Aug 5 21:47:43.000318 containerd[1442]: time="2024-08-05T21:47:43.000220706Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.7\" returns image reference \"sha256:aa0debff447ecc9a9254154628d35be75d6ddcf6f680bc2672e176729f16ac03\""
Aug 5 21:47:43.019103 containerd[1442]: time="2024-08-05T21:47:43.019046853Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.7\""
Aug 5 21:47:44.061473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2677365250.mount: Deactivated successfully.
Aug 5 21:47:44.407038 containerd[1442]: time="2024-08-05T21:47:44.406916416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:47:44.407621 containerd[1442]: time="2024-08-05T21:47:44.407582473Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.7: active requests=0, bytes read=25251734"
Aug 5 21:47:44.408324 containerd[1442]: time="2024-08-05T21:47:44.408295178Z" level=info msg="ImageCreate event name:\"sha256:25c9adc8cf12a1aec7e02751b8e9faca4907a0551a6d16c425e576622fdb59db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:47:44.410229 containerd[1442]: time="2024-08-05T21:47:44.410180851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4d5e787d71c41243379cbb323d2b3a920fa50825cab19d20ef3344a808d18c4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:47:44.411250 containerd[1442]: time="2024-08-05T21:47:44.411207675Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.7\" with image id \"sha256:25c9adc8cf12a1aec7e02751b8e9faca4907a0551a6d16c425e576622fdb59db\", repo tag \"registry.k8s.io/kube-proxy:v1.29.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:4d5e787d71c41243379cbb323d2b3a920fa50825cab19d20ef3344a808d18c4e\", size \"25250751\" in 1.392111585s"
Aug 5 21:47:44.411287 containerd[1442]: time="2024-08-05T21:47:44.411255833Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.7\" returns image reference \"sha256:25c9adc8cf12a1aec7e02751b8e9faca4907a0551a6d16c425e576622fdb59db\""
Aug 5 21:47:44.429473 containerd[1442]: time="2024-08-05T21:47:44.429444515Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Aug 5 21:47:45.029659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4291411560.mount: Deactivated successfully.
Aug 5 21:47:45.779337 containerd[1442]: time="2024-08-05T21:47:45.779267339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:47:45.780012 containerd[1442]: time="2024-08-05T21:47:45.779957285Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Aug 5 21:47:45.780656 containerd[1442]: time="2024-08-05T21:47:45.780611931Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:47:45.783293 containerd[1442]: time="2024-08-05T21:47:45.783259569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:47:45.784455 containerd[1442]: time="2024-08-05T21:47:45.784409678Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.354930081s"
Aug 5 21:47:45.784490 containerd[1442]: time="2024-08-05T21:47:45.784456319Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Aug 5 21:47:45.804217 containerd[1442]: time="2024-08-05T21:47:45.804182553Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Aug 5 21:47:46.304921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3158362421.mount: Deactivated successfully.
Aug 5 21:47:46.310012 containerd[1442]: time="2024-08-05T21:47:46.309237999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:47:46.310012 containerd[1442]: time="2024-08-05T21:47:46.309980274Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Aug 5 21:47:46.310732 containerd[1442]: time="2024-08-05T21:47:46.310701723Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:47:46.313855 containerd[1442]: time="2024-08-05T21:47:46.313821707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:47:46.314537 containerd[1442]: time="2024-08-05T21:47:46.314506122Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 510.284365ms"
Aug 5 21:47:46.314636 containerd[1442]: time="2024-08-05T21:47:46.314619056Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Aug 5 21:47:46.332359 containerd[1442]: time="2024-08-05T21:47:46.332334066Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Aug 5 21:47:46.847419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1562116709.mount: Deactivated successfully.
Aug 5 21:47:49.058463 containerd[1442]: time="2024-08-05T21:47:49.058411979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:47:49.059021 containerd[1442]: time="2024-08-05T21:47:49.058985181Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788"
Aug 5 21:47:49.060223 containerd[1442]: time="2024-08-05T21:47:49.060187578Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:47:49.063305 containerd[1442]: time="2024-08-05T21:47:49.063270222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:47:49.064589 containerd[1442]: time="2024-08-05T21:47:49.064561553Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.732101279s"
Aug 5 21:47:49.064647 containerd[1442]: time="2024-08-05T21:47:49.064593218Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Aug 5 21:47:52.313044 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 5 21:47:52.323318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:47:52.418936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:47:52.424746 (kubelet)[2072]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 21:47:52.468973 kubelet[2072]: E0805 21:47:52.468906 2072 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 21:47:52.472125 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 21:47:52.472293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 21:47:53.813330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:47:53.832282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:47:53.847685 systemd[1]: Reloading requested from client PID 2087 ('systemctl') (unit session-7.scope)...
Aug 5 21:47:53.847700 systemd[1]: Reloading...
Aug 5 21:47:53.920114 zram_generator::config[2122]: No configuration found.
Aug 5 21:47:54.002405 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 21:47:54.055748 systemd[1]: Reloading finished in 207 ms.
Aug 5 21:47:54.092350 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 5 21:47:54.092416 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 5 21:47:54.092605 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:47:54.094846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:47:54.293428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:47:54.297542 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 5 21:47:54.335592 kubelet[2170]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 21:47:54.335592 kubelet[2170]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 5 21:47:54.335592 kubelet[2170]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 21:47:54.335919 kubelet[2170]: I0805 21:47:54.335644 2170 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 5 21:47:55.421859 kubelet[2170]: I0805 21:47:55.421824 2170 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Aug 5 21:47:55.421859 kubelet[2170]: I0805 21:47:55.421853 2170 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 5 21:47:55.422195 kubelet[2170]: I0805 21:47:55.422064 2170 server.go:919] "Client rotation is on, will bootstrap in background"
Aug 5 21:47:55.481864 kubelet[2170]: E0805 21:47:55.481839 2170 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.80:6443: connect: connection refused
Aug 5 21:47:55.482077 kubelet[2170]: I0805 21:47:55.482055 2170 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 21:47:55.490942 kubelet[2170]: I0805 21:47:55.490920 2170 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 5 21:47:55.491873 kubelet[2170]: I0805 21:47:55.491837 2170 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 5 21:47:55.492059 kubelet[2170]: I0805 21:47:55.492029 2170 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 5 21:47:55.492059 kubelet[2170]: I0805 21:47:55.492051 2170 topology_manager.go:138] "Creating topology manager with none policy"
Aug 5 21:47:55.492059 kubelet[2170]: I0805 21:47:55.492060 2170 container_manager_linux.go:301] "Creating device plugin manager"
Aug 5 21:47:55.493142 kubelet[2170]: I0805 21:47:55.493117 2170 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 21:47:55.497161 kubelet[2170]: I0805 21:47:55.497137 2170 kubelet.go:396] "Attempting to sync node with API server"
Aug 5 21:47:55.497189 kubelet[2170]: I0805 21:47:55.497163 2170 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 5 21:47:55.497189 kubelet[2170]: I0805 21:47:55.497183 2170 kubelet.go:312] "Adding apiserver pod source"
Aug 5 21:47:55.497232 kubelet[2170]: I0805 21:47:55.497197 2170 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 5 21:47:55.497812 kubelet[2170]: W0805 21:47:55.497569 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
Aug 5 21:47:55.497812 kubelet[2170]: E0805 21:47:55.497612 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
Aug 5 21:47:55.497812 kubelet[2170]: W0805 21:47:55.497668 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
Aug 5 21:47:55.497812 kubelet[2170]: E0805 21:47:55.497703 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
Aug 5 21:47:55.498729 kubelet[2170]: I0805 21:47:55.498709 2170 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1"
Aug 5 21:47:55.499226 kubelet[2170]: I0805 21:47:55.499208 2170 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 5 21:47:55.499775 kubelet[2170]: W0805 21:47:55.499751 2170 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 5 21:47:55.500825 kubelet[2170]: I0805 21:47:55.500799 2170 server.go:1256] "Started kubelet"
Aug 5 21:47:55.500897 kubelet[2170]: I0805 21:47:55.500877 2170 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Aug 5 21:47:55.502117 kubelet[2170]: I0805 21:47:55.502096 2170 server.go:461] "Adding debug handlers to kubelet server"
Aug 5 21:47:55.504121 kubelet[2170]: I0805 21:47:55.504090 2170 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 5 21:47:55.504912 kubelet[2170]: I0805 21:47:55.504874 2170 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 5 21:47:55.504982 kubelet[2170]: I0805 21:47:55.504976 2170 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Aug 5 21:47:55.505616 kubelet[2170]: I0805 21:47:55.505025 2170 reconciler_new.go:29] "Reconciler: start to sync state"
Aug 5 21:47:55.505616 kubelet[2170]: W0805 21:47:55.505272 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
Aug 5 21:47:55.505616 kubelet[2170]: E0805 21:47:55.505304 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
Aug 5 21:47:55.505616 kubelet[2170]: E0805 21:47:55.505492 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="200ms"
Aug 5 21:47:55.505754 kubelet[2170]: I0805 21:47:55.505628 2170 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 5 21:47:55.505778 kubelet[2170]: I0805 21:47:55.505772 2170 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 5 21:47:55.507134 kubelet[2170]: E0805 21:47:55.507113 2170 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 5 21:47:55.507204 kubelet[2170]: I0805 21:47:55.507164 2170 factory.go:221] Registration of the containerd container factory successfully
Aug 5 21:47:55.507204 kubelet[2170]: I0805 21:47:55.507176 2170 factory.go:221] Registration of the systemd container factory successfully
Aug 5 21:47:55.507273 kubelet[2170]: I0805 21:47:55.507251 2170 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 5 21:47:55.510018 kubelet[2170]: E0805 21:47:55.509995 2170 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.80:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.80:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17e8f365adb4e507 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-08-05 21:47:55.500774663 +0000 UTC m=+1.199825615,LastTimestamp:2024-08-05 21:47:55.500774663 +0000 UTC m=+1.199825615,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug 5 21:47:55.519360 kubelet[2170]: I0805 21:47:55.519343 2170 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 5 21:47:55.519454 kubelet[2170]: I0805 21:47:55.519444 2170 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 5 21:47:55.519509 kubelet[2170]: I0805 21:47:55.519502 2170 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 21:47:55.521084 kubelet[2170]: I0805 21:47:55.520957 2170 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 5 21:47:55.521462 kubelet[2170]: I0805 21:47:55.521446 2170 policy_none.go:49] "None policy: Start"
Aug 5 21:47:55.522124 kubelet[2170]: I0805 21:47:55.522002 2170 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 5 21:47:55.522124 kubelet[2170]: I0805 21:47:55.522040 2170 state_mem.go:35] "Initializing new in-memory state store"
Aug 5 21:47:55.522238 kubelet[2170]: I0805 21:47:55.522221 2170 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 5 21:47:55.522266 kubelet[2170]: I0805 21:47:55.522240 2170 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 5 21:47:55.522266 kubelet[2170]: I0805 21:47:55.522255 2170 kubelet.go:2329] "Starting kubelet main sync loop"
Aug 5 21:47:55.522305 kubelet[2170]: E0805 21:47:55.522299 2170 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 5 21:47:55.523448 kubelet[2170]: W0805 21:47:55.523359 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
Aug 5 21:47:55.523448 kubelet[2170]: E0805 21:47:55.523401 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
Aug 5 21:47:55.528331 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 5 21:47:55.551742 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 5 21:47:55.568201 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 5 21:47:55.569291 kubelet[2170]: I0805 21:47:55.569260 2170 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 21:47:55.569528 kubelet[2170]: I0805 21:47:55.569503 2170 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 21:47:55.570333 kubelet[2170]: E0805 21:47:55.570318 2170 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 5 21:47:55.605920 kubelet[2170]: I0805 21:47:55.605900 2170 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Aug 5 21:47:55.606397 kubelet[2170]: E0805 21:47:55.606371 2170 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Aug 5 21:47:55.622600 kubelet[2170]: I0805 21:47:55.622571 2170 topology_manager.go:215] "Topology Admit Handler" podUID="088f5b844ad7241e38f298babde6e061" podNamespace="kube-system" podName="kube-controller-manager-localhost" Aug 5 21:47:55.623342 kubelet[2170]: I0805 21:47:55.623323 2170 topology_manager.go:215] "Topology Admit Handler" podUID="cb686d9581fc5af7d1cc8e14735ce3db" podNamespace="kube-system" podName="kube-scheduler-localhost" Aug 5 21:47:55.624057 kubelet[2170]: I0805 21:47:55.624034 2170 topology_manager.go:215] "Topology Admit Handler" podUID="77e57979543c08b73e5e2e94c2c2e7c6" podNamespace="kube-system" podName="kube-apiserver-localhost" Aug 5 21:47:55.629146 systemd[1]: Created slice kubepods-burstable-pod088f5b844ad7241e38f298babde6e061.slice - libcontainer container kubepods-burstable-pod088f5b844ad7241e38f298babde6e061.slice. Aug 5 21:47:55.650850 systemd[1]: Created slice kubepods-burstable-podcb686d9581fc5af7d1cc8e14735ce3db.slice - libcontainer container kubepods-burstable-podcb686d9581fc5af7d1cc8e14735ce3db.slice. 
Aug 5 21:47:55.664571 systemd[1]: Created slice kubepods-burstable-pod77e57979543c08b73e5e2e94c2c2e7c6.slice - libcontainer container kubepods-burstable-pod77e57979543c08b73e5e2e94c2c2e7c6.slice. Aug 5 21:47:55.706143 kubelet[2170]: E0805 21:47:55.706052 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="400ms" Aug 5 21:47:55.806358 kubelet[2170]: I0805 21:47:55.806271 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:47:55.806358 kubelet[2170]: I0805 21:47:55.806311 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:47:55.806358 kubelet[2170]: I0805 21:47:55.806333 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77e57979543c08b73e5e2e94c2c2e7c6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"77e57979543c08b73e5e2e94c2c2e7c6\") " pod="kube-system/kube-apiserver-localhost" Aug 5 21:47:55.806485 kubelet[2170]: I0805 21:47:55.806406 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-k8s-certs\") pod \"kube-controller-manager-localhost\" 
(UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:47:55.806485 kubelet[2170]: I0805 21:47:55.806438 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:47:55.806485 kubelet[2170]: I0805 21:47:55.806475 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:47:55.806562 kubelet[2170]: I0805 21:47:55.806497 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb686d9581fc5af7d1cc8e14735ce3db-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cb686d9581fc5af7d1cc8e14735ce3db\") " pod="kube-system/kube-scheduler-localhost" Aug 5 21:47:55.806562 kubelet[2170]: I0805 21:47:55.806558 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77e57979543c08b73e5e2e94c2c2e7c6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"77e57979543c08b73e5e2e94c2c2e7c6\") " pod="kube-system/kube-apiserver-localhost" Aug 5 21:47:55.806605 kubelet[2170]: I0805 21:47:55.806584 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77e57979543c08b73e5e2e94c2c2e7c6-usr-share-ca-certificates\") pod 
\"kube-apiserver-localhost\" (UID: \"77e57979543c08b73e5e2e94c2c2e7c6\") " pod="kube-system/kube-apiserver-localhost" Aug 5 21:47:55.807502 kubelet[2170]: I0805 21:47:55.807448 2170 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Aug 5 21:47:55.807765 kubelet[2170]: E0805 21:47:55.807741 2170 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Aug 5 21:47:55.949206 kubelet[2170]: E0805 21:47:55.949173 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:47:55.949656 containerd[1442]: time="2024-08-05T21:47:55.949604782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:088f5b844ad7241e38f298babde6e061,Namespace:kube-system,Attempt:0,}" Aug 5 21:47:55.962855 kubelet[2170]: E0805 21:47:55.962781 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:47:55.963247 containerd[1442]: time="2024-08-05T21:47:55.963125150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cb686d9581fc5af7d1cc8e14735ce3db,Namespace:kube-system,Attempt:0,}" Aug 5 21:47:55.966357 kubelet[2170]: E0805 21:47:55.966339 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:47:55.966644 containerd[1442]: time="2024-08-05T21:47:55.966617022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:77e57979543c08b73e5e2e94c2c2e7c6,Namespace:kube-system,Attempt:0,}" Aug 5 21:47:56.106878 kubelet[2170]: E0805 21:47:56.106855 2170 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="800ms" Aug 5 21:47:56.209023 kubelet[2170]: I0805 21:47:56.208964 2170 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Aug 5 21:47:56.209293 kubelet[2170]: E0805 21:47:56.209268 2170 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Aug 5 21:47:56.336711 kubelet[2170]: E0805 21:47:56.336611 2170 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.80:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.80:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17e8f365adb4e507 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-08-05 21:47:55.500774663 +0000 UTC m=+1.199825615,LastTimestamp:2024-08-05 21:47:55.500774663 +0000 UTC m=+1.199825615,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 5 21:47:56.435173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4267060389.mount: Deactivated successfully. 
Aug 5 21:47:56.440750 containerd[1442]: time="2024-08-05T21:47:56.440696546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:47:56.442044 containerd[1442]: time="2024-08-05T21:47:56.441957784Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 21:47:56.442547 containerd[1442]: time="2024-08-05T21:47:56.442516049Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:47:56.443394 containerd[1442]: time="2024-08-05T21:47:56.443364053Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:47:56.444089 containerd[1442]: time="2024-08-05T21:47:56.444052964Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:47:56.444533 containerd[1442]: time="2024-08-05T21:47:56.444505352Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 21:47:56.445241 containerd[1442]: time="2024-08-05T21:47:56.445010679Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Aug 5 21:47:56.448089 containerd[1442]: time="2024-08-05T21:47:56.447451971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:47:56.449844 
containerd[1442]: time="2024-08-05T21:47:56.449806446Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 483.119676ms" Aug 5 21:47:56.451681 containerd[1442]: time="2024-08-05T21:47:56.451652718Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 488.457221ms" Aug 5 21:47:56.455372 containerd[1442]: time="2024-08-05T21:47:56.455341945Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 505.642529ms" Aug 5 21:47:56.481312 kubelet[2170]: W0805 21:47:56.481253 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Aug 5 21:47:56.481567 kubelet[2170]: E0805 21:47:56.481317 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Aug 5 21:47:56.575539 kubelet[2170]: W0805 21:47:56.575483 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed 
to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Aug 5 21:47:56.575539 kubelet[2170]: E0805 21:47:56.575538 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Aug 5 21:47:56.618916 containerd[1442]: time="2024-08-05T21:47:56.618686096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:47:56.618916 containerd[1442]: time="2024-08-05T21:47:56.618753537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:47:56.618916 containerd[1442]: time="2024-08-05T21:47:56.618775951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:47:56.618916 containerd[1442]: time="2024-08-05T21:47:56.618789095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:47:56.619101 containerd[1442]: time="2024-08-05T21:47:56.618810590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:47:56.619101 containerd[1442]: time="2024-08-05T21:47:56.618855577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:47:56.619101 containerd[1442]: time="2024-08-05T21:47:56.618877032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:47:56.619101 containerd[1442]: time="2024-08-05T21:47:56.618890896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:47:56.619700 containerd[1442]: time="2024-08-05T21:47:56.619574293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:47:56.619898 containerd[1442]: time="2024-08-05T21:47:56.619828754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:47:56.619971 containerd[1442]: time="2024-08-05T21:47:56.619870305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:47:56.619971 containerd[1442]: time="2024-08-05T21:47:56.619883690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:47:56.633078 kubelet[2170]: W0805 21:47:56.632859 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Aug 5 21:47:56.633078 kubelet[2170]: E0805 21:47:56.633051 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Aug 5 21:47:56.652326 systemd[1]: Started cri-containerd-2f67a9553c26af6544c806adf68feb5c71c82a3249410c023965ab925ef81b3f.scope - libcontainer container 2f67a9553c26af6544c806adf68feb5c71c82a3249410c023965ab925ef81b3f. 
Aug 5 21:47:56.653358 systemd[1]: Started cri-containerd-4a4d7841379e7522af22315e3a91add88666140dc6ff06ebe8eecd45072b1fc9.scope - libcontainer container 4a4d7841379e7522af22315e3a91add88666140dc6ff06ebe8eecd45072b1fc9. Aug 5 21:47:56.654327 systemd[1]: Started cri-containerd-5863243992f6b057ce8a471dc06f1f50ece1ef05c73ff152aac5dc0b13554772.scope - libcontainer container 5863243992f6b057ce8a471dc06f1f50ece1ef05c73ff152aac5dc0b13554772. Aug 5 21:47:56.686569 containerd[1442]: time="2024-08-05T21:47:56.686534008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:77e57979543c08b73e5e2e94c2c2e7c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f67a9553c26af6544c806adf68feb5c71c82a3249410c023965ab925ef81b3f\"" Aug 5 21:47:56.688138 kubelet[2170]: E0805 21:47:56.688035 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:47:56.688775 containerd[1442]: time="2024-08-05T21:47:56.688704539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:088f5b844ad7241e38f298babde6e061,Namespace:kube-system,Attempt:0,} returns sandbox id \"5863243992f6b057ce8a471dc06f1f50ece1ef05c73ff152aac5dc0b13554772\"" Aug 5 21:47:56.689530 kubelet[2170]: E0805 21:47:56.689513 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:47:56.692368 containerd[1442]: time="2024-08-05T21:47:56.692296680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cb686d9581fc5af7d1cc8e14735ce3db,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a4d7841379e7522af22315e3a91add88666140dc6ff06ebe8eecd45072b1fc9\"" Aug 5 21:47:56.693324 kubelet[2170]: E0805 21:47:56.693220 2170 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:47:56.693565 containerd[1442]: time="2024-08-05T21:47:56.693537822Z" level=info msg="CreateContainer within sandbox \"5863243992f6b057ce8a471dc06f1f50ece1ef05c73ff152aac5dc0b13554772\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 5 21:47:56.693826 containerd[1442]: time="2024-08-05T21:47:56.693609099Z" level=info msg="CreateContainer within sandbox \"2f67a9553c26af6544c806adf68feb5c71c82a3249410c023965ab925ef81b3f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 5 21:47:56.695960 containerd[1442]: time="2024-08-05T21:47:56.695938962Z" level=info msg="CreateContainer within sandbox \"4a4d7841379e7522af22315e3a91add88666140dc6ff06ebe8eecd45072b1fc9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 5 21:47:56.712799 containerd[1442]: time="2024-08-05T21:47:56.712766438Z" level=info msg="CreateContainer within sandbox \"4a4d7841379e7522af22315e3a91add88666140dc6ff06ebe8eecd45072b1fc9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"94ab3a9e39873e759c3de734acd735b6a0b0d0450d1a67034513ff2640710a86\"" Aug 5 21:47:56.713305 containerd[1442]: time="2024-08-05T21:47:56.713196293Z" level=info msg="CreateContainer within sandbox \"2f67a9553c26af6544c806adf68feb5c71c82a3249410c023965ab925ef81b3f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6963612d874086966ef5314d07ccb496803ec8358341a167ca3d3ac5e59d70f2\"" Aug 5 21:47:56.713542 containerd[1442]: time="2024-08-05T21:47:56.713501535Z" level=info msg="StartContainer for \"94ab3a9e39873e759c3de734acd735b6a0b0d0450d1a67034513ff2640710a86\"" Aug 5 21:47:56.713664 containerd[1442]: time="2024-08-05T21:47:56.713521871Z" level=info msg="StartContainer for \"6963612d874086966ef5314d07ccb496803ec8358341a167ca3d3ac5e59d70f2\"" Aug 5 21:47:56.715796 
containerd[1442]: time="2024-08-05T21:47:56.715728120Z" level=info msg="CreateContainer within sandbox \"5863243992f6b057ce8a471dc06f1f50ece1ef05c73ff152aac5dc0b13554772\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7fe040661267fb34571e89d8c119d67331e5629752c0e2a3416b9993b9f155e5\"" Aug 5 21:47:56.716232 containerd[1442]: time="2024-08-05T21:47:56.716193533Z" level=info msg="StartContainer for \"7fe040661267fb34571e89d8c119d67331e5629752c0e2a3416b9993b9f155e5\"" Aug 5 21:47:56.747201 systemd[1]: Started cri-containerd-6963612d874086966ef5314d07ccb496803ec8358341a167ca3d3ac5e59d70f2.scope - libcontainer container 6963612d874086966ef5314d07ccb496803ec8358341a167ca3d3ac5e59d70f2. Aug 5 21:47:56.748036 systemd[1]: Started cri-containerd-7fe040661267fb34571e89d8c119d67331e5629752c0e2a3416b9993b9f155e5.scope - libcontainer container 7fe040661267fb34571e89d8c119d67331e5629752c0e2a3416b9993b9f155e5. Aug 5 21:47:56.748867 systemd[1]: Started cri-containerd-94ab3a9e39873e759c3de734acd735b6a0b0d0450d1a67034513ff2640710a86.scope - libcontainer container 94ab3a9e39873e759c3de734acd735b6a0b0d0450d1a67034513ff2640710a86. 
Aug 5 21:47:56.788129 containerd[1442]: time="2024-08-05T21:47:56.787281520Z" level=info msg="StartContainer for \"6963612d874086966ef5314d07ccb496803ec8358341a167ca3d3ac5e59d70f2\" returns successfully" Aug 5 21:47:56.788129 containerd[1442]: time="2024-08-05T21:47:56.787370336Z" level=info msg="StartContainer for \"7fe040661267fb34571e89d8c119d67331e5629752c0e2a3416b9993b9f155e5\" returns successfully" Aug 5 21:47:56.826561 containerd[1442]: time="2024-08-05T21:47:56.826527266Z" level=info msg="StartContainer for \"94ab3a9e39873e759c3de734acd735b6a0b0d0450d1a67034513ff2640710a86\" returns successfully" Aug 5 21:47:56.912269 kubelet[2170]: E0805 21:47:56.910514 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="1.6s" Aug 5 21:47:57.010652 kubelet[2170]: I0805 21:47:57.010609 2170 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Aug 5 21:47:57.528773 kubelet[2170]: E0805 21:47:57.528738 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:47:57.533516 kubelet[2170]: E0805 21:47:57.533491 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:47:57.535868 kubelet[2170]: E0805 21:47:57.535831 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:47:58.539460 kubelet[2170]: E0805 21:47:58.539430 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Aug 5 21:47:58.729035 kubelet[2170]: E0805 21:47:58.728989 2170 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 5 21:47:58.778703 kubelet[2170]: I0805 21:47:58.778666 2170 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Aug 5 21:47:59.499901 kubelet[2170]: I0805 21:47:59.499828 2170 apiserver.go:52] "Watching apiserver" Aug 5 21:47:59.505356 kubelet[2170]: I0805 21:47:59.505313 2170 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 21:48:01.462792 systemd[1]: Reloading requested from client PID 2442 ('systemctl') (unit session-7.scope)... Aug 5 21:48:01.462806 systemd[1]: Reloading... Aug 5 21:48:01.525159 zram_generator::config[2479]: No configuration found. Aug 5 21:48:01.607164 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 21:48:01.671030 systemd[1]: Reloading finished in 207 ms. Aug 5 21:48:01.704838 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:48:01.719432 systemd[1]: kubelet.service: Deactivated successfully. Aug 5 21:48:01.719611 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:48:01.719727 systemd[1]: kubelet.service: Consumed 1.567s CPU time, 112.5M memory peak, 0B memory swap peak. Aug 5 21:48:01.730309 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:48:01.820691 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 5 21:48:01.824416 (kubelet)[2521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 21:48:01.866020 kubelet[2521]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 21:48:01.866020 kubelet[2521]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 21:48:01.866020 kubelet[2521]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 21:48:01.866347 kubelet[2521]: I0805 21:48:01.866059 2521 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 21:48:01.870935 kubelet[2521]: I0805 21:48:01.870902 2521 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Aug 5 21:48:01.870935 kubelet[2521]: I0805 21:48:01.870933 2521 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 21:48:01.871188 kubelet[2521]: I0805 21:48:01.871151 2521 server.go:919] "Client rotation is on, will bootstrap in background" Aug 5 21:48:01.873295 kubelet[2521]: I0805 21:48:01.873232 2521 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 5 21:48:01.875609 kubelet[2521]: I0805 21:48:01.875459 2521 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 21:48:01.880139 kubelet[2521]: I0805 21:48:01.880115 2521 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 5 21:48:01.880324 kubelet[2521]: I0805 21:48:01.880310 2521 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 21:48:01.880466 kubelet[2521]: I0805 21:48:01.880452 2521 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 21:48:01.880552 kubelet[2521]: I0805 21:48:01.880473 2521 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 21:48:01.880552 kubelet[2521]: I0805 21:48:01.880481 2521 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 21:48:01.880552 kubelet[2521]: I0805 
21:48:01.880528 2521 state_mem.go:36] "Initialized new in-memory state store" Aug 5 21:48:01.881447 kubelet[2521]: I0805 21:48:01.880604 2521 kubelet.go:396] "Attempting to sync node with API server" Aug 5 21:48:01.881447 kubelet[2521]: I0805 21:48:01.880617 2521 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 21:48:01.881447 kubelet[2521]: I0805 21:48:01.880635 2521 kubelet.go:312] "Adding apiserver pod source" Aug 5 21:48:01.881447 kubelet[2521]: I0805 21:48:01.880647 2521 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 21:48:01.882087 kubelet[2521]: I0805 21:48:01.881667 2521 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 5 21:48:01.882087 kubelet[2521]: I0805 21:48:01.881928 2521 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 5 21:48:01.882627 kubelet[2521]: I0805 21:48:01.882305 2521 server.go:1256] "Started kubelet" Aug 5 21:48:01.882757 kubelet[2521]: I0805 21:48:01.882672 2521 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 21:48:01.883026 kubelet[2521]: I0805 21:48:01.882863 2521 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 5 21:48:01.883138 kubelet[2521]: I0805 21:48:01.883059 2521 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 21:48:01.884450 kubelet[2521]: I0805 21:48:01.884261 2521 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 21:48:01.885303 kubelet[2521]: I0805 21:48:01.885135 2521 server.go:461] "Adding debug handlers to kubelet server" Aug 5 21:48:01.885951 kubelet[2521]: E0805 21:48:01.885809 2521 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 21:48:01.885951 kubelet[2521]: I0805 21:48:01.885866 2521 
volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 21:48:01.885951 kubelet[2521]: I0805 21:48:01.885932 2521 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 5 21:48:01.893238 kubelet[2521]: I0805 21:48:01.893048 2521 reconciler_new.go:29] "Reconciler: start to sync state" Aug 5 21:48:01.897288 kubelet[2521]: I0805 21:48:01.897256 2521 factory.go:221] Registration of the systemd container factory successfully Aug 5 21:48:01.897381 kubelet[2521]: I0805 21:48:01.897364 2521 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 5 21:48:01.907787 kubelet[2521]: I0805 21:48:01.907506 2521 factory.go:221] Registration of the containerd container factory successfully Aug 5 21:48:01.913608 kubelet[2521]: E0805 21:48:01.910460 2521 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 21:48:01.914558 kubelet[2521]: I0805 21:48:01.914458 2521 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 21:48:01.915428 kubelet[2521]: I0805 21:48:01.915411 2521 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 5 21:48:01.915594 kubelet[2521]: I0805 21:48:01.915581 2521 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 21:48:01.915657 kubelet[2521]: I0805 21:48:01.915648 2521 kubelet.go:2329] "Starting kubelet main sync loop" Aug 5 21:48:01.915751 kubelet[2521]: E0805 21:48:01.915739 2521 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 21:48:01.921097 sudo[2546]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 5 21:48:01.921344 sudo[2546]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Aug 5 21:48:01.939556 kubelet[2521]: I0805 21:48:01.939525 2521 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 21:48:01.939556 kubelet[2521]: I0805 21:48:01.939555 2521 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 21:48:01.939663 kubelet[2521]: I0805 21:48:01.939573 2521 state_mem.go:36] "Initialized new in-memory state store" Aug 5 21:48:01.939738 kubelet[2521]: I0805 21:48:01.939722 2521 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 5 21:48:01.939767 kubelet[2521]: I0805 21:48:01.939749 2521 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 5 21:48:01.939767 kubelet[2521]: I0805 21:48:01.939757 2521 policy_none.go:49] "None policy: Start" Aug 5 21:48:01.940521 kubelet[2521]: I0805 21:48:01.940500 2521 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 5 21:48:01.940521 kubelet[2521]: I0805 21:48:01.940530 2521 state_mem.go:35] "Initializing new in-memory state store" Aug 5 21:48:01.940764 kubelet[2521]: I0805 21:48:01.940744 2521 state_mem.go:75] "Updated machine memory state" Aug 5 21:48:01.945524 kubelet[2521]: I0805 21:48:01.945505 2521 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 
21:48:01.945725 kubelet[2521]: I0805 21:48:01.945708 2521 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 21:48:01.992312 kubelet[2521]: I0805 21:48:01.992229 2521 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Aug 5 21:48:01.997758 kubelet[2521]: I0805 21:48:01.997727 2521 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Aug 5 21:48:01.997855 kubelet[2521]: I0805 21:48:01.997805 2521 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Aug 5 21:48:02.016281 kubelet[2521]: I0805 21:48:02.016239 2521 topology_manager.go:215] "Topology Admit Handler" podUID="088f5b844ad7241e38f298babde6e061" podNamespace="kube-system" podName="kube-controller-manager-localhost" Aug 5 21:48:02.016388 kubelet[2521]: I0805 21:48:02.016329 2521 topology_manager.go:215] "Topology Admit Handler" podUID="cb686d9581fc5af7d1cc8e14735ce3db" podNamespace="kube-system" podName="kube-scheduler-localhost" Aug 5 21:48:02.016388 kubelet[2521]: I0805 21:48:02.016379 2521 topology_manager.go:215] "Topology Admit Handler" podUID="77e57979543c08b73e5e2e94c2c2e7c6" podNamespace="kube-system" podName="kube-apiserver-localhost" Aug 5 21:48:02.093455 kubelet[2521]: I0805 21:48:02.093338 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:48:02.093455 kubelet[2521]: I0805 21:48:02.093386 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " 
pod="kube-system/kube-controller-manager-localhost" Aug 5 21:48:02.093653 kubelet[2521]: I0805 21:48:02.093482 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:48:02.093653 kubelet[2521]: I0805 21:48:02.093512 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:48:02.093653 kubelet[2521]: I0805 21:48:02.093619 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb686d9581fc5af7d1cc8e14735ce3db-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cb686d9581fc5af7d1cc8e14735ce3db\") " pod="kube-system/kube-scheduler-localhost" Aug 5 21:48:02.093729 kubelet[2521]: I0805 21:48:02.093673 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77e57979543c08b73e5e2e94c2c2e7c6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"77e57979543c08b73e5e2e94c2c2e7c6\") " pod="kube-system/kube-apiserver-localhost" Aug 5 21:48:02.093729 kubelet[2521]: I0805 21:48:02.093694 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:48:02.093729 kubelet[2521]: I0805 21:48:02.093713 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77e57979543c08b73e5e2e94c2c2e7c6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"77e57979543c08b73e5e2e94c2c2e7c6\") " pod="kube-system/kube-apiserver-localhost" Aug 5 21:48:02.093791 kubelet[2521]: I0805 21:48:02.093737 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77e57979543c08b73e5e2e94c2c2e7c6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"77e57979543c08b73e5e2e94c2c2e7c6\") " pod="kube-system/kube-apiserver-localhost" Aug 5 21:48:02.333113 kubelet[2521]: E0805 21:48:02.331854 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:48:02.334769 kubelet[2521]: E0805 21:48:02.334352 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:48:02.334769 kubelet[2521]: E0805 21:48:02.334573 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:48:02.363762 sudo[2546]: pam_unix(sudo:session): session closed for user root Aug 5 21:48:02.881342 kubelet[2521]: I0805 21:48:02.881296 2521 apiserver.go:52] "Watching apiserver" Aug 5 21:48:02.886838 kubelet[2521]: I0805 21:48:02.886813 2521 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 21:48:02.928498 kubelet[2521]: E0805 21:48:02.928135 2521 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:48:02.928498 kubelet[2521]: E0805 21:48:02.928439 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:48:02.929329 kubelet[2521]: E0805 21:48:02.929312 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:48:02.948030 kubelet[2521]: I0805 21:48:02.947982 2521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.947944893 podStartE2EDuration="947.944893ms" podCreationTimestamp="2024-08-05 21:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:48:02.947262382 +0000 UTC m=+1.119651772" watchObservedRunningTime="2024-08-05 21:48:02.947944893 +0000 UTC m=+1.120334242" Aug 5 21:48:02.958420 kubelet[2521]: I0805 21:48:02.958385 2521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.958354193 podStartE2EDuration="958.354193ms" podCreationTimestamp="2024-08-05 21:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:48:02.953529409 +0000 UTC m=+1.125918798" watchObservedRunningTime="2024-08-05 21:48:02.958354193 +0000 UTC m=+1.130743542" Aug 5 21:48:03.929909 kubelet[2521]: E0805 21:48:03.929813 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:48:04.151811 
sudo[1622]: pam_unix(sudo:session): session closed for user root Aug 5 21:48:04.153290 sshd[1619]: pam_unix(sshd:session): session closed for user core Aug 5 21:48:04.157830 systemd[1]: sshd@6-10.0.0.80:22-10.0.0.1:48060.service: Deactivated successfully. Aug 5 21:48:04.160385 systemd[1]: session-7.scope: Deactivated successfully. Aug 5 21:48:04.160755 systemd[1]: session-7.scope: Consumed 7.174s CPU time, 133.5M memory peak, 0B memory swap peak. Aug 5 21:48:04.161615 systemd-logind[1422]: Session 7 logged out. Waiting for processes to exit. Aug 5 21:48:04.162939 systemd-logind[1422]: Removed session 7. Aug 5 21:48:04.930938 kubelet[2521]: E0805 21:48:04.930897 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:48:09.108173 kubelet[2521]: E0805 21:48:09.108136 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:48:09.121077 kubelet[2521]: I0805 21:48:09.121043 2521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.121012091 podStartE2EDuration="7.121012091s" podCreationTimestamp="2024-08-05 21:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:48:02.958599781 +0000 UTC m=+1.130989130" watchObservedRunningTime="2024-08-05 21:48:09.121012091 +0000 UTC m=+7.293401440" Aug 5 21:48:09.936831 kubelet[2521]: E0805 21:48:09.936802 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:48:11.822110 kubelet[2521]: E0805 21:48:11.822042 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:48:11.938131 kubelet[2521]: E0805 21:48:11.938099 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:48:13.933909 kubelet[2521]: E0805 21:48:13.933844 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:48:13.943375 kubelet[2521]: E0805 21:48:13.943343 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:48:14.152801 update_engine[1424]: I0805 21:48:14.152235 1424 update_attempter.cc:509] Updating boot flags... Aug 5 21:48:14.176200 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2612) Aug 5 21:48:14.211371 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2611) Aug 5 21:48:14.243267 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2611) Aug 5 21:48:16.438593 kubelet[2521]: I0805 21:48:16.438555 2521 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 5 21:48:16.439405 kubelet[2521]: I0805 21:48:16.439240 2521 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 5 21:48:16.439448 containerd[1442]: time="2024-08-05T21:48:16.438934066Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 5 21:48:17.401681 kubelet[2521]: I0805 21:48:17.401641 2521 topology_manager.go:215] "Topology Admit Handler" podUID="f3c7859b-62c8-498b-b93b-82f03641ef0a" podNamespace="kube-system" podName="kube-proxy-blnxf" Aug 5 21:48:17.416110 kubelet[2521]: I0805 21:48:17.415879 2521 topology_manager.go:215] "Topology Admit Handler" podUID="b213878d-d386-423a-8eea-1a919c0565c0" podNamespace="kube-system" podName="cilium-nq5gj" Aug 5 21:48:17.417189 systemd[1]: Created slice kubepods-besteffort-podf3c7859b_62c8_498b_b93b_82f03641ef0a.slice - libcontainer container kubepods-besteffort-podf3c7859b_62c8_498b_b93b_82f03641ef0a.slice. Aug 5 21:48:17.434382 systemd[1]: Created slice kubepods-burstable-podb213878d_d386_423a_8eea_1a919c0565c0.slice - libcontainer container kubepods-burstable-podb213878d_d386_423a_8eea_1a919c0565c0.slice. Aug 5 21:48:17.496225 kubelet[2521]: I0805 21:48:17.496186 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-hostproc\") pod \"cilium-nq5gj\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " pod="kube-system/cilium-nq5gj" Aug 5 21:48:17.497385 kubelet[2521]: I0805 21:48:17.496250 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-host-proc-sys-net\") pod \"cilium-nq5gj\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " pod="kube-system/cilium-nq5gj" Aug 5 21:48:17.497385 kubelet[2521]: I0805 21:48:17.496272 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-host-proc-sys-kernel\") pod \"cilium-nq5gj\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " pod="kube-system/cilium-nq5gj" Aug 5 21:48:17.497385 kubelet[2521]: 
I0805 21:48:17.496295 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-cni-path\") pod \"cilium-nq5gj\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " pod="kube-system/cilium-nq5gj" Aug 5 21:48:17.497385 kubelet[2521]: I0805 21:48:17.496315 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqv8c\" (UniqueName: \"kubernetes.io/projected/f3c7859b-62c8-498b-b93b-82f03641ef0a-kube-api-access-rqv8c\") pod \"kube-proxy-blnxf\" (UID: \"f3c7859b-62c8-498b-b93b-82f03641ef0a\") " pod="kube-system/kube-proxy-blnxf" Aug 5 21:48:17.497385 kubelet[2521]: I0805 21:48:17.496335 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b213878d-d386-423a-8eea-1a919c0565c0-hubble-tls\") pod \"cilium-nq5gj\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " pod="kube-system/cilium-nq5gj" Aug 5 21:48:17.497505 kubelet[2521]: I0805 21:48:17.496355 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f3c7859b-62c8-498b-b93b-82f03641ef0a-kube-proxy\") pod \"kube-proxy-blnxf\" (UID: \"f3c7859b-62c8-498b-b93b-82f03641ef0a\") " pod="kube-system/kube-proxy-blnxf" Aug 5 21:48:17.497505 kubelet[2521]: I0805 21:48:17.496374 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-etc-cni-netd\") pod \"cilium-nq5gj\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " pod="kube-system/cilium-nq5gj" Aug 5 21:48:17.497505 kubelet[2521]: I0805 21:48:17.496391 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-xtables-lock\") pod \"cilium-nq5gj\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " pod="kube-system/cilium-nq5gj" Aug 5 21:48:17.497505 kubelet[2521]: I0805 21:48:17.496410 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-cilium-run\") pod \"cilium-nq5gj\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " pod="kube-system/cilium-nq5gj" Aug 5 21:48:17.497505 kubelet[2521]: I0805 21:48:17.496439 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-bpf-maps\") pod \"cilium-nq5gj\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " pod="kube-system/cilium-nq5gj" Aug 5 21:48:17.497505 kubelet[2521]: I0805 21:48:17.496459 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3c7859b-62c8-498b-b93b-82f03641ef0a-xtables-lock\") pod \"kube-proxy-blnxf\" (UID: \"f3c7859b-62c8-498b-b93b-82f03641ef0a\") " pod="kube-system/kube-proxy-blnxf" Aug 5 21:48:17.497625 kubelet[2521]: I0805 21:48:17.496477 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3c7859b-62c8-498b-b93b-82f03641ef0a-lib-modules\") pod \"kube-proxy-blnxf\" (UID: \"f3c7859b-62c8-498b-b93b-82f03641ef0a\") " pod="kube-system/kube-proxy-blnxf" Aug 5 21:48:17.497625 kubelet[2521]: I0805 21:48:17.496495 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-cilium-cgroup\") pod \"cilium-nq5gj\" (UID: 
\"b213878d-d386-423a-8eea-1a919c0565c0\") " pod="kube-system/cilium-nq5gj" Aug 5 21:48:17.497625 kubelet[2521]: I0805 21:48:17.496503 2521 topology_manager.go:215] "Topology Admit Handler" podUID="2ddacdb0-acab-4a57-ae2d-86f91ba20009" podNamespace="kube-system" podName="cilium-operator-5cc964979-t9xqf" Aug 5 21:48:17.497625 kubelet[2521]: I0805 21:48:17.496513 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-lib-modules\") pod \"cilium-nq5gj\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " pod="kube-system/cilium-nq5gj" Aug 5 21:48:17.497625 kubelet[2521]: I0805 21:48:17.496706 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b213878d-d386-423a-8eea-1a919c0565c0-clustermesh-secrets\") pod \"cilium-nq5gj\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " pod="kube-system/cilium-nq5gj" Aug 5 21:48:17.497625 kubelet[2521]: I0805 21:48:17.496728 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2j7p\" (UniqueName: \"kubernetes.io/projected/b213878d-d386-423a-8eea-1a919c0565c0-kube-api-access-q2j7p\") pod \"cilium-nq5gj\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " pod="kube-system/cilium-nq5gj" Aug 5 21:48:17.497749 kubelet[2521]: I0805 21:48:17.496747 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b213878d-d386-423a-8eea-1a919c0565c0-cilium-config-path\") pod \"cilium-nq5gj\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " pod="kube-system/cilium-nq5gj" Aug 5 21:48:17.505091 systemd[1]: Created slice kubepods-besteffort-pod2ddacdb0_acab_4a57_ae2d_86f91ba20009.slice - libcontainer container 
kubepods-besteffort-pod2ddacdb0_acab_4a57_ae2d_86f91ba20009.slice. Aug 5 21:48:17.597926 kubelet[2521]: I0805 21:48:17.597851 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2ddacdb0-acab-4a57-ae2d-86f91ba20009-cilium-config-path\") pod \"cilium-operator-5cc964979-t9xqf\" (UID: \"2ddacdb0-acab-4a57-ae2d-86f91ba20009\") " pod="kube-system/cilium-operator-5cc964979-t9xqf" Aug 5 21:48:17.597926 kubelet[2521]: I0805 21:48:17.597911 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqfnf\" (UniqueName: \"kubernetes.io/projected/2ddacdb0-acab-4a57-ae2d-86f91ba20009-kube-api-access-jqfnf\") pod \"cilium-operator-5cc964979-t9xqf\" (UID: \"2ddacdb0-acab-4a57-ae2d-86f91ba20009\") " pod="kube-system/cilium-operator-5cc964979-t9xqf" Aug 5 21:48:17.729829 kubelet[2521]: E0805 21:48:17.729719 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:48:17.731085 containerd[1442]: time="2024-08-05T21:48:17.730295502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-blnxf,Uid:f3c7859b-62c8-498b-b93b-82f03641ef0a,Namespace:kube-system,Attempt:0,}" Aug 5 21:48:17.743034 kubelet[2521]: E0805 21:48:17.742991 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:48:17.744582 containerd[1442]: time="2024-08-05T21:48:17.743675655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nq5gj,Uid:b213878d-d386-423a-8eea-1a919c0565c0,Namespace:kube-system,Attempt:0,}" Aug 5 21:48:17.752543 containerd[1442]: time="2024-08-05T21:48:17.752350260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:48:17.752543 containerd[1442]: time="2024-08-05T21:48:17.752399826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:48:17.752543 containerd[1442]: time="2024-08-05T21:48:17.752413788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:48:17.752543 containerd[1442]: time="2024-08-05T21:48:17.752433350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:48:17.770234 systemd[1]: Started cri-containerd-78781ea1d15d4e95460898ee367421091ec8756e936c9aa88671d7fcd1b95d83.scope - libcontainer container 78781ea1d15d4e95460898ee367421091ec8756e936c9aa88671d7fcd1b95d83. Aug 5 21:48:17.773507 containerd[1442]: time="2024-08-05T21:48:17.772414409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:48:17.773507 containerd[1442]: time="2024-08-05T21:48:17.772532864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:48:17.773507 containerd[1442]: time="2024-08-05T21:48:17.772553066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:48:17.773507 containerd[1442]: time="2024-08-05T21:48:17.772567228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:48:17.794304 systemd[1]: Started cri-containerd-bdfbfcd5028fdff332d59c081ce1c316359530d02fda0712194dd1028110aec1.scope - libcontainer container bdfbfcd5028fdff332d59c081ce1c316359530d02fda0712194dd1028110aec1. 
Aug 5 21:48:17.803760 containerd[1442]: time="2024-08-05T21:48:17.803671837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-blnxf,Uid:f3c7859b-62c8-498b-b93b-82f03641ef0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"78781ea1d15d4e95460898ee367421091ec8756e936c9aa88671d7fcd1b95d83\""
Aug 5 21:48:17.804614 kubelet[2521]: E0805 21:48:17.804515 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:17.806881 containerd[1442]: time="2024-08-05T21:48:17.806758383Z" level=info msg="CreateContainer within sandbox \"78781ea1d15d4e95460898ee367421091ec8756e936c9aa88671d7fcd1b95d83\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 5 21:48:17.809499 kubelet[2521]: E0805 21:48:17.809474 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:17.811329 containerd[1442]: time="2024-08-05T21:48:17.811117568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-t9xqf,Uid:2ddacdb0-acab-4a57-ae2d-86f91ba20009,Namespace:kube-system,Attempt:0,}"
Aug 5 21:48:17.819750 containerd[1442]: time="2024-08-05T21:48:17.819718764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nq5gj,Uid:b213878d-d386-423a-8eea-1a919c0565c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdfbfcd5028fdff332d59c081ce1c316359530d02fda0712194dd1028110aec1\""
Aug 5 21:48:17.820452 kubelet[2521]: E0805 21:48:17.820432 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:17.823231 containerd[1442]: time="2024-08-05T21:48:17.823181157Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Aug 5 21:48:17.843944 containerd[1442]: time="2024-08-05T21:48:17.843858062Z" level=info msg="CreateContainer within sandbox \"78781ea1d15d4e95460898ee367421091ec8756e936c9aa88671d7fcd1b95d83\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"25f5c02eb471ef17d9bb3888e52fba615e7c789d9728ea100256c44519f138f2\""
Aug 5 21:48:17.845183 containerd[1442]: time="2024-08-05T21:48:17.845156065Z" level=info msg="StartContainer for \"25f5c02eb471ef17d9bb3888e52fba615e7c789d9728ea100256c44519f138f2\""
Aug 5 21:48:17.851127 containerd[1442]: time="2024-08-05T21:48:17.850919065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 21:48:17.851127 containerd[1442]: time="2024-08-05T21:48:17.850967151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:48:17.851127 containerd[1442]: time="2024-08-05T21:48:17.850980953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 21:48:17.851127 containerd[1442]: time="2024-08-05T21:48:17.850997955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:48:17.872467 systemd[1]: Started cri-containerd-25f5c02eb471ef17d9bb3888e52fba615e7c789d9728ea100256c44519f138f2.scope - libcontainer container 25f5c02eb471ef17d9bb3888e52fba615e7c789d9728ea100256c44519f138f2.
Aug 5 21:48:17.874125 systemd[1]: Started cri-containerd-9ee74728f1849db257ed614bb1d6814dd64745dbe376f70d55202c9218cd39c3.scope - libcontainer container 9ee74728f1849db257ed614bb1d6814dd64745dbe376f70d55202c9218cd39c3.
Aug 5 21:48:17.906805 containerd[1442]: time="2024-08-05T21:48:17.906745326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-t9xqf,Uid:2ddacdb0-acab-4a57-ae2d-86f91ba20009,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ee74728f1849db257ed614bb1d6814dd64745dbe376f70d55202c9218cd39c3\""
Aug 5 21:48:17.906805 containerd[1442]: time="2024-08-05T21:48:17.906755127Z" level=info msg="StartContainer for \"25f5c02eb471ef17d9bb3888e52fba615e7c789d9728ea100256c44519f138f2\" returns successfully"
Aug 5 21:48:17.911594 kubelet[2521]: E0805 21:48:17.910414 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:17.951094 kubelet[2521]: E0805 21:48:17.951044 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:17.961113 kubelet[2521]: I0805 21:48:17.961003 2521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-blnxf" podStartSLOduration=0.960966386 podStartE2EDuration="960.966386ms" podCreationTimestamp="2024-08-05 21:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:48:17.960900418 +0000 UTC m=+16.133289887" watchObservedRunningTime="2024-08-05 21:48:17.960966386 +0000 UTC m=+16.133355735"
Aug 5 21:48:25.323566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1755035935.mount: Deactivated successfully.
Aug 5 21:48:27.456120 containerd[1442]: time="2024-08-05T21:48:27.455871226Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:48:27.494670 containerd[1442]: time="2024-08-05T21:48:27.494605977Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651538"
Aug 5 21:48:27.519615 containerd[1442]: time="2024-08-05T21:48:27.519585943Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:48:27.521372 containerd[1442]: time="2024-08-05T21:48:27.521252077Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.698013753s"
Aug 5 21:48:27.521372 containerd[1442]: time="2024-08-05T21:48:27.521296480Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Aug 5 21:48:27.530702 containerd[1442]: time="2024-08-05T21:48:27.530673593Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Aug 5 21:48:27.533494 containerd[1442]: time="2024-08-05T21:48:27.533464297Z" level=info msg="CreateContainer within sandbox \"bdfbfcd5028fdff332d59c081ce1c316359530d02fda0712194dd1028110aec1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 5 21:48:27.705983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount697199883.mount: Deactivated successfully.
Aug 5 21:48:27.879243 containerd[1442]: time="2024-08-05T21:48:27.879202983Z" level=info msg="CreateContainer within sandbox \"bdfbfcd5028fdff332d59c081ce1c316359530d02fda0712194dd1028110aec1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7a8935b99af834e8be4c7e13b866c29ec7e0b44f4e75479291d0d5d0b4a3b4b9\""
Aug 5 21:48:27.879767 containerd[1442]: time="2024-08-05T21:48:27.879648539Z" level=info msg="StartContainer for \"7a8935b99af834e8be4c7e13b866c29ec7e0b44f4e75479291d0d5d0b4a3b4b9\""
Aug 5 21:48:27.908210 systemd[1]: Started cri-containerd-7a8935b99af834e8be4c7e13b866c29ec7e0b44f4e75479291d0d5d0b4a3b4b9.scope - libcontainer container 7a8935b99af834e8be4c7e13b866c29ec7e0b44f4e75479291d0d5d0b4a3b4b9.
Aug 5 21:48:27.972453 containerd[1442]: time="2024-08-05T21:48:27.972339022Z" level=info msg="StartContainer for \"7a8935b99af834e8be4c7e13b866c29ec7e0b44f4e75479291d0d5d0b4a3b4b9\" returns successfully"
Aug 5 21:48:27.985555 systemd[1]: cri-containerd-7a8935b99af834e8be4c7e13b866c29ec7e0b44f4e75479291d0d5d0b4a3b4b9.scope: Deactivated successfully.
Aug 5 21:48:28.194463 containerd[1442]: time="2024-08-05T21:48:28.194403180Z" level=info msg="shim disconnected" id=7a8935b99af834e8be4c7e13b866c29ec7e0b44f4e75479291d0d5d0b4a3b4b9 namespace=k8s.io
Aug 5 21:48:28.194463 containerd[1442]: time="2024-08-05T21:48:28.194453503Z" level=warning msg="cleaning up after shim disconnected" id=7a8935b99af834e8be4c7e13b866c29ec7e0b44f4e75479291d0d5d0b4a3b4b9 namespace=k8s.io
Aug 5 21:48:28.194463 containerd[1442]: time="2024-08-05T21:48:28.194462784Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 21:48:28.704452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a8935b99af834e8be4c7e13b866c29ec7e0b44f4e75479291d0d5d0b4a3b4b9-rootfs.mount: Deactivated successfully.
Aug 5 21:48:28.998372 kubelet[2521]: E0805 21:48:28.997154 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:29.003076 containerd[1442]: time="2024-08-05T21:48:29.003013698Z" level=info msg="CreateContainer within sandbox \"bdfbfcd5028fdff332d59c081ce1c316359530d02fda0712194dd1028110aec1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 5 21:48:29.083845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount639512768.mount: Deactivated successfully.
Aug 5 21:48:29.117342 containerd[1442]: time="2024-08-05T21:48:29.117287552Z" level=info msg="CreateContainer within sandbox \"bdfbfcd5028fdff332d59c081ce1c316359530d02fda0712194dd1028110aec1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c66b5e2b1602d9bbe32974a33e96314c3d98fb7bc3b9a5bdc764823b92f0c39d\""
Aug 5 21:48:29.118013 containerd[1442]: time="2024-08-05T21:48:29.117986644Z" level=info msg="StartContainer for \"c66b5e2b1602d9bbe32974a33e96314c3d98fb7bc3b9a5bdc764823b92f0c39d\""
Aug 5 21:48:29.148240 systemd[1]: Started cri-containerd-c66b5e2b1602d9bbe32974a33e96314c3d98fb7bc3b9a5bdc764823b92f0c39d.scope - libcontainer container c66b5e2b1602d9bbe32974a33e96314c3d98fb7bc3b9a5bdc764823b92f0c39d.
Aug 5 21:48:29.175274 containerd[1442]: time="2024-08-05T21:48:29.174781506Z" level=info msg="StartContainer for \"c66b5e2b1602d9bbe32974a33e96314c3d98fb7bc3b9a5bdc764823b92f0c39d\" returns successfully"
Aug 5 21:48:29.194477 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 5 21:48:29.194927 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 5 21:48:29.195122 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Aug 5 21:48:29.202385 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 21:48:29.202673 systemd[1]: cri-containerd-c66b5e2b1602d9bbe32974a33e96314c3d98fb7bc3b9a5bdc764823b92f0c39d.scope: Deactivated successfully.
Aug 5 21:48:29.241340 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 21:48:29.278142 containerd[1442]: time="2024-08-05T21:48:29.277814444Z" level=info msg="shim disconnected" id=c66b5e2b1602d9bbe32974a33e96314c3d98fb7bc3b9a5bdc764823b92f0c39d namespace=k8s.io
Aug 5 21:48:29.278142 containerd[1442]: time="2024-08-05T21:48:29.277865368Z" level=warning msg="cleaning up after shim disconnected" id=c66b5e2b1602d9bbe32974a33e96314c3d98fb7bc3b9a5bdc764823b92f0c39d namespace=k8s.io
Aug 5 21:48:29.278142 containerd[1442]: time="2024-08-05T21:48:29.277873129Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 21:48:29.540329 containerd[1442]: time="2024-08-05T21:48:29.540205869Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:48:29.558786 containerd[1442]: time="2024-08-05T21:48:29.558724245Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138342"
Aug 5 21:48:29.577814 containerd[1442]: time="2024-08-05T21:48:29.577767141Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:48:29.579235 containerd[1442]: time="2024-08-05T21:48:29.579199767Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.04848813s"
Aug 5 21:48:29.579284 containerd[1442]: time="2024-08-05T21:48:29.579243010Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Aug 5 21:48:29.582279 containerd[1442]: time="2024-08-05T21:48:29.582234353Z" level=info msg="CreateContainer within sandbox \"9ee74728f1849db257ed614bb1d6814dd64745dbe376f70d55202c9218cd39c3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Aug 5 21:48:29.754178 containerd[1442]: time="2024-08-05T21:48:29.754115209Z" level=info msg="CreateContainer within sandbox \"9ee74728f1849db257ed614bb1d6814dd64745dbe376f70d55202c9218cd39c3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388\""
Aug 5 21:48:29.755234 containerd[1442]: time="2024-08-05T21:48:29.754870745Z" level=info msg="StartContainer for \"89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388\""
Aug 5 21:48:29.783285 systemd[1]: Started cri-containerd-89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388.scope - libcontainer container 89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388.
Aug 5 21:48:29.808087 containerd[1442]: time="2024-08-05T21:48:29.807957411Z" level=info msg="StartContainer for \"89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388\" returns successfully"
Aug 5 21:48:29.996399 kubelet[2521]: E0805 21:48:29.996346 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:30.001019 kubelet[2521]: E0805 21:48:30.000929 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:30.003367 containerd[1442]: time="2024-08-05T21:48:30.003334768Z" level=info msg="CreateContainer within sandbox \"bdfbfcd5028fdff332d59c081ce1c316359530d02fda0712194dd1028110aec1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 5 21:48:30.052188 containerd[1442]: time="2024-08-05T21:48:30.052040977Z" level=info msg="CreateContainer within sandbox \"bdfbfcd5028fdff332d59c081ce1c316359530d02fda0712194dd1028110aec1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2b47b143865d0613627484ebb9460fccfc7d72aff5ae7ce3c7d50f3fd838ab03\""
Aug 5 21:48:30.054235 containerd[1442]: time="2024-08-05T21:48:30.052803311Z" level=info msg="StartContainer for \"2b47b143865d0613627484ebb9460fccfc7d72aff5ae7ce3c7d50f3fd838ab03\""
Aug 5 21:48:30.060106 kubelet[2521]: I0805 21:48:30.059992 2521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-t9xqf" podStartSLOduration=1.391311073 podStartE2EDuration="13.059949023s" podCreationTimestamp="2024-08-05 21:48:17 +0000 UTC" firstStartedPulling="2024-08-05 21:48:17.910833557 +0000 UTC m=+16.083222866" lastFinishedPulling="2024-08-05 21:48:29.579471467 +0000 UTC m=+27.751860816" observedRunningTime="2024-08-05 21:48:30.030569759 +0000 UTC m=+28.202959108" watchObservedRunningTime="2024-08-05 21:48:30.059949023 +0000 UTC m=+28.232338372"
Aug 5 21:48:30.084245 systemd[1]: Started cri-containerd-2b47b143865d0613627484ebb9460fccfc7d72aff5ae7ce3c7d50f3fd838ab03.scope - libcontainer container 2b47b143865d0613627484ebb9460fccfc7d72aff5ae7ce3c7d50f3fd838ab03.
Aug 5 21:48:30.133805 containerd[1442]: time="2024-08-05T21:48:30.133749509Z" level=info msg="StartContainer for \"2b47b143865d0613627484ebb9460fccfc7d72aff5ae7ce3c7d50f3fd838ab03\" returns successfully"
Aug 5 21:48:30.145389 systemd[1]: cri-containerd-2b47b143865d0613627484ebb9460fccfc7d72aff5ae7ce3c7d50f3fd838ab03.scope: Deactivated successfully.
Aug 5 21:48:30.294601 containerd[1442]: time="2024-08-05T21:48:30.294525224Z" level=info msg="shim disconnected" id=2b47b143865d0613627484ebb9460fccfc7d72aff5ae7ce3c7d50f3fd838ab03 namespace=k8s.io
Aug 5 21:48:30.294601 containerd[1442]: time="2024-08-05T21:48:30.294586228Z" level=warning msg="cleaning up after shim disconnected" id=2b47b143865d0613627484ebb9460fccfc7d72aff5ae7ce3c7d50f3fd838ab03 namespace=k8s.io
Aug 5 21:48:30.294601 containerd[1442]: time="2024-08-05T21:48:30.294595669Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 21:48:30.704446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b47b143865d0613627484ebb9460fccfc7d72aff5ae7ce3c7d50f3fd838ab03-rootfs.mount: Deactivated successfully.
Aug 5 21:48:31.021191 kubelet[2521]: E0805 21:48:31.020811 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:31.021191 kubelet[2521]: E0805 21:48:31.020879 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:31.023883 containerd[1442]: time="2024-08-05T21:48:31.022961620Z" level=info msg="CreateContainer within sandbox \"bdfbfcd5028fdff332d59c081ce1c316359530d02fda0712194dd1028110aec1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 5 21:48:31.050364 containerd[1442]: time="2024-08-05T21:48:31.050308310Z" level=info msg="CreateContainer within sandbox \"bdfbfcd5028fdff332d59c081ce1c316359530d02fda0712194dd1028110aec1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"11083c938081fb026bbe7ccf9a182a740423690508b608d0c5b2ea262c2f9d32\""
Aug 5 21:48:31.053098 containerd[1442]: time="2024-08-05T21:48:31.050724618Z" level=info msg="StartContainer for \"11083c938081fb026bbe7ccf9a182a740423690508b608d0c5b2ea262c2f9d32\""
Aug 5 21:48:31.079230 systemd[1]: Started cri-containerd-11083c938081fb026bbe7ccf9a182a740423690508b608d0c5b2ea262c2f9d32.scope - libcontainer container 11083c938081fb026bbe7ccf9a182a740423690508b608d0c5b2ea262c2f9d32.
Aug 5 21:48:31.097625 systemd[1]: cri-containerd-11083c938081fb026bbe7ccf9a182a740423690508b608d0c5b2ea262c2f9d32.scope: Deactivated successfully.
Aug 5 21:48:31.101339 containerd[1442]: time="2024-08-05T21:48:31.101293552Z" level=info msg="StartContainer for \"11083c938081fb026bbe7ccf9a182a740423690508b608d0c5b2ea262c2f9d32\" returns successfully"
Aug 5 21:48:31.125121 containerd[1442]: time="2024-08-05T21:48:31.125040472Z" level=info msg="shim disconnected" id=11083c938081fb026bbe7ccf9a182a740423690508b608d0c5b2ea262c2f9d32 namespace=k8s.io
Aug 5 21:48:31.125121 containerd[1442]: time="2024-08-05T21:48:31.125111197Z" level=warning msg="cleaning up after shim disconnected" id=11083c938081fb026bbe7ccf9a182a740423690508b608d0c5b2ea262c2f9d32 namespace=k8s.io
Aug 5 21:48:31.125121 containerd[1442]: time="2024-08-05T21:48:31.125120438Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 21:48:31.704527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11083c938081fb026bbe7ccf9a182a740423690508b608d0c5b2ea262c2f9d32-rootfs.mount: Deactivated successfully.
Aug 5 21:48:32.014097 kubelet[2521]: E0805 21:48:32.013952 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:32.017326 containerd[1442]: time="2024-08-05T21:48:32.017270310Z" level=info msg="CreateContainer within sandbox \"bdfbfcd5028fdff332d59c081ce1c316359530d02fda0712194dd1028110aec1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 5 21:48:32.044020 containerd[1442]: time="2024-08-05T21:48:32.043961690Z" level=info msg="CreateContainer within sandbox \"bdfbfcd5028fdff332d59c081ce1c316359530d02fda0712194dd1028110aec1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c\""
Aug 5 21:48:32.044466 containerd[1442]: time="2024-08-05T21:48:32.044427161Z" level=info msg="StartContainer for \"a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c\""
Aug 5 21:48:32.081228 systemd[1]: Started cri-containerd-a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c.scope - libcontainer container a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c.
Aug 5 21:48:32.107714 containerd[1442]: time="2024-08-05T21:48:32.107658579Z" level=info msg="StartContainer for \"a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c\" returns successfully"
Aug 5 21:48:32.273283 kubelet[2521]: I0805 21:48:32.273172 2521 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Aug 5 21:48:32.296471 kubelet[2521]: I0805 21:48:32.296423 2521 topology_manager.go:215] "Topology Admit Handler" podUID="dd939507-734f-4ee3-918d-0bce2767cee9" podNamespace="kube-system" podName="coredns-76f75df574-rpdpj"
Aug 5 21:48:32.300745 kubelet[2521]: I0805 21:48:32.300705 2521 topology_manager.go:215] "Topology Admit Handler" podUID="d38bd9f6-90b8-4a78-a818-c3b961dd904c" podNamespace="kube-system" podName="coredns-76f75df574-bhfhn"
Aug 5 21:48:32.309417 systemd[1]: Created slice kubepods-burstable-poddd939507_734f_4ee3_918d_0bce2767cee9.slice - libcontainer container kubepods-burstable-poddd939507_734f_4ee3_918d_0bce2767cee9.slice.
Aug 5 21:48:32.311461 kubelet[2521]: I0805 21:48:32.311425 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd939507-734f-4ee3-918d-0bce2767cee9-config-volume\") pod \"coredns-76f75df574-rpdpj\" (UID: \"dd939507-734f-4ee3-918d-0bce2767cee9\") " pod="kube-system/coredns-76f75df574-rpdpj"
Aug 5 21:48:32.311461 kubelet[2521]: I0805 21:48:32.311466 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c985s\" (UniqueName: \"kubernetes.io/projected/dd939507-734f-4ee3-918d-0bce2767cee9-kube-api-access-c985s\") pod \"coredns-76f75df574-rpdpj\" (UID: \"dd939507-734f-4ee3-918d-0bce2767cee9\") " pod="kube-system/coredns-76f75df574-rpdpj"
Aug 5 21:48:32.320454 systemd[1]: Created slice kubepods-burstable-podd38bd9f6_90b8_4a78_a818_c3b961dd904c.slice - libcontainer container kubepods-burstable-podd38bd9f6_90b8_4a78_a818_c3b961dd904c.slice.
Aug 5 21:48:32.412014 kubelet[2521]: I0805 21:48:32.411971 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rln5\" (UniqueName: \"kubernetes.io/projected/d38bd9f6-90b8-4a78-a818-c3b961dd904c-kube-api-access-9rln5\") pod \"coredns-76f75df574-bhfhn\" (UID: \"d38bd9f6-90b8-4a78-a818-c3b961dd904c\") " pod="kube-system/coredns-76f75df574-bhfhn"
Aug 5 21:48:32.412014 kubelet[2521]: I0805 21:48:32.412023 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d38bd9f6-90b8-4a78-a818-c3b961dd904c-config-volume\") pod \"coredns-76f75df574-bhfhn\" (UID: \"d38bd9f6-90b8-4a78-a818-c3b961dd904c\") " pod="kube-system/coredns-76f75df574-bhfhn"
Aug 5 21:48:32.617598 kubelet[2521]: E0805 21:48:32.617505 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:32.619257 containerd[1442]: time="2024-08-05T21:48:32.619217459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rpdpj,Uid:dd939507-734f-4ee3-918d-0bce2767cee9,Namespace:kube-system,Attempt:0,}"
Aug 5 21:48:32.625286 kubelet[2521]: E0805 21:48:32.625228 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:32.626249 containerd[1442]: time="2024-08-05T21:48:32.625931867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bhfhn,Uid:d38bd9f6-90b8-4a78-a818-c3b961dd904c,Namespace:kube-system,Attempt:0,}"
Aug 5 21:48:33.030308 kubelet[2521]: E0805 21:48:33.030258 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:33.095701 systemd[1]: Started sshd@7-10.0.0.80:22-10.0.0.1:45654.service - OpenSSH per-connection server daemon (10.0.0.1:45654).
Aug 5 21:48:33.140188 sshd[3380]: Accepted publickey for core from 10.0.0.1 port 45654 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:48:33.142536 sshd[3380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:48:33.148240 systemd-logind[1422]: New session 8 of user core.
Aug 5 21:48:33.158307 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 5 21:48:33.344439 sshd[3380]: pam_unix(sshd:session): session closed for user core
Aug 5 21:48:33.348823 systemd[1]: sshd@7-10.0.0.80:22-10.0.0.1:45654.service: Deactivated successfully.
Aug 5 21:48:33.352605 systemd[1]: session-8.scope: Deactivated successfully.
Aug 5 21:48:33.353818 systemd-logind[1422]: Session 8 logged out. Waiting for processes to exit.
Aug 5 21:48:33.354802 systemd-logind[1422]: Removed session 8.
Aug 5 21:48:34.032815 kubelet[2521]: E0805 21:48:34.032451 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:34.409707 systemd-networkd[1378]: cilium_host: Link UP
Aug 5 21:48:34.409926 systemd-networkd[1378]: cilium_net: Link UP
Aug 5 21:48:34.410085 systemd-networkd[1378]: cilium_net: Gained carrier
Aug 5 21:48:34.411125 systemd-networkd[1378]: cilium_host: Gained carrier
Aug 5 21:48:34.513821 systemd-networkd[1378]: cilium_vxlan: Link UP
Aug 5 21:48:34.513829 systemd-networkd[1378]: cilium_vxlan: Gained carrier
Aug 5 21:48:34.802815 systemd-networkd[1378]: cilium_host: Gained IPv6LL
Aug 5 21:48:34.908105 kernel: NET: Registered PF_ALG protocol family
Aug 5 21:48:35.034695 kubelet[2521]: E0805 21:48:35.034562 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:35.263244 systemd-networkd[1378]: cilium_net: Gained IPv6LL
Aug 5 21:48:35.566707 systemd-networkd[1378]: lxc_health: Link UP
Aug 5 21:48:35.569270 systemd-networkd[1378]: lxc_health: Gained carrier
Aug 5 21:48:35.739147 systemd-networkd[1378]: lxc38d4dfe8c098: Link UP
Aug 5 21:48:35.743403 kernel: eth0: renamed from tmpca044
Aug 5 21:48:35.754899 systemd-networkd[1378]: lxc38d4dfe8c098: Gained carrier
Aug 5 21:48:35.755033 systemd-networkd[1378]: lxcf8552e7bea6e: Link UP
Aug 5 21:48:35.771599 kernel: eth0: renamed from tmp7ad2f
Aug 5 21:48:35.771774 kubelet[2521]: I0805 21:48:35.771736 2521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-nq5gj" podStartSLOduration=9.070777991 podStartE2EDuration="18.77169392s" podCreationTimestamp="2024-08-05 21:48:17 +0000 UTC" firstStartedPulling="2024-08-05 21:48:17.820916434 +0000 UTC m=+15.993305783" lastFinishedPulling="2024-08-05 21:48:27.521832363 +0000 UTC m=+25.694221712" observedRunningTime="2024-08-05 21:48:33.054409767 +0000 UTC m=+31.226799156" watchObservedRunningTime="2024-08-05 21:48:35.77169392 +0000 UTC m=+33.944083269"
Aug 5 21:48:35.776567 systemd-networkd[1378]: lxcf8552e7bea6e: Gained carrier
Aug 5 21:48:35.839471 systemd-networkd[1378]: cilium_vxlan: Gained IPv6LL
Aug 5 21:48:36.036289 kubelet[2521]: E0805 21:48:36.036245 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:37.119541 systemd-networkd[1378]: lxcf8552e7bea6e: Gained IPv6LL
Aug 5 21:48:37.247248 systemd-networkd[1378]: lxc_health: Gained IPv6LL
Aug 5 21:48:37.823336 systemd-networkd[1378]: lxc38d4dfe8c098: Gained IPv6LL
Aug 5 21:48:38.363864 systemd[1]: Started sshd@8-10.0.0.80:22-10.0.0.1:45668.service - OpenSSH per-connection server daemon (10.0.0.1:45668).
Aug 5 21:48:38.415735 sshd[3775]: Accepted publickey for core from 10.0.0.1 port 45668 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:48:38.417317 sshd[3775]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:48:38.421487 systemd-logind[1422]: New session 9 of user core.
Aug 5 21:48:38.430255 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 5 21:48:38.565816 sshd[3775]: pam_unix(sshd:session): session closed for user core
Aug 5 21:48:38.570123 systemd[1]: sshd@8-10.0.0.80:22-10.0.0.1:45668.service: Deactivated successfully.
Aug 5 21:48:38.576230 systemd[1]: session-9.scope: Deactivated successfully.
Aug 5 21:48:38.577591 systemd-logind[1422]: Session 9 logged out. Waiting for processes to exit.
Aug 5 21:48:38.578822 systemd-logind[1422]: Removed session 9.
Aug 5 21:48:39.570783 containerd[1442]: time="2024-08-05T21:48:39.570346870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 21:48:39.570783 containerd[1442]: time="2024-08-05T21:48:39.570548961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:48:39.570783 containerd[1442]: time="2024-08-05T21:48:39.570588283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 21:48:39.570783 containerd[1442]: time="2024-08-05T21:48:39.570620405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:48:39.570783 containerd[1442]: time="2024-08-05T21:48:39.570679288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 21:48:39.571538 containerd[1442]: time="2024-08-05T21:48:39.571477371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:48:39.571670 containerd[1442]: time="2024-08-05T21:48:39.571637660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 21:48:39.571767 containerd[1442]: time="2024-08-05T21:48:39.571729625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:48:39.593083 systemd[1]: run-containerd-runc-k8s.io-ca0448f9bcb5d8b80841fdb4f63adb09e7916aac43d44a3675bb063dbe13a959-runc.nUAXPb.mount: Deactivated successfully.
Aug 5 21:48:39.596578 systemd[1]: run-containerd-runc-k8s.io-7ad2f32be2dafd8ce5ac44cddfa53fb64015e03c625ed17c74234342e04c59c7-runc.cYRRoY.mount: Deactivated successfully.
Aug 5 21:48:39.614335 systemd[1]: Started cri-containerd-7ad2f32be2dafd8ce5ac44cddfa53fb64015e03c625ed17c74234342e04c59c7.scope - libcontainer container 7ad2f32be2dafd8ce5ac44cddfa53fb64015e03c625ed17c74234342e04c59c7.
Aug 5 21:48:39.615897 systemd[1]: Started cri-containerd-ca0448f9bcb5d8b80841fdb4f63adb09e7916aac43d44a3675bb063dbe13a959.scope - libcontainer container ca0448f9bcb5d8b80841fdb4f63adb09e7916aac43d44a3675bb063dbe13a959.
Aug 5 21:48:39.629469 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 5 21:48:39.631355 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 5 21:48:39.649291 containerd[1442]: time="2024-08-05T21:48:39.649242308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rpdpj,Uid:dd939507-734f-4ee3-918d-0bce2767cee9,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ad2f32be2dafd8ce5ac44cddfa53fb64015e03c625ed17c74234342e04c59c7\""
Aug 5 21:48:39.650493 kubelet[2521]: E0805 21:48:39.650468 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:39.653380 containerd[1442]: time="2024-08-05T21:48:39.653343568Z" level=info msg="CreateContainer within sandbox \"7ad2f32be2dafd8ce5ac44cddfa53fb64015e03c625ed17c74234342e04c59c7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 5 21:48:39.661402 containerd[1442]: time="2024-08-05T21:48:39.661361039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bhfhn,Uid:d38bd9f6-90b8-4a78-a818-c3b961dd904c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca0448f9bcb5d8b80841fdb4f63adb09e7916aac43d44a3675bb063dbe13a959\""
Aug 5 21:48:39.662235 kubelet[2521]: E0805 21:48:39.662215 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:39.668448 containerd[1442]: time="2024-08-05T21:48:39.668286451Z" level=info msg="CreateContainer within sandbox \"ca0448f9bcb5d8b80841fdb4f63adb09e7916aac43d44a3675bb063dbe13a959\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 5 21:48:39.683282 containerd[1442]: time="2024-08-05T21:48:39.683227613Z" level=info msg="CreateContainer within sandbox \"7ad2f32be2dafd8ce5ac44cddfa53fb64015e03c625ed17c74234342e04c59c7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"04e3cd04c4088661240d0208089be6fc7a7892606a3ffa2ebe409799dd732bef\""
Aug 5 21:48:39.683996 containerd[1442]: time="2024-08-05T21:48:39.683968773Z" level=info msg="StartContainer for \"04e3cd04c4088661240d0208089be6fc7a7892606a3ffa2ebe409799dd732bef\""
Aug 5 21:48:39.711278 systemd[1]: Started cri-containerd-04e3cd04c4088661240d0208089be6fc7a7892606a3ffa2ebe409799dd732bef.scope - libcontainer container 04e3cd04c4088661240d0208089be6fc7a7892606a3ffa2ebe409799dd732bef.
Aug 5 21:48:39.711496 containerd[1442]: time="2024-08-05T21:48:39.711300081Z" level=info msg="CreateContainer within sandbox \"ca0448f9bcb5d8b80841fdb4f63adb09e7916aac43d44a3675bb063dbe13a959\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4e70db19af744c5b9f42d9e9cc9cd88adbbb1c72f006182fa8398959cf797321\""
Aug 5 21:48:39.712662 containerd[1442]: time="2024-08-05T21:48:39.712536627Z" level=info msg="StartContainer for \"4e70db19af744c5b9f42d9e9cc9cd88adbbb1c72f006182fa8398959cf797321\""
Aug 5 21:48:39.755330 systemd[1]: Started cri-containerd-4e70db19af744c5b9f42d9e9cc9cd88adbbb1c72f006182fa8398959cf797321.scope - libcontainer container 4e70db19af744c5b9f42d9e9cc9cd88adbbb1c72f006182fa8398959cf797321.
Aug 5 21:48:39.767387 containerd[1442]: time="2024-08-05T21:48:39.762006284Z" level=info msg="StartContainer for \"04e3cd04c4088661240d0208089be6fc7a7892606a3ffa2ebe409799dd732bef\" returns successfully"
Aug 5 21:48:39.798146 containerd[1442]: time="2024-08-05T21:48:39.798014058Z" level=info msg="StartContainer for \"4e70db19af744c5b9f42d9e9cc9cd88adbbb1c72f006182fa8398959cf797321\" returns successfully"
Aug 5 21:48:40.047267 kubelet[2521]: E0805 21:48:40.047238 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:40.050104 kubelet[2521]: E0805 21:48:40.049987 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:40.058785 kubelet[2521]: I0805 21:48:40.058725 2521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-bhfhn" podStartSLOduration=23.058685737 podStartE2EDuration="23.058685737s" podCreationTimestamp="2024-08-05 21:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:48:40.057319226 +0000 UTC m=+38.229708575" watchObservedRunningTime="2024-08-05 21:48:40.058685737 +0000 UTC m=+38.231075046"
Aug 5 21:48:40.071809 kubelet[2521]: I0805 21:48:40.071200 2521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-rpdpj" podStartSLOduration=23.070923537 podStartE2EDuration="23.070923537s" podCreationTimestamp="2024-08-05 21:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:48:40.069359215 +0000 UTC m=+38.241748564" watchObservedRunningTime="2024-08-05 21:48:40.070923537 +0000 UTC m=+38.243312886"
Aug 5 21:48:41.051853 kubelet[2521]: E0805 21:48:41.051815 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:41.052324 kubelet[2521]: E0805 21:48:41.052293 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:42.053031 kubelet[2521]: E0805 21:48:42.053005 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:48:43.579241 systemd[1]: Started sshd@9-10.0.0.80:22-10.0.0.1:42728.service - OpenSSH per-connection server daemon (10.0.0.1:42728).
Aug 5 21:48:43.621977 sshd[3960]: Accepted publickey for core from 10.0.0.1 port 42728 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:48:43.623493 sshd[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:48:43.628059 systemd-logind[1422]: New session 10 of user core.
Aug 5 21:48:43.637227 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 5 21:48:43.749085 sshd[3960]: pam_unix(sshd:session): session closed for user core
Aug 5 21:48:43.752499 systemd[1]: sshd@9-10.0.0.80:22-10.0.0.1:42728.service: Deactivated successfully.
Aug 5 21:48:43.754420 systemd[1]: session-10.scope: Deactivated successfully.
Aug 5 21:48:43.755103 systemd-logind[1422]: Session 10 logged out. Waiting for processes to exit.
Aug 5 21:48:43.755834 systemd-logind[1422]: Removed session 10.
Aug 5 21:48:45.740642 kubelet[2521]: I0805 21:48:45.740582 2521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 21:48:45.741761 kubelet[2521]: E0805 21:48:45.741733 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:48:46.061166 kubelet[2521]: E0805 21:48:46.060866 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:48:48.769215 systemd[1]: Started sshd@10-10.0.0.80:22-10.0.0.1:42736.service - OpenSSH per-connection server daemon (10.0.0.1:42736). Aug 5 21:48:48.808546 sshd[3977]: Accepted publickey for core from 10.0.0.1 port 42736 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:48:48.809880 sshd[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:48:48.815237 systemd-logind[1422]: New session 11 of user core. Aug 5 21:48:48.821339 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 5 21:48:48.932816 sshd[3977]: pam_unix(sshd:session): session closed for user core Aug 5 21:48:48.941567 systemd[1]: sshd@10-10.0.0.80:22-10.0.0.1:42736.service: Deactivated successfully. Aug 5 21:48:48.943116 systemd[1]: session-11.scope: Deactivated successfully. Aug 5 21:48:48.944697 systemd-logind[1422]: Session 11 logged out. Waiting for processes to exit. Aug 5 21:48:48.953327 systemd[1]: Started sshd@11-10.0.0.80:22-10.0.0.1:42746.service - OpenSSH per-connection server daemon (10.0.0.1:42746). Aug 5 21:48:48.954328 systemd-logind[1422]: Removed session 11. 
Aug 5 21:48:48.988266 sshd[3992]: Accepted publickey for core from 10.0.0.1 port 42746 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:48:48.989454 sshd[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:48:48.993752 systemd-logind[1422]: New session 12 of user core. Aug 5 21:48:49.001233 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 5 21:48:49.144814 sshd[3992]: pam_unix(sshd:session): session closed for user core Aug 5 21:48:49.158095 systemd[1]: sshd@11-10.0.0.80:22-10.0.0.1:42746.service: Deactivated successfully. Aug 5 21:48:49.160888 systemd[1]: session-12.scope: Deactivated successfully. Aug 5 21:48:49.163572 systemd-logind[1422]: Session 12 logged out. Waiting for processes to exit. Aug 5 21:48:49.172705 systemd[1]: Started sshd@12-10.0.0.80:22-10.0.0.1:42748.service - OpenSSH per-connection server daemon (10.0.0.1:42748). Aug 5 21:48:49.174115 systemd-logind[1422]: Removed session 12. Aug 5 21:48:49.216775 sshd[4005]: Accepted publickey for core from 10.0.0.1 port 42748 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:48:49.218266 sshd[4005]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:48:49.222848 systemd-logind[1422]: New session 13 of user core. Aug 5 21:48:49.229815 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 5 21:48:49.340393 sshd[4005]: pam_unix(sshd:session): session closed for user core Aug 5 21:48:49.343629 systemd[1]: sshd@12-10.0.0.80:22-10.0.0.1:42748.service: Deactivated successfully. Aug 5 21:48:49.345400 systemd[1]: session-13.scope: Deactivated successfully. Aug 5 21:48:49.345976 systemd-logind[1422]: Session 13 logged out. Waiting for processes to exit. Aug 5 21:48:49.346725 systemd-logind[1422]: Removed session 13. Aug 5 21:48:54.352770 systemd[1]: Started sshd@13-10.0.0.80:22-10.0.0.1:37396.service - OpenSSH per-connection server daemon (10.0.0.1:37396). 
Aug 5 21:48:54.392926 sshd[4020]: Accepted publickey for core from 10.0.0.1 port 37396 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:48:54.393713 sshd[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:48:54.399344 systemd-logind[1422]: New session 14 of user core. Aug 5 21:48:54.405532 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 5 21:48:54.519081 sshd[4020]: pam_unix(sshd:session): session closed for user core Aug 5 21:48:54.522740 systemd[1]: sshd@13-10.0.0.80:22-10.0.0.1:37396.service: Deactivated successfully. Aug 5 21:48:54.524800 systemd[1]: session-14.scope: Deactivated successfully. Aug 5 21:48:54.525619 systemd-logind[1422]: Session 14 logged out. Waiting for processes to exit. Aug 5 21:48:54.526437 systemd-logind[1422]: Removed session 14. Aug 5 21:48:59.529648 systemd[1]: Started sshd@14-10.0.0.80:22-10.0.0.1:37412.service - OpenSSH per-connection server daemon (10.0.0.1:37412). Aug 5 21:48:59.567925 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 37412 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:48:59.569155 sshd[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:48:59.572444 systemd-logind[1422]: New session 15 of user core. Aug 5 21:48:59.582219 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 5 21:48:59.689648 sshd[4034]: pam_unix(sshd:session): session closed for user core Aug 5 21:48:59.700610 systemd[1]: sshd@14-10.0.0.80:22-10.0.0.1:37412.service: Deactivated successfully. Aug 5 21:48:59.702188 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 21:48:59.703651 systemd-logind[1422]: Session 15 logged out. Waiting for processes to exit. Aug 5 21:48:59.704845 systemd[1]: Started sshd@15-10.0.0.80:22-10.0.0.1:37428.service - OpenSSH per-connection server daemon (10.0.0.1:37428). Aug 5 21:48:59.706873 systemd-logind[1422]: Removed session 15. 
Aug 5 21:48:59.744815 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 37428 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:48:59.745975 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:48:59.749387 systemd-logind[1422]: New session 16 of user core. Aug 5 21:48:59.760211 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 5 21:48:59.990613 sshd[4048]: pam_unix(sshd:session): session closed for user core Aug 5 21:48:59.996549 systemd[1]: sshd@15-10.0.0.80:22-10.0.0.1:37428.service: Deactivated successfully. Aug 5 21:49:00.000367 systemd[1]: session-16.scope: Deactivated successfully. Aug 5 21:49:00.001594 systemd-logind[1422]: Session 16 logged out. Waiting for processes to exit. Aug 5 21:49:00.010352 systemd[1]: Started sshd@16-10.0.0.80:22-10.0.0.1:37442.service - OpenSSH per-connection server daemon (10.0.0.1:37442). Aug 5 21:49:00.012580 systemd-logind[1422]: Removed session 16. Aug 5 21:49:00.048822 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 37442 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:49:00.049966 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:49:00.053936 systemd-logind[1422]: New session 17 of user core. Aug 5 21:49:00.063229 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 5 21:49:01.335047 sshd[4061]: pam_unix(sshd:session): session closed for user core Aug 5 21:49:01.344814 systemd[1]: sshd@16-10.0.0.80:22-10.0.0.1:37442.service: Deactivated successfully. Aug 5 21:49:01.348946 systemd[1]: session-17.scope: Deactivated successfully. Aug 5 21:49:01.351141 systemd-logind[1422]: Session 17 logged out. Waiting for processes to exit. Aug 5 21:49:01.365417 systemd[1]: Started sshd@17-10.0.0.80:22-10.0.0.1:37446.service - OpenSSH per-connection server daemon (10.0.0.1:37446). Aug 5 21:49:01.368247 systemd-logind[1422]: Removed session 17. 
Aug 5 21:49:01.406440 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 37446 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:49:01.407905 sshd[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:49:01.411657 systemd-logind[1422]: New session 18 of user core. Aug 5 21:49:01.424228 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 5 21:49:01.682574 sshd[4082]: pam_unix(sshd:session): session closed for user core Aug 5 21:49:01.692803 systemd[1]: sshd@17-10.0.0.80:22-10.0.0.1:37446.service: Deactivated successfully. Aug 5 21:49:01.695722 systemd[1]: session-18.scope: Deactivated successfully. Aug 5 21:49:01.697620 systemd-logind[1422]: Session 18 logged out. Waiting for processes to exit. Aug 5 21:49:01.707660 systemd[1]: Started sshd@18-10.0.0.80:22-10.0.0.1:37452.service - OpenSSH per-connection server daemon (10.0.0.1:37452). Aug 5 21:49:01.708888 systemd-logind[1422]: Removed session 18. Aug 5 21:49:01.745533 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 37452 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:49:01.747048 sshd[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:49:01.751432 systemd-logind[1422]: New session 19 of user core. Aug 5 21:49:01.761297 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 5 21:49:01.871011 sshd[4094]: pam_unix(sshd:session): session closed for user core Aug 5 21:49:01.874446 systemd[1]: sshd@18-10.0.0.80:22-10.0.0.1:37452.service: Deactivated successfully. Aug 5 21:49:01.876247 systemd[1]: session-19.scope: Deactivated successfully. Aug 5 21:49:01.876791 systemd-logind[1422]: Session 19 logged out. Waiting for processes to exit. Aug 5 21:49:01.877572 systemd-logind[1422]: Removed session 19. Aug 5 21:49:06.882755 systemd[1]: Started sshd@19-10.0.0.80:22-10.0.0.1:48362.service - OpenSSH per-connection server daemon (10.0.0.1:48362). 
Aug 5 21:49:06.922022 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 48362 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:49:06.923279 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:49:06.927107 systemd-logind[1422]: New session 20 of user core. Aug 5 21:49:06.933257 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 5 21:49:07.052277 sshd[4113]: pam_unix(sshd:session): session closed for user core Aug 5 21:49:07.056403 systemd[1]: sshd@19-10.0.0.80:22-10.0.0.1:48362.service: Deactivated successfully. Aug 5 21:49:07.058126 systemd[1]: session-20.scope: Deactivated successfully. Aug 5 21:49:07.060038 systemd-logind[1422]: Session 20 logged out. Waiting for processes to exit. Aug 5 21:49:07.062028 systemd-logind[1422]: Removed session 20. Aug 5 21:49:12.061882 systemd[1]: Started sshd@20-10.0.0.80:22-10.0.0.1:49900.service - OpenSSH per-connection server daemon (10.0.0.1:49900). Aug 5 21:49:12.101968 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 49900 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:49:12.103739 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:49:12.107310 systemd-logind[1422]: New session 21 of user core. Aug 5 21:49:12.118227 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 5 21:49:12.230413 sshd[4127]: pam_unix(sshd:session): session closed for user core Aug 5 21:49:12.233786 systemd[1]: sshd@20-10.0.0.80:22-10.0.0.1:49900.service: Deactivated successfully. Aug 5 21:49:12.235778 systemd[1]: session-21.scope: Deactivated successfully. Aug 5 21:49:12.238210 systemd-logind[1422]: Session 21 logged out. Waiting for processes to exit. Aug 5 21:49:12.239582 systemd-logind[1422]: Removed session 21. Aug 5 21:49:17.244610 systemd[1]: Started sshd@21-10.0.0.80:22-10.0.0.1:49908.service - OpenSSH per-connection server daemon (10.0.0.1:49908). 
Aug 5 21:49:17.283562 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 49908 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:49:17.284790 sshd[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:49:17.289118 systemd-logind[1422]: New session 22 of user core. Aug 5 21:49:17.297272 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 5 21:49:17.403339 sshd[4143]: pam_unix(sshd:session): session closed for user core Aug 5 21:49:17.406778 systemd[1]: sshd@21-10.0.0.80:22-10.0.0.1:49908.service: Deactivated successfully. Aug 5 21:49:17.409038 systemd[1]: session-22.scope: Deactivated successfully. Aug 5 21:49:17.409848 systemd-logind[1422]: Session 22 logged out. Waiting for processes to exit. Aug 5 21:49:17.410610 systemd-logind[1422]: Removed session 22. Aug 5 21:49:21.917350 kubelet[2521]: E0805 21:49:21.917314 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:49:22.413704 systemd[1]: Started sshd@22-10.0.0.80:22-10.0.0.1:44352.service - OpenSSH per-connection server daemon (10.0.0.1:44352). Aug 5 21:49:22.452236 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 44352 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:49:22.453587 sshd[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:49:22.457154 systemd-logind[1422]: New session 23 of user core. Aug 5 21:49:22.465233 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 5 21:49:22.569134 sshd[4159]: pam_unix(sshd:session): session closed for user core Aug 5 21:49:22.583618 systemd[1]: sshd@22-10.0.0.80:22-10.0.0.1:44352.service: Deactivated successfully. Aug 5 21:49:22.587298 systemd[1]: session-23.scope: Deactivated successfully. Aug 5 21:49:22.588770 systemd-logind[1422]: Session 23 logged out. 
Waiting for processes to exit. Aug 5 21:49:22.594417 systemd[1]: Started sshd@23-10.0.0.80:22-10.0.0.1:44366.service - OpenSSH per-connection server daemon (10.0.0.1:44366). Aug 5 21:49:22.595483 systemd-logind[1422]: Removed session 23. Aug 5 21:49:22.630323 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 44366 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:49:22.631894 sshd[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:49:22.635582 systemd-logind[1422]: New session 24 of user core. Aug 5 21:49:22.647277 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 5 21:49:24.201563 containerd[1442]: time="2024-08-05T21:49:24.201519040Z" level=info msg="StopContainer for \"89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388\" with timeout 30 (s)" Aug 5 21:49:24.202586 containerd[1442]: time="2024-08-05T21:49:24.202524685Z" level=info msg="Stop container \"89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388\" with signal terminated" Aug 5 21:49:24.212742 systemd[1]: cri-containerd-89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388.scope: Deactivated successfully. Aug 5 21:49:24.236666 containerd[1442]: time="2024-08-05T21:49:24.236601045Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 21:49:24.237147 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388-rootfs.mount: Deactivated successfully. 
Aug 5 21:49:24.242296 containerd[1442]: time="2024-08-05T21:49:24.242260031Z" level=info msg="StopContainer for \"a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c\" with timeout 2 (s)" Aug 5 21:49:24.242627 containerd[1442]: time="2024-08-05T21:49:24.242595793Z" level=info msg="Stop container \"a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c\" with signal terminated" Aug 5 21:49:24.246408 containerd[1442]: time="2024-08-05T21:49:24.246349610Z" level=info msg="shim disconnected" id=89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388 namespace=k8s.io Aug 5 21:49:24.246486 containerd[1442]: time="2024-08-05T21:49:24.246403171Z" level=warning msg="cleaning up after shim disconnected" id=89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388 namespace=k8s.io Aug 5 21:49:24.246486 containerd[1442]: time="2024-08-05T21:49:24.246426291Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:49:24.250338 systemd-networkd[1378]: lxc_health: Link DOWN Aug 5 21:49:24.250344 systemd-networkd[1378]: lxc_health: Lost carrier Aug 5 21:49:24.263248 containerd[1442]: time="2024-08-05T21:49:24.263208609Z" level=info msg="StopContainer for \"89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388\" returns successfully" Aug 5 21:49:24.264055 containerd[1442]: time="2024-08-05T21:49:24.264018893Z" level=info msg="StopPodSandbox for \"9ee74728f1849db257ed614bb1d6814dd64745dbe376f70d55202c9218cd39c3\"" Aug 5 21:49:24.271689 systemd[1]: cri-containerd-a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c.scope: Deactivated successfully. Aug 5 21:49:24.271952 systemd[1]: cri-containerd-a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c.scope: Consumed 6.866s CPU time. 
Aug 5 21:49:24.272848 containerd[1442]: time="2024-08-05T21:49:24.264182214Z" level=info msg="Container to stop \"89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 21:49:24.274715 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ee74728f1849db257ed614bb1d6814dd64745dbe376f70d55202c9218cd39c3-shm.mount: Deactivated successfully. Aug 5 21:49:24.285942 systemd[1]: cri-containerd-9ee74728f1849db257ed614bb1d6814dd64745dbe376f70d55202c9218cd39c3.scope: Deactivated successfully. Aug 5 21:49:24.292251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c-rootfs.mount: Deactivated successfully. Aug 5 21:49:24.307605 containerd[1442]: time="2024-08-05T21:49:24.307297256Z" level=info msg="shim disconnected" id=a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c namespace=k8s.io Aug 5 21:49:24.307605 containerd[1442]: time="2024-08-05T21:49:24.307352296Z" level=warning msg="cleaning up after shim disconnected" id=a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c namespace=k8s.io Aug 5 21:49:24.307605 containerd[1442]: time="2024-08-05T21:49:24.307373816Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:49:24.309377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ee74728f1849db257ed614bb1d6814dd64745dbe376f70d55202c9218cd39c3-rootfs.mount: Deactivated successfully. 
Aug 5 21:49:24.314569 containerd[1442]: time="2024-08-05T21:49:24.313423004Z" level=info msg="shim disconnected" id=9ee74728f1849db257ed614bb1d6814dd64745dbe376f70d55202c9218cd39c3 namespace=k8s.io Aug 5 21:49:24.314569 containerd[1442]: time="2024-08-05T21:49:24.313564245Z" level=warning msg="cleaning up after shim disconnected" id=9ee74728f1849db257ed614bb1d6814dd64745dbe376f70d55202c9218cd39c3 namespace=k8s.io Aug 5 21:49:24.314569 containerd[1442]: time="2024-08-05T21:49:24.313573565Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:49:24.322590 containerd[1442]: time="2024-08-05T21:49:24.322535607Z" level=info msg="StopContainer for \"a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c\" returns successfully" Aug 5 21:49:24.323233 containerd[1442]: time="2024-08-05T21:49:24.323040049Z" level=info msg="StopPodSandbox for \"bdfbfcd5028fdff332d59c081ce1c316359530d02fda0712194dd1028110aec1\"" Aug 5 21:49:24.323233 containerd[1442]: time="2024-08-05T21:49:24.323095090Z" level=info msg="Container to stop \"7a8935b99af834e8be4c7e13b866c29ec7e0b44f4e75479291d0d5d0b4a3b4b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 21:49:24.323233 containerd[1442]: time="2024-08-05T21:49:24.323130450Z" level=info msg="Container to stop \"2b47b143865d0613627484ebb9460fccfc7d72aff5ae7ce3c7d50f3fd838ab03\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 21:49:24.323233 containerd[1442]: time="2024-08-05T21:49:24.323140170Z" level=info msg="Container to stop \"11083c938081fb026bbe7ccf9a182a740423690508b608d0c5b2ea262c2f9d32\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 21:49:24.323233 containerd[1442]: time="2024-08-05T21:49:24.323149210Z" level=info msg="Container to stop \"a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 21:49:24.323233 containerd[1442]: 
time="2024-08-05T21:49:24.323158450Z" level=info msg="Container to stop \"c66b5e2b1602d9bbe32974a33e96314c3d98fb7bc3b9a5bdc764823b92f0c39d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 21:49:24.326276 containerd[1442]: time="2024-08-05T21:49:24.326189904Z" level=info msg="TearDown network for sandbox \"9ee74728f1849db257ed614bb1d6814dd64745dbe376f70d55202c9218cd39c3\" successfully" Aug 5 21:49:24.326276 containerd[1442]: time="2024-08-05T21:49:24.326215824Z" level=info msg="StopPodSandbox for \"9ee74728f1849db257ed614bb1d6814dd64745dbe376f70d55202c9218cd39c3\" returns successfully" Aug 5 21:49:24.329594 systemd[1]: cri-containerd-bdfbfcd5028fdff332d59c081ce1c316359530d02fda0712194dd1028110aec1.scope: Deactivated successfully. Aug 5 21:49:24.359585 containerd[1442]: time="2024-08-05T21:49:24.359519260Z" level=info msg="shim disconnected" id=bdfbfcd5028fdff332d59c081ce1c316359530d02fda0712194dd1028110aec1 namespace=k8s.io Aug 5 21:49:24.359585 containerd[1442]: time="2024-08-05T21:49:24.359586821Z" level=warning msg="cleaning up after shim disconnected" id=bdfbfcd5028fdff332d59c081ce1c316359530d02fda0712194dd1028110aec1 namespace=k8s.io Aug 5 21:49:24.359585 containerd[1442]: time="2024-08-05T21:49:24.359596141Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:49:24.370840 containerd[1442]: time="2024-08-05T21:49:24.370777113Z" level=warning msg="cleanup warnings time=\"2024-08-05T21:49:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 5 21:49:24.376581 containerd[1442]: time="2024-08-05T21:49:24.376402939Z" level=info msg="TearDown network for sandbox \"bdfbfcd5028fdff332d59c081ce1c316359530d02fda0712194dd1028110aec1\" successfully" Aug 5 21:49:24.376581 containerd[1442]: time="2024-08-05T21:49:24.376439019Z" level=info msg="StopPodSandbox for 
\"bdfbfcd5028fdff332d59c081ce1c316359530d02fda0712194dd1028110aec1\" returns successfully" Aug 5 21:49:24.448771 kubelet[2521]: I0805 21:49:24.448734 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-cilium-cgroup\") pod \"b213878d-d386-423a-8eea-1a919c0565c0\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " Aug 5 21:49:24.449160 kubelet[2521]: I0805 21:49:24.448784 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b213878d-d386-423a-8eea-1a919c0565c0-cilium-config-path\") pod \"b213878d-d386-423a-8eea-1a919c0565c0\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " Aug 5 21:49:24.449160 kubelet[2521]: I0805 21:49:24.448804 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-lib-modules\") pod \"b213878d-d386-423a-8eea-1a919c0565c0\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " Aug 5 21:49:24.449160 kubelet[2521]: I0805 21:49:24.448821 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-host-proc-sys-kernel\") pod \"b213878d-d386-423a-8eea-1a919c0565c0\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " Aug 5 21:49:24.449160 kubelet[2521]: I0805 21:49:24.448841 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-bpf-maps\") pod \"b213878d-d386-423a-8eea-1a919c0565c0\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " Aug 5 21:49:24.449160 kubelet[2521]: I0805 21:49:24.448862 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b213878d-d386-423a-8eea-1a919c0565c0-clustermesh-secrets\") pod \"b213878d-d386-423a-8eea-1a919c0565c0\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " Aug 5 21:49:24.449160 kubelet[2521]: I0805 21:49:24.448879 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-host-proc-sys-net\") pod \"b213878d-d386-423a-8eea-1a919c0565c0\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " Aug 5 21:49:24.449301 kubelet[2521]: I0805 21:49:24.448898 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b213878d-d386-423a-8eea-1a919c0565c0-hubble-tls\") pod \"b213878d-d386-423a-8eea-1a919c0565c0\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " Aug 5 21:49:24.449301 kubelet[2521]: I0805 21:49:24.448915 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-hostproc\") pod \"b213878d-d386-423a-8eea-1a919c0565c0\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " Aug 5 21:49:24.449301 kubelet[2521]: I0805 21:49:24.448931 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-cni-path\") pod \"b213878d-d386-423a-8eea-1a919c0565c0\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " Aug 5 21:49:24.449301 kubelet[2521]: I0805 21:49:24.448949 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-xtables-lock\") pod \"b213878d-d386-423a-8eea-1a919c0565c0\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " Aug 5 21:49:24.449301 kubelet[2521]: I0805 21:49:24.448970 2521 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2j7p\" (UniqueName: \"kubernetes.io/projected/b213878d-d386-423a-8eea-1a919c0565c0-kube-api-access-q2j7p\") pod \"b213878d-d386-423a-8eea-1a919c0565c0\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " Aug 5 21:49:24.449301 kubelet[2521]: I0805 21:49:24.448992 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2ddacdb0-acab-4a57-ae2d-86f91ba20009-cilium-config-path\") pod \"2ddacdb0-acab-4a57-ae2d-86f91ba20009\" (UID: \"2ddacdb0-acab-4a57-ae2d-86f91ba20009\") " Aug 5 21:49:24.449425 kubelet[2521]: I0805 21:49:24.449012 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqfnf\" (UniqueName: \"kubernetes.io/projected/2ddacdb0-acab-4a57-ae2d-86f91ba20009-kube-api-access-jqfnf\") pod \"2ddacdb0-acab-4a57-ae2d-86f91ba20009\" (UID: \"2ddacdb0-acab-4a57-ae2d-86f91ba20009\") " Aug 5 21:49:24.449425 kubelet[2521]: I0805 21:49:24.449029 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-etc-cni-netd\") pod \"b213878d-d386-423a-8eea-1a919c0565c0\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " Aug 5 21:49:24.449425 kubelet[2521]: I0805 21:49:24.449045 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-cilium-run\") pod \"b213878d-d386-423a-8eea-1a919c0565c0\" (UID: \"b213878d-d386-423a-8eea-1a919c0565c0\") " Aug 5 21:49:24.449425 kubelet[2521]: I0805 21:49:24.449126 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b213878d-d386-423a-8eea-1a919c0565c0" 
(UID: "b213878d-d386-423a-8eea-1a919c0565c0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 21:49:24.449425 kubelet[2521]: I0805 21:49:24.449169 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b213878d-d386-423a-8eea-1a919c0565c0" (UID: "b213878d-d386-423a-8eea-1a919c0565c0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 21:49:24.450940 kubelet[2521]: I0805 21:49:24.450910 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b213878d-d386-423a-8eea-1a919c0565c0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b213878d-d386-423a-8eea-1a919c0565c0" (UID: "b213878d-d386-423a-8eea-1a919c0565c0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 5 21:49:24.451023 kubelet[2521]: I0805 21:49:24.450974 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b213878d-d386-423a-8eea-1a919c0565c0" (UID: "b213878d-d386-423a-8eea-1a919c0565c0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 21:49:24.451023 kubelet[2521]: I0805 21:49:24.450992 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b213878d-d386-423a-8eea-1a919c0565c0" (UID: "b213878d-d386-423a-8eea-1a919c0565c0"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 21:49:24.451023 kubelet[2521]: I0805 21:49:24.451008 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b213878d-d386-423a-8eea-1a919c0565c0" (UID: "b213878d-d386-423a-8eea-1a919c0565c0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 21:49:24.451150 kubelet[2521]: I0805 21:49:24.451118 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b213878d-d386-423a-8eea-1a919c0565c0" (UID: "b213878d-d386-423a-8eea-1a919c0565c0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 21:49:24.451181 kubelet[2521]: I0805 21:49:24.451158 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b213878d-d386-423a-8eea-1a919c0565c0" (UID: "b213878d-d386-423a-8eea-1a919c0565c0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 21:49:24.451497 kubelet[2521]: I0805 21:49:24.451407 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b213878d-d386-423a-8eea-1a919c0565c0" (UID: "b213878d-d386-423a-8eea-1a919c0565c0"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 21:49:24.454574 kubelet[2521]: I0805 21:49:24.454437 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ddacdb0-acab-4a57-ae2d-86f91ba20009-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2ddacdb0-acab-4a57-ae2d-86f91ba20009" (UID: "2ddacdb0-acab-4a57-ae2d-86f91ba20009"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 5 21:49:24.454574 kubelet[2521]: I0805 21:49:24.454480 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-hostproc" (OuterVolumeSpecName: "hostproc") pod "b213878d-d386-423a-8eea-1a919c0565c0" (UID: "b213878d-d386-423a-8eea-1a919c0565c0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 21:49:24.454574 kubelet[2521]: I0805 21:49:24.454497 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-cni-path" (OuterVolumeSpecName: "cni-path") pod "b213878d-d386-423a-8eea-1a919c0565c0" (UID: "b213878d-d386-423a-8eea-1a919c0565c0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 21:49:24.456683 kubelet[2521]: I0805 21:49:24.456619 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b213878d-d386-423a-8eea-1a919c0565c0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b213878d-d386-423a-8eea-1a919c0565c0" (UID: "b213878d-d386-423a-8eea-1a919c0565c0"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 5 21:49:24.456979 kubelet[2521]: I0805 21:49:24.456923 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ddacdb0-acab-4a57-ae2d-86f91ba20009-kube-api-access-jqfnf" (OuterVolumeSpecName: "kube-api-access-jqfnf") pod "2ddacdb0-acab-4a57-ae2d-86f91ba20009" (UID: "2ddacdb0-acab-4a57-ae2d-86f91ba20009"). InnerVolumeSpecName "kube-api-access-jqfnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 5 21:49:24.457039 kubelet[2521]: I0805 21:49:24.456992 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b213878d-d386-423a-8eea-1a919c0565c0-kube-api-access-q2j7p" (OuterVolumeSpecName: "kube-api-access-q2j7p") pod "b213878d-d386-423a-8eea-1a919c0565c0" (UID: "b213878d-d386-423a-8eea-1a919c0565c0"). InnerVolumeSpecName "kube-api-access-q2j7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 5 21:49:24.457891 kubelet[2521]: I0805 21:49:24.457842 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b213878d-d386-423a-8eea-1a919c0565c0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b213878d-d386-423a-8eea-1a919c0565c0" (UID: "b213878d-d386-423a-8eea-1a919c0565c0"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 5 21:49:24.550537 kubelet[2521]: I0805 21:49:24.550259 2521 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 5 21:49:24.550537 kubelet[2521]: I0805 21:49:24.550313 2521 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b213878d-d386-423a-8eea-1a919c0565c0-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 5 21:49:24.550537 kubelet[2521]: I0805 21:49:24.550334 2521 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 5 21:49:24.550537 kubelet[2521]: I0805 21:49:24.550353 2521 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b213878d-d386-423a-8eea-1a919c0565c0-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 5 21:49:24.550537 kubelet[2521]: I0805 21:49:24.550370 2521 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 5 21:49:24.550537 kubelet[2521]: I0805 21:49:24.550386 2521 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 5 21:49:24.550537 kubelet[2521]: I0805 21:49:24.550403 2521 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 5 21:49:24.550537 kubelet[2521]: I0805 21:49:24.550423 2521 reconciler_common.go:300] 
"Volume detached for volume \"kube-api-access-q2j7p\" (UniqueName: \"kubernetes.io/projected/b213878d-d386-423a-8eea-1a919c0565c0-kube-api-access-q2j7p\") on node \"localhost\" DevicePath \"\"" Aug 5 21:49:24.550844 kubelet[2521]: I0805 21:49:24.550441 2521 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2ddacdb0-acab-4a57-ae2d-86f91ba20009-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 5 21:49:24.550844 kubelet[2521]: I0805 21:49:24.550459 2521 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jqfnf\" (UniqueName: \"kubernetes.io/projected/2ddacdb0-acab-4a57-ae2d-86f91ba20009-kube-api-access-jqfnf\") on node \"localhost\" DevicePath \"\"" Aug 5 21:49:24.550844 kubelet[2521]: I0805 21:49:24.550469 2521 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 5 21:49:24.550844 kubelet[2521]: I0805 21:49:24.550478 2521 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 5 21:49:24.550844 kubelet[2521]: I0805 21:49:24.550487 2521 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b213878d-d386-423a-8eea-1a919c0565c0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 5 21:49:24.550844 kubelet[2521]: I0805 21:49:24.550496 2521 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 5 21:49:24.550844 kubelet[2521]: I0805 21:49:24.550505 2521 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 5 21:49:24.550844 kubelet[2521]: I0805 21:49:24.550515 2521 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b213878d-d386-423a-8eea-1a919c0565c0-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 5 21:49:25.150324 kubelet[2521]: I0805 21:49:25.150275 2521 scope.go:117] "RemoveContainer" containerID="89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388" Aug 5 21:49:25.151426 containerd[1442]: time="2024-08-05T21:49:25.151389349Z" level=info msg="RemoveContainer for \"89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388\"" Aug 5 21:49:25.154356 systemd[1]: Removed slice kubepods-besteffort-pod2ddacdb0_acab_4a57_ae2d_86f91ba20009.slice - libcontainer container kubepods-besteffort-pod2ddacdb0_acab_4a57_ae2d_86f91ba20009.slice. Aug 5 21:49:25.162162 systemd[1]: Removed slice kubepods-burstable-podb213878d_d386_423a_8eea_1a919c0565c0.slice - libcontainer container kubepods-burstable-podb213878d_d386_423a_8eea_1a919c0565c0.slice. Aug 5 21:49:25.162269 systemd[1]: kubepods-burstable-podb213878d_d386_423a_8eea_1a919c0565c0.slice: Consumed 7.029s CPU time. 
Aug 5 21:49:25.200059 containerd[1442]: time="2024-08-05T21:49:25.199997570Z" level=info msg="RemoveContainer for \"89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388\" returns successfully" Aug 5 21:49:25.200900 kubelet[2521]: I0805 21:49:25.200841 2521 scope.go:117] "RemoveContainer" containerID="89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388" Aug 5 21:49:25.202562 containerd[1442]: time="2024-08-05T21:49:25.201042495Z" level=error msg="ContainerStatus for \"89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388\": not found" Aug 5 21:49:25.202833 kubelet[2521]: E0805 21:49:25.202717 2521 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388\": not found" containerID="89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388" Aug 5 21:49:25.202867 kubelet[2521]: I0805 21:49:25.202859 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388"} err="failed to get container status \"89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388\": rpc error: code = NotFound desc = an error occurred when try to find container \"89c0464c87d4aebc8231f82b501513e15816426b654e37da504350224c887388\": not found" Aug 5 21:49:25.202889 kubelet[2521]: I0805 21:49:25.202875 2521 scope.go:117] "RemoveContainer" containerID="a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c" Aug 5 21:49:25.203980 containerd[1442]: time="2024-08-05T21:49:25.203951631Z" level=info msg="RemoveContainer for \"a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c\"" Aug 5 21:49:25.219860 
systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdfbfcd5028fdff332d59c081ce1c316359530d02fda0712194dd1028110aec1-rootfs.mount: Deactivated successfully. Aug 5 21:49:25.219960 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bdfbfcd5028fdff332d59c081ce1c316359530d02fda0712194dd1028110aec1-shm.mount: Deactivated successfully. Aug 5 21:49:25.220020 systemd[1]: var-lib-kubelet-pods-2ddacdb0\x2dacab\x2d4a57\x2dae2d\x2d86f91ba20009-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djqfnf.mount: Deactivated successfully. Aug 5 21:49:25.220093 systemd[1]: var-lib-kubelet-pods-b213878d\x2dd386\x2d423a\x2d8eea\x2d1a919c0565c0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq2j7p.mount: Deactivated successfully. Aug 5 21:49:25.220147 systemd[1]: var-lib-kubelet-pods-b213878d\x2dd386\x2d423a\x2d8eea\x2d1a919c0565c0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 5 21:49:25.220197 systemd[1]: var-lib-kubelet-pods-b213878d\x2dd386\x2d423a\x2d8eea\x2d1a919c0565c0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Aug 5 21:49:25.241418 containerd[1442]: time="2024-08-05T21:49:25.241367191Z" level=info msg="RemoveContainer for \"a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c\" returns successfully" Aug 5 21:49:25.241719 kubelet[2521]: I0805 21:49:25.241599 2521 scope.go:117] "RemoveContainer" containerID="11083c938081fb026bbe7ccf9a182a740423690508b608d0c5b2ea262c2f9d32" Aug 5 21:49:25.242574 containerd[1442]: time="2024-08-05T21:49:25.242525597Z" level=info msg="RemoveContainer for \"11083c938081fb026bbe7ccf9a182a740423690508b608d0c5b2ea262c2f9d32\"" Aug 5 21:49:25.245559 containerd[1442]: time="2024-08-05T21:49:25.245522773Z" level=info msg="RemoveContainer for \"11083c938081fb026bbe7ccf9a182a740423690508b608d0c5b2ea262c2f9d32\" returns successfully" Aug 5 21:49:25.245816 kubelet[2521]: I0805 21:49:25.245739 2521 scope.go:117] "RemoveContainer" containerID="2b47b143865d0613627484ebb9460fccfc7d72aff5ae7ce3c7d50f3fd838ab03" Aug 5 21:49:25.246790 containerd[1442]: time="2024-08-05T21:49:25.246763940Z" level=info msg="RemoveContainer for \"2b47b143865d0613627484ebb9460fccfc7d72aff5ae7ce3c7d50f3fd838ab03\"" Aug 5 21:49:25.249106 containerd[1442]: time="2024-08-05T21:49:25.249054032Z" level=info msg="RemoveContainer for \"2b47b143865d0613627484ebb9460fccfc7d72aff5ae7ce3c7d50f3fd838ab03\" returns successfully" Aug 5 21:49:25.249293 kubelet[2521]: I0805 21:49:25.249263 2521 scope.go:117] "RemoveContainer" containerID="c66b5e2b1602d9bbe32974a33e96314c3d98fb7bc3b9a5bdc764823b92f0c39d" Aug 5 21:49:25.250445 containerd[1442]: time="2024-08-05T21:49:25.250337159Z" level=info msg="RemoveContainer for \"c66b5e2b1602d9bbe32974a33e96314c3d98fb7bc3b9a5bdc764823b92f0c39d\"" Aug 5 21:49:25.254297 containerd[1442]: time="2024-08-05T21:49:25.254259820Z" level=info msg="RemoveContainer for \"c66b5e2b1602d9bbe32974a33e96314c3d98fb7bc3b9a5bdc764823b92f0c39d\" returns successfully" Aug 5 21:49:25.254570 kubelet[2521]: I0805 21:49:25.254483 2521 scope.go:117] "RemoveContainer" 
containerID="7a8935b99af834e8be4c7e13b866c29ec7e0b44f4e75479291d0d5d0b4a3b4b9" Aug 5 21:49:25.256187 containerd[1442]: time="2024-08-05T21:49:25.256163390Z" level=info msg="RemoveContainer for \"7a8935b99af834e8be4c7e13b866c29ec7e0b44f4e75479291d0d5d0b4a3b4b9\"" Aug 5 21:49:25.258775 containerd[1442]: time="2024-08-05T21:49:25.258730324Z" level=info msg="RemoveContainer for \"7a8935b99af834e8be4c7e13b866c29ec7e0b44f4e75479291d0d5d0b4a3b4b9\" returns successfully" Aug 5 21:49:25.259091 kubelet[2521]: I0805 21:49:25.259006 2521 scope.go:117] "RemoveContainer" containerID="a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c" Aug 5 21:49:25.259418 containerd[1442]: time="2024-08-05T21:49:25.259301527Z" level=error msg="ContainerStatus for \"a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c\": not found" Aug 5 21:49:25.259910 kubelet[2521]: E0805 21:49:25.259728 2521 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c\": not found" containerID="a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c" Aug 5 21:49:25.259910 kubelet[2521]: I0805 21:49:25.259830 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c"} err="failed to get container status \"a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9b27b0ef326e411a1dead4b762a29b8284d95c1f79d4e8afecc4caead4a3b2c\": not found" Aug 5 21:49:25.259910 kubelet[2521]: I0805 21:49:25.259847 2521 scope.go:117] "RemoveContainer" 
containerID="11083c938081fb026bbe7ccf9a182a740423690508b608d0c5b2ea262c2f9d32" Aug 5 21:49:25.260830 containerd[1442]: time="2024-08-05T21:49:25.260251772Z" level=error msg="ContainerStatus for \"11083c938081fb026bbe7ccf9a182a740423690508b608d0c5b2ea262c2f9d32\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"11083c938081fb026bbe7ccf9a182a740423690508b608d0c5b2ea262c2f9d32\": not found" Aug 5 21:49:25.260830 containerd[1442]: time="2024-08-05T21:49:25.260557614Z" level=error msg="ContainerStatus for \"2b47b143865d0613627484ebb9460fccfc7d72aff5ae7ce3c7d50f3fd838ab03\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b47b143865d0613627484ebb9460fccfc7d72aff5ae7ce3c7d50f3fd838ab03\": not found" Aug 5 21:49:25.260924 kubelet[2521]: E0805 21:49:25.260358 2521 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"11083c938081fb026bbe7ccf9a182a740423690508b608d0c5b2ea262c2f9d32\": not found" containerID="11083c938081fb026bbe7ccf9a182a740423690508b608d0c5b2ea262c2f9d32" Aug 5 21:49:25.260924 kubelet[2521]: I0805 21:49:25.260406 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"11083c938081fb026bbe7ccf9a182a740423690508b608d0c5b2ea262c2f9d32"} err="failed to get container status \"11083c938081fb026bbe7ccf9a182a740423690508b608d0c5b2ea262c2f9d32\": rpc error: code = NotFound desc = an error occurred when try to find container \"11083c938081fb026bbe7ccf9a182a740423690508b608d0c5b2ea262c2f9d32\": not found" Aug 5 21:49:25.260924 kubelet[2521]: I0805 21:49:25.260417 2521 scope.go:117] "RemoveContainer" containerID="2b47b143865d0613627484ebb9460fccfc7d72aff5ae7ce3c7d50f3fd838ab03" Aug 5 21:49:25.260924 kubelet[2521]: E0805 21:49:25.260679 2521 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"2b47b143865d0613627484ebb9460fccfc7d72aff5ae7ce3c7d50f3fd838ab03\": not found" containerID="2b47b143865d0613627484ebb9460fccfc7d72aff5ae7ce3c7d50f3fd838ab03" Aug 5 21:49:25.260924 kubelet[2521]: I0805 21:49:25.260718 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b47b143865d0613627484ebb9460fccfc7d72aff5ae7ce3c7d50f3fd838ab03"} err="failed to get container status \"2b47b143865d0613627484ebb9460fccfc7d72aff5ae7ce3c7d50f3fd838ab03\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b47b143865d0613627484ebb9460fccfc7d72aff5ae7ce3c7d50f3fd838ab03\": not found" Aug 5 21:49:25.260924 kubelet[2521]: I0805 21:49:25.260727 2521 scope.go:117] "RemoveContainer" containerID="c66b5e2b1602d9bbe32974a33e96314c3d98fb7bc3b9a5bdc764823b92f0c39d" Aug 5 21:49:25.261114 containerd[1442]: time="2024-08-05T21:49:25.260862295Z" level=error msg="ContainerStatus for \"c66b5e2b1602d9bbe32974a33e96314c3d98fb7bc3b9a5bdc764823b92f0c39d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c66b5e2b1602d9bbe32974a33e96314c3d98fb7bc3b9a5bdc764823b92f0c39d\": not found" Aug 5 21:49:25.261531 kubelet[2521]: E0805 21:49:25.261394 2521 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c66b5e2b1602d9bbe32974a33e96314c3d98fb7bc3b9a5bdc764823b92f0c39d\": not found" containerID="c66b5e2b1602d9bbe32974a33e96314c3d98fb7bc3b9a5bdc764823b92f0c39d" Aug 5 21:49:25.261531 kubelet[2521]: I0805 21:49:25.261519 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c66b5e2b1602d9bbe32974a33e96314c3d98fb7bc3b9a5bdc764823b92f0c39d"} err="failed to get container status \"c66b5e2b1602d9bbe32974a33e96314c3d98fb7bc3b9a5bdc764823b92f0c39d\": rpc error: code = 
NotFound desc = an error occurred when try to find container \"c66b5e2b1602d9bbe32974a33e96314c3d98fb7bc3b9a5bdc764823b92f0c39d\": not found" Aug 5 21:49:25.261791 kubelet[2521]: I0805 21:49:25.261672 2521 scope.go:117] "RemoveContainer" containerID="7a8935b99af834e8be4c7e13b866c29ec7e0b44f4e75479291d0d5d0b4a3b4b9" Aug 5 21:49:25.261953 containerd[1442]: time="2024-08-05T21:49:25.261824701Z" level=error msg="ContainerStatus for \"7a8935b99af834e8be4c7e13b866c29ec7e0b44f4e75479291d0d5d0b4a3b4b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a8935b99af834e8be4c7e13b866c29ec7e0b44f4e75479291d0d5d0b4a3b4b9\": not found" Aug 5 21:49:25.262007 kubelet[2521]: E0805 21:49:25.261910 2521 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a8935b99af834e8be4c7e13b866c29ec7e0b44f4e75479291d0d5d0b4a3b4b9\": not found" containerID="7a8935b99af834e8be4c7e13b866c29ec7e0b44f4e75479291d0d5d0b4a3b4b9" Aug 5 21:49:25.262007 kubelet[2521]: I0805 21:49:25.261935 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a8935b99af834e8be4c7e13b866c29ec7e0b44f4e75479291d0d5d0b4a3b4b9"} err="failed to get container status \"7a8935b99af834e8be4c7e13b866c29ec7e0b44f4e75479291d0d5d0b4a3b4b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a8935b99af834e8be4c7e13b866c29ec7e0b44f4e75479291d0d5d0b4a3b4b9\": not found" Aug 5 21:49:25.918734 kubelet[2521]: I0805 21:49:25.918667 2521 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2ddacdb0-acab-4a57-ae2d-86f91ba20009" path="/var/lib/kubelet/pods/2ddacdb0-acab-4a57-ae2d-86f91ba20009/volumes" Aug 5 21:49:25.919355 kubelet[2521]: I0805 21:49:25.919058 2521 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b213878d-d386-423a-8eea-1a919c0565c0" 
path="/var/lib/kubelet/pods/b213878d-d386-423a-8eea-1a919c0565c0/volumes" Aug 5 21:49:26.163583 sshd[4173]: pam_unix(sshd:session): session closed for user core Aug 5 21:49:26.173959 systemd[1]: sshd@23-10.0.0.80:22-10.0.0.1:44366.service: Deactivated successfully. Aug 5 21:49:26.176301 systemd[1]: session-24.scope: Deactivated successfully. Aug 5 21:49:26.178134 systemd-logind[1422]: Session 24 logged out. Waiting for processes to exit. Aug 5 21:49:26.189552 systemd[1]: Started sshd@24-10.0.0.80:22-10.0.0.1:44380.service - OpenSSH per-connection server daemon (10.0.0.1:44380). Aug 5 21:49:26.191331 systemd-logind[1422]: Removed session 24. Aug 5 21:49:26.233504 sshd[4333]: Accepted publickey for core from 10.0.0.1 port 44380 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:49:26.234930 sshd[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:49:26.239392 systemd-logind[1422]: New session 25 of user core. Aug 5 21:49:26.248279 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 5 21:49:26.968308 kubelet[2521]: E0805 21:49:26.968278 2521 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 5 21:49:27.021658 sshd[4333]: pam_unix(sshd:session): session closed for user core Aug 5 21:49:27.029599 systemd[1]: sshd@24-10.0.0.80:22-10.0.0.1:44380.service: Deactivated successfully. Aug 5 21:49:27.035053 systemd[1]: session-25.scope: Deactivated successfully. Aug 5 21:49:27.037482 systemd-logind[1422]: Session 25 logged out. Waiting for processes to exit. 
Aug 5 21:49:27.040585 kubelet[2521]: I0805 21:49:27.038728 2521 topology_manager.go:215] "Topology Admit Handler" podUID="afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5" podNamespace="kube-system" podName="cilium-ztqnb" Aug 5 21:49:27.040585 kubelet[2521]: E0805 21:49:27.038786 2521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b213878d-d386-423a-8eea-1a919c0565c0" containerName="mount-cgroup" Aug 5 21:49:27.040585 kubelet[2521]: E0805 21:49:27.038797 2521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2ddacdb0-acab-4a57-ae2d-86f91ba20009" containerName="cilium-operator" Aug 5 21:49:27.040585 kubelet[2521]: E0805 21:49:27.038804 2521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b213878d-d386-423a-8eea-1a919c0565c0" containerName="clean-cilium-state" Aug 5 21:49:27.040585 kubelet[2521]: E0805 21:49:27.038811 2521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b213878d-d386-423a-8eea-1a919c0565c0" containerName="cilium-agent" Aug 5 21:49:27.040585 kubelet[2521]: E0805 21:49:27.038818 2521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b213878d-d386-423a-8eea-1a919c0565c0" containerName="apply-sysctl-overwrites" Aug 5 21:49:27.040585 kubelet[2521]: E0805 21:49:27.038825 2521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b213878d-d386-423a-8eea-1a919c0565c0" containerName="mount-bpf-fs" Aug 5 21:49:27.040585 kubelet[2521]: I0805 21:49:27.038846 2521 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ddacdb0-acab-4a57-ae2d-86f91ba20009" containerName="cilium-operator" Aug 5 21:49:27.040585 kubelet[2521]: I0805 21:49:27.038852 2521 memory_manager.go:354] "RemoveStaleState removing state" podUID="b213878d-d386-423a-8eea-1a919c0565c0" containerName="cilium-agent" Aug 5 21:49:27.044343 systemd[1]: Started sshd@25-10.0.0.80:22-10.0.0.1:44390.service - OpenSSH per-connection server daemon (10.0.0.1:44390). 
Aug 5 21:49:27.047010 systemd-logind[1422]: Removed session 25. Aug 5 21:49:27.060360 systemd[1]: Created slice kubepods-burstable-podafcaf2eb_d47e_4009_b3f0_b8a62e5b99b5.slice - libcontainer container kubepods-burstable-podafcaf2eb_d47e_4009_b3f0_b8a62e5b99b5.slice. Aug 5 21:49:27.082873 sshd[4346]: Accepted publickey for core from 10.0.0.1 port 44390 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:49:27.084821 sshd[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:49:27.088901 systemd-logind[1422]: New session 26 of user core. Aug 5 21:49:27.096238 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 5 21:49:27.151713 sshd[4346]: pam_unix(sshd:session): session closed for user core Aug 5 21:49:27.163215 systemd[1]: sshd@25-10.0.0.80:22-10.0.0.1:44390.service: Deactivated successfully. Aug 5 21:49:27.164788 systemd[1]: session-26.scope: Deactivated successfully. Aug 5 21:49:27.166967 systemd-logind[1422]: Session 26 logged out. Waiting for processes to exit. 
Aug 5 21:49:27.171478 kubelet[2521]: I0805 21:49:27.171441 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5-host-proc-sys-kernel\") pod \"cilium-ztqnb\" (UID: \"afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5\") " pod="kube-system/cilium-ztqnb"
Aug 5 21:49:27.171548 kubelet[2521]: I0805 21:49:27.171488 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5-cilium-run\") pod \"cilium-ztqnb\" (UID: \"afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5\") " pod="kube-system/cilium-ztqnb"
Aug 5 21:49:27.171548 kubelet[2521]: I0805 21:49:27.171508 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5-lib-modules\") pod \"cilium-ztqnb\" (UID: \"afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5\") " pod="kube-system/cilium-ztqnb"
Aug 5 21:49:27.171548 kubelet[2521]: I0805 21:49:27.171529 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5-cilium-config-path\") pod \"cilium-ztqnb\" (UID: \"afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5\") " pod="kube-system/cilium-ztqnb"
Aug 5 21:49:27.171548 kubelet[2521]: I0805 21:49:27.171548 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5-cilium-ipsec-secrets\") pod \"cilium-ztqnb\" (UID: \"afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5\") " pod="kube-system/cilium-ztqnb"
Aug 5 21:49:27.171662 kubelet[2521]: I0805 21:49:27.171569 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5-hubble-tls\") pod \"cilium-ztqnb\" (UID: \"afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5\") " pod="kube-system/cilium-ztqnb"
Aug 5 21:49:27.171662 kubelet[2521]: I0805 21:49:27.171592 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5-cilium-cgroup\") pod \"cilium-ztqnb\" (UID: \"afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5\") " pod="kube-system/cilium-ztqnb"
Aug 5 21:49:27.171662 kubelet[2521]: I0805 21:49:27.171612 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5-cni-path\") pod \"cilium-ztqnb\" (UID: \"afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5\") " pod="kube-system/cilium-ztqnb"
Aug 5 21:49:27.171662 kubelet[2521]: I0805 21:49:27.171629 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5-etc-cni-netd\") pod \"cilium-ztqnb\" (UID: \"afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5\") " pod="kube-system/cilium-ztqnb"
Aug 5 21:49:27.171662 kubelet[2521]: I0805 21:49:27.171657 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5-host-proc-sys-net\") pod \"cilium-ztqnb\" (UID: \"afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5\") " pod="kube-system/cilium-ztqnb"
Aug 5 21:49:27.171781 kubelet[2521]: I0805 21:49:27.171677 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5-bpf-maps\") pod \"cilium-ztqnb\" (UID: \"afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5\") " pod="kube-system/cilium-ztqnb"
Aug 5 21:49:27.171781 kubelet[2521]: I0805 21:49:27.171698 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5-hostproc\") pod \"cilium-ztqnb\" (UID: \"afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5\") " pod="kube-system/cilium-ztqnb"
Aug 5 21:49:27.171781 kubelet[2521]: I0805 21:49:27.171720 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5-clustermesh-secrets\") pod \"cilium-ztqnb\" (UID: \"afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5\") " pod="kube-system/cilium-ztqnb"
Aug 5 21:49:27.171781 kubelet[2521]: I0805 21:49:27.171739 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5-xtables-lock\") pod \"cilium-ztqnb\" (UID: \"afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5\") " pod="kube-system/cilium-ztqnb"
Aug 5 21:49:27.171781 kubelet[2521]: I0805 21:49:27.171759 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4hwz\" (UniqueName: \"kubernetes.io/projected/afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5-kube-api-access-p4hwz\") pod \"cilium-ztqnb\" (UID: \"afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5\") " pod="kube-system/cilium-ztqnb"
Aug 5 21:49:27.180293 systemd[1]: Started sshd@26-10.0.0.80:22-10.0.0.1:44402.service - OpenSSH per-connection server daemon (10.0.0.1:44402).
Aug 5 21:49:27.182478 systemd-logind[1422]: Removed session 26.
Aug 5 21:49:27.221825 sshd[4354]: Accepted publickey for core from 10.0.0.1 port 44402 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:49:27.223269 sshd[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:49:27.229266 systemd-logind[1422]: New session 27 of user core.
Aug 5 21:49:27.240269 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 5 21:49:27.365016 kubelet[2521]: E0805 21:49:27.364977 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:49:27.365547 containerd[1442]: time="2024-08-05T21:49:27.365455084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ztqnb,Uid:afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5,Namespace:kube-system,Attempt:0,}"
Aug 5 21:49:27.393897 containerd[1442]: time="2024-08-05T21:49:27.393777072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 21:49:27.393897 containerd[1442]: time="2024-08-05T21:49:27.393856673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:49:27.394168 containerd[1442]: time="2024-08-05T21:49:27.393969914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 21:49:27.394168 containerd[1442]: time="2024-08-05T21:49:27.393992954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:49:27.412283 systemd[1]: Started cri-containerd-e3cdf8d70f0cf42ce2a24492adc8fb097cb4aa849b087aab192fc42c6956f643.scope - libcontainer container e3cdf8d70f0cf42ce2a24492adc8fb097cb4aa849b087aab192fc42c6956f643.
Aug 5 21:49:27.429231 containerd[1442]: time="2024-08-05T21:49:27.429182827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ztqnb,Uid:afcaf2eb-d47e-4009-b3f0-b8a62e5b99b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3cdf8d70f0cf42ce2a24492adc8fb097cb4aa849b087aab192fc42c6956f643\""
Aug 5 21:49:27.430385 kubelet[2521]: E0805 21:49:27.430185 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:49:27.433112 containerd[1442]: time="2024-08-05T21:49:27.432564610Z" level=info msg="CreateContainer within sandbox \"e3cdf8d70f0cf42ce2a24492adc8fb097cb4aa849b087aab192fc42c6956f643\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 5 21:49:27.445385 containerd[1442]: time="2024-08-05T21:49:27.445328775Z" level=info msg="CreateContainer within sandbox \"e3cdf8d70f0cf42ce2a24492adc8fb097cb4aa849b087aab192fc42c6956f643\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e16918fc7aacccfd23c02ea4250d78026efff668c3b88eb65673994ea9006d15\""
Aug 5 21:49:27.447530 containerd[1442]: time="2024-08-05T21:49:27.446506222Z" level=info msg="StartContainer for \"e16918fc7aacccfd23c02ea4250d78026efff668c3b88eb65673994ea9006d15\""
Aug 5 21:49:27.474250 systemd[1]: Started cri-containerd-e16918fc7aacccfd23c02ea4250d78026efff668c3b88eb65673994ea9006d15.scope - libcontainer container e16918fc7aacccfd23c02ea4250d78026efff668c3b88eb65673994ea9006d15.
Aug 5 21:49:27.499898 containerd[1442]: time="2024-08-05T21:49:27.499821016Z" level=info msg="StartContainer for \"e16918fc7aacccfd23c02ea4250d78026efff668c3b88eb65673994ea9006d15\" returns successfully"
Aug 5 21:49:27.535702 systemd[1]: cri-containerd-e16918fc7aacccfd23c02ea4250d78026efff668c3b88eb65673994ea9006d15.scope: Deactivated successfully.
Aug 5 21:49:27.577835 containerd[1442]: time="2024-08-05T21:49:27.577770254Z" level=info msg="shim disconnected" id=e16918fc7aacccfd23c02ea4250d78026efff668c3b88eb65673994ea9006d15 namespace=k8s.io
Aug 5 21:49:27.577835 containerd[1442]: time="2024-08-05T21:49:27.577829014Z" level=warning msg="cleaning up after shim disconnected" id=e16918fc7aacccfd23c02ea4250d78026efff668c3b88eb65673994ea9006d15 namespace=k8s.io
Aug 5 21:49:27.577835 containerd[1442]: time="2024-08-05T21:49:27.577838374Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 21:49:28.168886 kubelet[2521]: E0805 21:49:28.168830 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:49:28.175425 containerd[1442]: time="2024-08-05T21:49:28.175309166Z" level=info msg="CreateContainer within sandbox \"e3cdf8d70f0cf42ce2a24492adc8fb097cb4aa849b087aab192fc42c6956f643\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 5 21:49:28.324238 containerd[1442]: time="2024-08-05T21:49:28.324057005Z" level=info msg="CreateContainer within sandbox \"e3cdf8d70f0cf42ce2a24492adc8fb097cb4aa849b087aab192fc42c6956f643\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fb5a58671e4f2acb84737784ecde36f7c9c6f2888c3e9b1a7c8f20d93503eba6\""
Aug 5 21:49:28.325019 containerd[1442]: time="2024-08-05T21:49:28.324634129Z" level=info msg="StartContainer for \"fb5a58671e4f2acb84737784ecde36f7c9c6f2888c3e9b1a7c8f20d93503eba6\""
Aug 5 21:49:28.354259 systemd[1]: Started cri-containerd-fb5a58671e4f2acb84737784ecde36f7c9c6f2888c3e9b1a7c8f20d93503eba6.scope - libcontainer container fb5a58671e4f2acb84737784ecde36f7c9c6f2888c3e9b1a7c8f20d93503eba6.
Aug 5 21:49:28.377332 containerd[1442]: time="2024-08-05T21:49:28.377290111Z" level=info msg="StartContainer for \"fb5a58671e4f2acb84737784ecde36f7c9c6f2888c3e9b1a7c8f20d93503eba6\" returns successfully"
Aug 5 21:49:28.389682 systemd[1]: cri-containerd-fb5a58671e4f2acb84737784ecde36f7c9c6f2888c3e9b1a7c8f20d93503eba6.scope: Deactivated successfully.
Aug 5 21:49:28.406480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb5a58671e4f2acb84737784ecde36f7c9c6f2888c3e9b1a7c8f20d93503eba6-rootfs.mount: Deactivated successfully.
Aug 5 21:49:28.414284 containerd[1442]: time="2024-08-05T21:49:28.414221698Z" level=info msg="shim disconnected" id=fb5a58671e4f2acb84737784ecde36f7c9c6f2888c3e9b1a7c8f20d93503eba6 namespace=k8s.io
Aug 5 21:49:28.414488 containerd[1442]: time="2024-08-05T21:49:28.414321979Z" level=warning msg="cleaning up after shim disconnected" id=fb5a58671e4f2acb84737784ecde36f7c9c6f2888c3e9b1a7c8f20d93503eba6 namespace=k8s.io
Aug 5 21:49:28.414488 containerd[1442]: time="2024-08-05T21:49:28.414332819Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 21:49:29.174613 kubelet[2521]: E0805 21:49:29.174574 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:49:29.177614 containerd[1442]: time="2024-08-05T21:49:29.177490576Z" level=info msg="CreateContainer within sandbox \"e3cdf8d70f0cf42ce2a24492adc8fb097cb4aa849b087aab192fc42c6956f643\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 5 21:49:29.200483 containerd[1442]: time="2024-08-05T21:49:29.200433916Z" level=info msg="CreateContainer within sandbox \"e3cdf8d70f0cf42ce2a24492adc8fb097cb4aa849b087aab192fc42c6956f643\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f968cfa6fdfd7c4f2ac8c092ea00ad77ec6593eb476ac84973ddbc11ca1b80aa\""
Aug 5 21:49:29.201095 containerd[1442]: time="2024-08-05T21:49:29.201053801Z" level=info msg="StartContainer for \"f968cfa6fdfd7c4f2ac8c092ea00ad77ec6593eb476ac84973ddbc11ca1b80aa\""
Aug 5 21:49:29.241320 systemd[1]: Started cri-containerd-f968cfa6fdfd7c4f2ac8c092ea00ad77ec6593eb476ac84973ddbc11ca1b80aa.scope - libcontainer container f968cfa6fdfd7c4f2ac8c092ea00ad77ec6593eb476ac84973ddbc11ca1b80aa.
Aug 5 21:49:29.268436 systemd[1]: cri-containerd-f968cfa6fdfd7c4f2ac8c092ea00ad77ec6593eb476ac84973ddbc11ca1b80aa.scope: Deactivated successfully.
Aug 5 21:49:29.270279 containerd[1442]: time="2024-08-05T21:49:29.270234423Z" level=info msg="StartContainer for \"f968cfa6fdfd7c4f2ac8c092ea00ad77ec6593eb476ac84973ddbc11ca1b80aa\" returns successfully"
Aug 5 21:49:29.289588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f968cfa6fdfd7c4f2ac8c092ea00ad77ec6593eb476ac84973ddbc11ca1b80aa-rootfs.mount: Deactivated successfully.
Aug 5 21:49:29.294384 containerd[1442]: time="2024-08-05T21:49:29.294335492Z" level=info msg="shim disconnected" id=f968cfa6fdfd7c4f2ac8c092ea00ad77ec6593eb476ac84973ddbc11ca1b80aa namespace=k8s.io
Aug 5 21:49:29.294384 containerd[1442]: time="2024-08-05T21:49:29.294384612Z" level=warning msg="cleaning up after shim disconnected" id=f968cfa6fdfd7c4f2ac8c092ea00ad77ec6593eb476ac84973ddbc11ca1b80aa namespace=k8s.io
Aug 5 21:49:29.294522 containerd[1442]: time="2024-08-05T21:49:29.294392852Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 21:49:30.180872 kubelet[2521]: E0805 21:49:30.180819 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:49:30.184169 containerd[1442]: time="2024-08-05T21:49:30.183709491Z" level=info msg="CreateContainer within sandbox \"e3cdf8d70f0cf42ce2a24492adc8fb097cb4aa849b087aab192fc42c6956f643\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 5 21:49:30.214427 containerd[1442]: time="2024-08-05T21:49:30.214377589Z" level=info msg="CreateContainer within sandbox \"e3cdf8d70f0cf42ce2a24492adc8fb097cb4aa849b087aab192fc42c6956f643\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9ad588573ec50e40c68fdf5c260cecb3986f76e1b7064fb8d2d81ae494ab7193\""
Aug 5 21:49:30.214995 containerd[1442]: time="2024-08-05T21:49:30.214910473Z" level=info msg="StartContainer for \"9ad588573ec50e40c68fdf5c260cecb3986f76e1b7064fb8d2d81ae494ab7193\""
Aug 5 21:49:30.249322 systemd[1]: Started cri-containerd-9ad588573ec50e40c68fdf5c260cecb3986f76e1b7064fb8d2d81ae494ab7193.scope - libcontainer container 9ad588573ec50e40c68fdf5c260cecb3986f76e1b7064fb8d2d81ae494ab7193.
Aug 5 21:49:30.271447 systemd[1]: cri-containerd-9ad588573ec50e40c68fdf5c260cecb3986f76e1b7064fb8d2d81ae494ab7193.scope: Deactivated successfully.
Aug 5 21:49:30.274801 containerd[1442]: time="2024-08-05T21:49:30.274137092Z" level=info msg="StartContainer for \"9ad588573ec50e40c68fdf5c260cecb3986f76e1b7064fb8d2d81ae494ab7193\" returns successfully"
Aug 5 21:49:30.292756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ad588573ec50e40c68fdf5c260cecb3986f76e1b7064fb8d2d81ae494ab7193-rootfs.mount: Deactivated successfully.
Aug 5 21:49:30.298370 containerd[1442]: time="2024-08-05T21:49:30.298301135Z" level=info msg="shim disconnected" id=9ad588573ec50e40c68fdf5c260cecb3986f76e1b7064fb8d2d81ae494ab7193 namespace=k8s.io
Aug 5 21:49:30.298370 containerd[1442]: time="2024-08-05T21:49:30.298362415Z" level=warning msg="cleaning up after shim disconnected" id=9ad588573ec50e40c68fdf5c260cecb3986f76e1b7064fb8d2d81ae494ab7193 namespace=k8s.io
Aug 5 21:49:30.298370 containerd[1442]: time="2024-08-05T21:49:30.298370976Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 21:49:30.917543 kubelet[2521]: E0805 21:49:30.917513 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:49:31.184264 kubelet[2521]: E0805 21:49:31.184162 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:49:31.186408 containerd[1442]: time="2024-08-05T21:49:31.186373671Z" level=info msg="CreateContainer within sandbox \"e3cdf8d70f0cf42ce2a24492adc8fb097cb4aa849b087aab192fc42c6956f643\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 5 21:49:31.207594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4184553207.mount: Deactivated successfully.
Aug 5 21:49:31.210138 containerd[1442]: time="2024-08-05T21:49:31.210009763Z" level=info msg="CreateContainer within sandbox \"e3cdf8d70f0cf42ce2a24492adc8fb097cb4aa849b087aab192fc42c6956f643\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4061c0c15bdc9d93555a6b44c1c85a8d4d96cd637bb5a81e9d4fc9217a4bdac8\""
Aug 5 21:49:31.211253 containerd[1442]: time="2024-08-05T21:49:31.211224414Z" level=info msg="StartContainer for \"4061c0c15bdc9d93555a6b44c1c85a8d4d96cd637bb5a81e9d4fc9217a4bdac8\""
Aug 5 21:49:31.244784 systemd[1]: Started cri-containerd-4061c0c15bdc9d93555a6b44c1c85a8d4d96cd637bb5a81e9d4fc9217a4bdac8.scope - libcontainer container 4061c0c15bdc9d93555a6b44c1c85a8d4d96cd637bb5a81e9d4fc9217a4bdac8.
Aug 5 21:49:31.278025 containerd[1442]: time="2024-08-05T21:49:31.277909212Z" level=info msg="StartContainer for \"4061c0c15bdc9d93555a6b44c1c85a8d4d96cd637bb5a81e9d4fc9217a4bdac8\" returns successfully"
Aug 5 21:49:31.629176 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Aug 5 21:49:32.189949 kubelet[2521]: E0805 21:49:32.189907 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:49:32.206471 kubelet[2521]: I0805 21:49:32.206432 2521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-ztqnb" podStartSLOduration=5.206380492 podStartE2EDuration="5.206380492s" podCreationTimestamp="2024-08-05 21:49:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:49:32.206316132 +0000 UTC m=+90.378705481" watchObservedRunningTime="2024-08-05 21:49:32.206380492 +0000 UTC m=+90.378769841"
Aug 5 21:49:33.368188 kubelet[2521]: E0805 21:49:33.367039 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:49:33.544642 systemd[1]: run-containerd-runc-k8s.io-4061c0c15bdc9d93555a6b44c1c85a8d4d96cd637bb5a81e9d4fc9217a4bdac8-runc.9oiQzs.mount: Deactivated successfully.
Aug 5 21:49:34.524904 systemd-networkd[1378]: lxc_health: Link UP
Aug 5 21:49:34.532758 systemd-networkd[1378]: lxc_health: Gained carrier
Aug 5 21:49:34.917129 kubelet[2521]: E0805 21:49:34.917042 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:49:35.367128 kubelet[2521]: E0805 21:49:35.367000 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:49:35.669393 systemd[1]: run-containerd-runc-k8s.io-4061c0c15bdc9d93555a6b44c1c85a8d4d96cd637bb5a81e9d4fc9217a4bdac8-runc.fZcatg.mount: Deactivated successfully.
Aug 5 21:49:36.202434 kubelet[2521]: E0805 21:49:36.202169 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:49:36.511258 systemd-networkd[1378]: lxc_health: Gained IPv6LL
Aug 5 21:49:39.991027 sshd[4354]: pam_unix(sshd:session): session closed for user core
Aug 5 21:49:39.993934 systemd[1]: sshd@26-10.0.0.80:22-10.0.0.1:44402.service: Deactivated successfully.
Aug 5 21:49:39.995765 systemd[1]: session-27.scope: Deactivated successfully.
Aug 5 21:49:39.997568 systemd-logind[1422]: Session 27 logged out. Waiting for processes to exit.
Aug 5 21:49:39.998703 systemd-logind[1422]: Removed session 27.