Jun 25 18:25:50.911009 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jun 25 18:25:50.911030 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Tue Jun 25 17:19:03 -00 2024 Jun 25 18:25:50.911039 kernel: KASLR enabled Jun 25 18:25:50.911045 kernel: efi: EFI v2.7 by EDK II Jun 25 18:25:50.911051 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb8fd018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Jun 25 18:25:50.911056 kernel: random: crng init done Jun 25 18:25:50.911063 kernel: ACPI: Early table checksum verification disabled Jun 25 18:25:50.911069 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Jun 25 18:25:50.911075 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Jun 25 18:25:50.911083 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:25:50.911089 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:25:50.911095 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:25:50.911101 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:25:50.911107 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:25:50.911114 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:25:50.911122 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:25:50.911128 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:25:50.911140 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:25:50.911147 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jun 25 18:25:50.911154 kernel: NUMA: Failed to initialise from firmware Jun 25 18:25:50.911160 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jun 25 18:25:50.911166 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Jun 25 18:25:50.911173 kernel: Zone ranges: Jun 25 18:25:50.911179 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jun 25 18:25:50.911185 kernel: DMA32 empty Jun 25 18:25:50.911193 kernel: Normal empty Jun 25 18:25:50.911199 kernel: Movable zone start for each node Jun 25 18:25:50.911206 kernel: Early memory node ranges Jun 25 18:25:50.911212 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Jun 25 18:25:50.911219 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jun 25 18:25:50.911225 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jun 25 18:25:50.911231 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jun 25 18:25:50.911237 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jun 25 18:25:50.911244 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jun 25 18:25:50.911250 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jun 25 18:25:50.911256 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jun 25 18:25:50.911262 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jun 25 18:25:50.911270 kernel: psci: probing for conduit method from ACPI. Jun 25 18:25:50.911277 kernel: psci: PSCIv1.1 detected in firmware. 
Jun 25 18:25:50.911283 kernel: psci: Using standard PSCI v0.2 function IDs Jun 25 18:25:50.911292 kernel: psci: Trusted OS migration not required Jun 25 18:25:50.911299 kernel: psci: SMC Calling Convention v1.1 Jun 25 18:25:50.911305 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jun 25 18:25:50.911313 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Jun 25 18:25:50.911320 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Jun 25 18:25:50.911327 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jun 25 18:25:50.911334 kernel: Detected PIPT I-cache on CPU0 Jun 25 18:25:50.911340 kernel: CPU features: detected: GIC system register CPU interface Jun 25 18:25:50.911347 kernel: CPU features: detected: Hardware dirty bit management Jun 25 18:25:50.911354 kernel: CPU features: detected: Spectre-v4 Jun 25 18:25:50.911360 kernel: CPU features: detected: Spectre-BHB Jun 25 18:25:50.911367 kernel: CPU features: kernel page table isolation forced ON by KASLR Jun 25 18:25:50.911374 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jun 25 18:25:50.911382 kernel: CPU features: detected: ARM erratum 1418040 Jun 25 18:25:50.911388 kernel: alternatives: applying boot alternatives Jun 25 18:25:50.911396 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6069a8408a0ca7e7bc40a0bde7fe3ef89df2f98c4bdd2e7e7f9f8f3f8ad207f Jun 25 18:25:50.911403 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 18:25:50.911410 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 18:25:50.911416 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 18:25:50.911423 kernel: Fallback order for Node 0: 0 Jun 25 18:25:50.911430 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jun 25 18:25:50.911436 kernel: Policy zone: DMA Jun 25 18:25:50.911443 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 18:25:50.911450 kernel: software IO TLB: area num 4. Jun 25 18:25:50.911458 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jun 25 18:25:50.911465 kernel: Memory: 2386852K/2572288K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 185436K reserved, 0K cma-reserved) Jun 25 18:25:50.911472 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jun 25 18:25:50.911479 kernel: trace event string verifier disabled Jun 25 18:25:50.911491 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 18:25:50.911499 kernel: rcu: RCU event tracing is enabled. Jun 25 18:25:50.911506 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jun 25 18:25:50.911514 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 18:25:50.911521 kernel: Tracing variant of Tasks RCU enabled. Jun 25 18:25:50.911528 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jun 25 18:25:50.911535 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jun 25 18:25:50.911542 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 25 18:25:50.911551 kernel: GICv3: 256 SPIs implemented Jun 25 18:25:50.911558 kernel: GICv3: 0 Extended SPIs implemented Jun 25 18:25:50.911565 kernel: Root IRQ handler: gic_handle_irq Jun 25 18:25:50.911572 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jun 25 18:25:50.911579 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jun 25 18:25:50.911585 kernel: ITS [mem 0x08080000-0x0809ffff] Jun 25 18:25:50.911592 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1) Jun 25 18:25:50.911599 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1) Jun 25 18:25:50.911606 kernel: GICv3: using LPI property table @0x00000000400f0000 Jun 25 18:25:50.911613 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jun 25 18:25:50.911619 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 18:25:50.911627 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 18:25:50.911634 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jun 25 18:25:50.911641 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jun 25 18:25:50.911648 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jun 25 18:25:50.911655 kernel: arm-pv: using stolen time PV Jun 25 18:25:50.911662 kernel: Console: colour dummy device 80x25 Jun 25 18:25:50.911669 kernel: ACPI: Core revision 20230628 Jun 25 18:25:50.911676 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jun 25 18:25:50.911683 kernel: pid_max: default: 32768 minimum: 301 Jun 25 18:25:50.911690 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jun 25 18:25:50.911698 kernel: SELinux: Initializing. Jun 25 18:25:50.911705 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 18:25:50.911712 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 18:25:50.911719 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:25:50.911726 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:25:50.911733 kernel: rcu: Hierarchical SRCU implementation. Jun 25 18:25:50.911740 kernel: rcu: Max phase no-delay instances is 400. Jun 25 18:25:50.911746 kernel: Platform MSI: ITS@0x8080000 domain created Jun 25 18:25:50.911753 kernel: PCI/MSI: ITS@0x8080000 domain created Jun 25 18:25:50.911761 kernel: Remapping and enabling EFI services. Jun 25 18:25:50.911768 kernel: smp: Bringing up secondary CPUs ... 
Jun 25 18:25:50.911775 kernel: Detected PIPT I-cache on CPU1 Jun 25 18:25:50.911782 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jun 25 18:25:50.911789 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jun 25 18:25:50.911796 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 18:25:50.911802 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jun 25 18:25:50.911809 kernel: Detected PIPT I-cache on CPU2 Jun 25 18:25:50.911816 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jun 25 18:25:50.911824 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jun 25 18:25:50.911832 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 18:25:50.911839 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jun 25 18:25:50.911851 kernel: Detected PIPT I-cache on CPU3 Jun 25 18:25:50.911859 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jun 25 18:25:50.911867 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jun 25 18:25:50.911874 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 18:25:50.911911 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jun 25 18:25:50.911920 kernel: smp: Brought up 1 node, 4 CPUs Jun 25 18:25:50.911927 kernel: SMP: Total of 4 processors activated. Jun 25 18:25:50.911937 kernel: CPU features: detected: 32-bit EL0 Support Jun 25 18:25:50.911944 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jun 25 18:25:50.911952 kernel: CPU features: detected: Common not Private translations Jun 25 18:25:50.911959 kernel: CPU features: detected: CRC32 instructions Jun 25 18:25:50.911966 kernel: CPU features: detected: Enhanced Virtualization Traps Jun 25 18:25:50.911974 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jun 25 18:25:50.911981 kernel: CPU features: detected: LSE atomic instructions Jun 25 18:25:50.911988 kernel: CPU features: detected: Privileged Access Never Jun 25 18:25:50.911997 kernel: CPU features: detected: RAS Extension Support Jun 25 18:25:50.912005 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jun 25 18:25:50.912012 kernel: CPU: All CPU(s) started at EL1 Jun 25 18:25:50.912019 kernel: alternatives: applying system-wide alternatives Jun 25 18:25:50.912027 kernel: devtmpfs: initialized Jun 25 18:25:50.912034 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 18:25:50.912042 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jun 25 18:25:50.912049 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 18:25:50.912056 kernel: SMBIOS 3.0.0 present. 
Jun 25 18:25:50.912065 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jun 25 18:25:50.912072 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 25 18:25:50.912079 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jun 25 18:25:50.912086 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jun 25 18:25:50.912094 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jun 25 18:25:50.912101 kernel: audit: initializing netlink subsys (disabled)
Jun 25 18:25:50.912108 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jun 25 18:25:50.912116 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 25 18:25:50.912123 kernel: cpuidle: using governor menu
Jun 25 18:25:50.912132 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jun 25 18:25:50.912142 kernel: ASID allocator initialised with 32768 entries
Jun 25 18:25:50.912149 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 25 18:25:50.912157 kernel: Serial: AMBA PL011 UART driver
Jun 25 18:25:50.912164 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jun 25 18:25:50.912171 kernel: Modules: 0 pages in range for non-PLT usage
Jun 25 18:25:50.912179 kernel: Modules: 509120 pages in range for PLT usage
Jun 25 18:25:50.912186 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 25 18:25:50.912193 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jun 25 18:25:50.912202 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jun 25 18:25:50.912209 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jun 25 18:25:50.912217 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 25 18:25:50.912224 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jun 25 18:25:50.912231 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jun 25 18:25:50.912238 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jun 25 18:25:50.912246 kernel: ACPI: Added _OSI(Module Device)
Jun 25 18:25:50.912253 kernel: ACPI: Added _OSI(Processor Device)
Jun 25 18:25:50.912260 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jun 25 18:25:50.912269 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 25 18:25:50.912276 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 25 18:25:50.912283 kernel: ACPI: Interpreter enabled
Jun 25 18:25:50.912290 kernel: ACPI: Using GIC for interrupt routing
Jun 25 18:25:50.912298 kernel: ACPI: MCFG table detected, 1 entries
Jun 25 18:25:50.912305 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jun 25 18:25:50.912312 kernel: printk: console [ttyAMA0] enabled
Jun 25 18:25:50.912320 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 25 18:25:50.912454 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jun 25 18:25:50.912538 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jun 25 18:25:50.912604 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jun 25 18:25:50.912666 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jun 25 18:25:50.912728 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jun 25 18:25:50.912738 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jun 25 18:25:50.912745 kernel: PCI host bridge to bus 0000:00
Jun 25 18:25:50.912812 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jun 25 18:25:50.912876 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jun 25 18:25:50.912962 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jun 25 18:25:50.913023 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 25 18:25:50.913105 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jun 25 18:25:50.913181 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jun 25 18:25:50.913249 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jun 25 18:25:50.913321 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jun 25 18:25:50.913388 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jun 25 18:25:50.913454 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jun 25 18:25:50.913530 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jun 25 18:25:50.913599 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jun 25 18:25:50.913671 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jun 25 18:25:50.913730 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jun 25 18:25:50.913794 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jun 25 18:25:50.913804 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jun 25 18:25:50.913812 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jun 25 18:25:50.913819 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jun 25 18:25:50.913827 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jun 25 18:25:50.913835 kernel: iommu: Default domain type: Translated
Jun 25 18:25:50.913842 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jun 25 18:25:50.913849 kernel: efivars: Registered efivars operations
Jun 25 18:25:50.913857 kernel: vgaarb: loaded
Jun 25 18:25:50.913866 kernel: clocksource: Switched to clocksource arch_sys_counter
Jun 25 18:25:50.913874 kernel: VFS: Disk quotas dquot_6.6.0
Jun 25 18:25:50.913905 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 25 18:25:50.913915 kernel: pnp: PnP ACPI init
Jun 25 18:25:50.914002 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jun 25 18:25:50.914013 kernel: pnp: PnP ACPI: found 1 devices
Jun 25 18:25:50.914021 kernel: NET: Registered PF_INET protocol family
Jun 25 18:25:50.914028 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 25 18:25:50.914039 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jun 25 18:25:50.914046 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 25 18:25:50.914054 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 25 18:25:50.914062 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jun 25 18:25:50.914069 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jun 25 18:25:50.914077 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 25 18:25:50.914085 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 25 18:25:50.914093 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 25 18:25:50.914100 kernel: PCI: CLS 0 bytes, default 64
Jun 25 18:25:50.914109 kernel: kvm [1]: HYP mode not available
Jun 25 18:25:50.914117 kernel: Initialise system trusted keyrings
Jun 25 18:25:50.914124 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jun 25 18:25:50.914132 kernel: Key type asymmetric registered
Jun 25 18:25:50.914144 kernel: Asymmetric key parser 'x509' registered
Jun 25 18:25:50.914152 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jun 25 18:25:50.914159 kernel: io scheduler mq-deadline registered
Jun 25 18:25:50.914167 kernel: io scheduler kyber registered
Jun 25 18:25:50.914174 kernel: io scheduler bfq registered
Jun 25 18:25:50.914183 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jun 25 18:25:50.914190 kernel: ACPI: button: Power Button [PWRB]
Jun 25 18:25:50.914198 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jun 25 18:25:50.914269 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jun 25 18:25:50.914279 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 25 18:25:50.914287 kernel: thunder_xcv, ver 1.0
Jun 25 18:25:50.914294 kernel: thunder_bgx, ver 1.0
Jun 25 18:25:50.914302 kernel: nicpf, ver 1.0
Jun 25 18:25:50.914309 kernel: nicvf, ver 1.0
Jun 25 18:25:50.914385 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jun 25 18:25:50.914449 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-06-25T18:25:50 UTC (1719339950)
Jun 25 18:25:50.914459 kernel: hid: raw HID events driver (C) Jiri Kosina
Jun 25 18:25:50.914467 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jun 25 18:25:50.914475 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jun 25 18:25:50.914482 kernel: watchdog: Hard watchdog permanently disabled
Jun 25 18:25:50.914496 kernel: NET: Registered PF_INET6 protocol family
Jun 25 18:25:50.914504 kernel: Segment Routing with IPv6
Jun 25 18:25:50.914514 kernel: In-situ OAM (IOAM) with IPv6
Jun 25 18:25:50.914521 kernel: NET: Registered PF_PACKET protocol family
Jun 25 18:25:50.914529 kernel: Key type dns_resolver registered
Jun 25 18:25:50.914536 kernel: registered taskstats version 1
Jun 25 18:25:50.914543 kernel: Loading compiled-in X.509 certificates
Jun 25 18:25:50.914551 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 751918e575d02f96b0daadd44b8f442a8c39ecd3'
Jun 25 18:25:50.914558 kernel: Key type .fscrypt registered
Jun 25 18:25:50.914565 kernel: Key type fscrypt-provisioning registered
Jun 25 18:25:50.914572 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 25 18:25:50.914581 kernel: ima: Allocated hash algorithm: sha1 Jun 25 18:25:50.914588 kernel: ima: No architecture policies found Jun 25 18:25:50.914596 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 25 18:25:50.914603 kernel: clk: Disabling unused clocks Jun 25 18:25:50.914610 kernel: Freeing unused kernel memory: 39040K Jun 25 18:25:50.914617 kernel: Run /init as init process Jun 25 18:25:50.914624 kernel: with arguments: Jun 25 18:25:50.914631 kernel: /init Jun 25 18:25:50.914638 kernel: with environment: Jun 25 18:25:50.914647 kernel: HOME=/ Jun 25 18:25:50.914654 kernel: TERM=linux Jun 25 18:25:50.914661 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 18:25:50.914670 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:25:50.914679 systemd[1]: Detected virtualization kvm. Jun 25 18:25:50.914687 systemd[1]: Detected architecture arm64. Jun 25 18:25:50.914694 systemd[1]: Running in initrd. Jun 25 18:25:50.914702 systemd[1]: No hostname configured, using default hostname. Jun 25 18:25:50.914711 systemd[1]: Hostname set to <localhost>. Jun 25 18:25:50.914719 systemd[1]: Initializing machine ID from VM UUID. Jun 25 18:25:50.914726 systemd[1]: Queued start job for default target initrd.target. Jun 25 18:25:50.914734 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:25:50.914742 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:25:50.914750 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 25 18:25:50.914758 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:25:50.914767 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 25 18:25:50.914775 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 25 18:25:50.914785 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 25 18:25:50.914793 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 25 18:25:50.914801 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:25:50.914808 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:25:50.914816 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:25:50.914825 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:25:50.914833 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:25:50.914841 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:25:50.914849 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:25:50.914856 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:25:50.914864 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 18:25:50.914872 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jun 25 18:25:50.914891 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:25:50.914899 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:25:50.914909 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:25:50.914917 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:25:50.914925 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 25 18:25:50.914932 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:25:50.914940 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 18:25:50.914948 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 18:25:50.914956 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:25:50.914966 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:25:50.914974 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:25:50.914983 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 25 18:25:50.914991 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:25:50.914999 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 18:25:50.915007 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:25:50.915017 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:25:50.915042 systemd-journald[238]: Collecting audit messages is disabled. Jun 25 18:25:50.915062 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:25:50.915070 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:25:50.915080 systemd-journald[238]: Journal started Jun 25 18:25:50.915099 systemd-journald[238]: Runtime Journal (/run/log/journal/e4103f9301c24fa39e3ef2e76c3e317f) is 5.9M, max 47.3M, 41.4M free. Jun 25 18:25:50.906372 systemd-modules-load[239]: Inserted module 'overlay' Jun 25 18:25:50.917921 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:25:50.918914 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:25:50.918949 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 18:25:50.922053 systemd-modules-load[239]: Inserted module 'br_netfilter' Jun 25 18:25:50.923459 kernel: Bridge firewalling registered Jun 25 18:25:50.922842 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:25:50.932064 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:25:50.933610 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:25:50.934905 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:25:50.936915 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:25:50.940015 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 18:25:50.941005 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:25:50.946709 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. 
Jun 25 18:25:50.949482 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 25 18:25:50.954966 dracut-cmdline[276]: dracut-dracut-053
Jun 25 18:25:50.959042 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6069a8408a0ca7e7bc40a0bde7fe3ef89df2f98c4bdd2e7e7f9f8f3f8ad207f
Jun 25 18:25:50.984361 systemd-resolved[283]: Positive Trust Anchors:
Jun 25 18:25:50.984379 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 25 18:25:50.984414 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jun 25 18:25:50.989103 systemd-resolved[283]: Defaulting to hostname 'linux'.
Jun 25 18:25:50.992240 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 25 18:25:50.993294 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 25 18:25:51.027913 kernel: SCSI subsystem initialized
Jun 25 18:25:51.033900 kernel: Loading iSCSI transport class v2.0-870.
Jun 25 18:25:51.041938 kernel: iscsi: registered transport (tcp)
Jun 25 18:25:51.054131 kernel: iscsi: registered transport (qla4xxx)
Jun 25 18:25:51.054152 kernel: QLogic iSCSI HBA Driver
Jun 25 18:25:51.095656 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 25 18:25:51.105046 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 25 18:25:51.121563 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 25 18:25:51.124678 kernel: device-mapper: uevent: version 1.0.3
Jun 25 18:25:51.124697 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jun 25 18:25:51.170929 kernel: raid6: neonx8 gen() 15744 MB/s
Jun 25 18:25:51.187904 kernel: raid6: neonx4 gen() 15641 MB/s
Jun 25 18:25:51.204913 kernel: raid6: neonx2 gen() 13264 MB/s
Jun 25 18:25:51.221906 kernel: raid6: neonx1 gen() 10470 MB/s
Jun 25 18:25:51.238912 kernel: raid6: int64x8 gen() 6956 MB/s
Jun 25 18:25:51.255900 kernel: raid6: int64x4 gen() 7324 MB/s
Jun 25 18:25:51.272895 kernel: raid6: int64x2 gen() 6120 MB/s
Jun 25 18:25:51.289906 kernel: raid6: int64x1 gen() 5036 MB/s
Jun 25 18:25:51.289954 kernel: raid6: using algorithm neonx8 gen() 15744 MB/s
Jun 25 18:25:51.306899 kernel: raid6: .... xor() 11895 MB/s, rmw enabled
Jun 25 18:25:51.306915 kernel: raid6: using neon recovery algorithm
Jun 25 18:25:51.312274 kernel: xor: measuring software checksum speed
Jun 25 18:25:51.312304 kernel: 8regs : 19835 MB/sec
Jun 25 18:25:51.313151 kernel: 32regs : 19701 MB/sec
Jun 25 18:25:51.314340 kernel: arm64_neon : 27107 MB/sec
Jun 25 18:25:51.314354 kernel: xor: using function: arm64_neon (27107 MB/sec)
Jun 25 18:25:51.365907 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 25 18:25:51.377918 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 25 18:25:51.391061 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 18:25:51.402631 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Jun 25 18:25:51.405734 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 18:25:51.408120 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 25 18:25:51.422159 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Jun 25 18:25:51.447323 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 25 18:25:51.459023 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 25 18:25:51.498142 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 25 18:25:51.510301 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 25 18:25:51.523161 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 25 18:25:51.525891 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 25 18:25:51.527364 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 25 18:25:51.529497 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 25 18:25:51.537027 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 25 18:25:51.543731 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jun 25 18:25:51.554326 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jun 25 18:25:51.554438 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jun 25 18:25:51.554450 kernel: GPT:9289727 != 19775487
Jun 25 18:25:51.554465 kernel: GPT:Alternate GPT header not at the end of the disk.
Jun 25 18:25:51.554475 kernel: GPT:9289727 != 19775487
Jun 25 18:25:51.554493 kernel: GPT: Use GNU Parted to correct GPT errors.
Jun 25 18:25:51.554507 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 25 18:25:51.545835 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 25 18:25:51.568894 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (526)
Jun 25 18:25:51.568935 kernel: BTRFS: device fsid c80091a6-4bf3-4ad3-8e1c-e6eb918765f9 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (508)
Jun 25 18:25:51.570195 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jun 25 18:25:51.584244 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 25 18:25:51.589649 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jun 25 18:25:51.593444 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jun 25 18:25:51.594558 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 25 18:25:51.608029 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:25:51.608931 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:25:51.608995 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:25:51.611794 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:25:51.612973 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:25:51.618073 disk-uuid[547]: Primary Header is updated. Jun 25 18:25:51.618073 disk-uuid[547]: Secondary Entries is updated. Jun 25 18:25:51.618073 disk-uuid[547]: Secondary Header is updated. Jun 25 18:25:51.613035 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:25:51.615124 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:25:51.617912 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:25:51.624909 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:25:51.630896 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:25:51.632832 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:25:51.634969 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:25:51.642067 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:25:51.666889 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:25:52.633870 disk-uuid[548]: The operation has completed successfully. Jun 25 18:25:52.635095 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:25:52.659996 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 18:25:52.661149 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:25:52.678023 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:25:52.681277 sh[575]: Success Jun 25 18:25:52.705240 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jun 25 18:25:52.738683 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 18:25:52.750208 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 18:25:52.751707 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 18:25:52.763719 kernel: BTRFS info (device dm-0): first mount of filesystem c80091a6-4bf3-4ad3-8e1c-e6eb918765f9 Jun 25 18:25:52.763762 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:25:52.763774 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:25:52.763791 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:25:52.763801 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:25:52.767249 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 18:25:52.768475 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:25:52.781005 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jun 25 18:25:52.782570 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 18:25:52.789596 kernel: BTRFS info (device vda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:25:52.789634 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:25:52.789644 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:25:52.793924 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:25:52.800835 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:25:52.802922 kernel: BTRFS info (device vda6): last unmount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:25:52.809359 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:25:52.817037 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 18:25:52.875043 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:25:52.887048 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:25:52.909337 systemd-networkd[761]: lo: Link UP Jun 25 18:25:52.909349 systemd-networkd[761]: lo: Gained carrier Jun 25 18:25:52.910089 systemd-networkd[761]: Enumeration completed Jun 25 18:25:52.910691 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:25:52.910694 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:25:52.911697 systemd-networkd[761]: eth0: Link UP Jun 25 18:25:52.911700 systemd-networkd[761]: eth0: Gained carrier Jun 25 18:25:52.911706 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:25:52.911909 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:25:52.916394 systemd[1]: Reached target network.target - Network. 
Jun 25 18:25:52.924108 ignition[674]: Ignition 2.19.0 Jun 25 18:25:52.924116 ignition[674]: Stage: fetch-offline Jun 25 18:25:52.924153 ignition[674]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:25:52.924161 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:25:52.924375 ignition[674]: parsed url from cmdline: "" Jun 25 18:25:52.924379 ignition[674]: no config URL provided Jun 25 18:25:52.924385 ignition[674]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:25:52.924393 ignition[674]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:25:52.924416 ignition[674]: op(1): [started] loading QEMU firmware config module Jun 25 18:25:52.924421 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg" Jun 25 18:25:52.936946 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 18:25:52.938516 ignition[674]: op(1): [finished] loading QEMU firmware config module Jun 25 18:25:52.975631 ignition[674]: parsing config with SHA512: 70c924a0f23cca268975d603fb374d090583ef9c0e1454ab4db142437ab44baebf283d4d1de266c56338304c1a24bfab20caba20c39b86adf84499551f961ca3 Jun 25 18:25:52.980755 unknown[674]: fetched base config from "system" Jun 25 18:25:52.980769 unknown[674]: fetched user config from "qemu" Jun 25 18:25:52.981672 ignition[674]: fetch-offline: fetch-offline passed Jun 25 18:25:52.982458 systemd-resolved[283]: Detected conflict on linux IN A 10.0.0.53 Jun 25 18:25:52.981760 ignition[674]: Ignition finished successfully Jun 25 18:25:52.982466 systemd-resolved[283]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Jun 25 18:25:52.983219 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:25:52.985039 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 25 18:25:52.993027 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 18:25:53.005616 ignition[773]: Ignition 2.19.0 Jun 25 18:25:53.005625 ignition[773]: Stage: kargs Jun 25 18:25:53.005771 ignition[773]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:25:53.005780 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:25:53.006637 ignition[773]: kargs: kargs passed Jun 25 18:25:53.011111 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 18:25:53.006680 ignition[773]: Ignition finished successfully Jun 25 18:25:53.022057 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 18:25:53.035053 ignition[782]: Ignition 2.19.0 Jun 25 18:25:53.035062 ignition[782]: Stage: disks Jun 25 18:25:53.035216 ignition[782]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:25:53.035225 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:25:53.036084 ignition[782]: disks: disks passed Jun 25 18:25:53.037523 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:25:53.036128 ignition[782]: Ignition finished successfully Jun 25 18:25:53.039323 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:25:53.040216 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:25:53.041057 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:25:53.041873 systemd[1]: Reached target sysinit.target - System Initialization. 
Jun 25 18:25:53.042767 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:25:53.060067 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 18:25:53.071438 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 18:25:53.076149 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:25:53.088009 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 18:25:53.138935 kernel: EXT4-fs (vda9): mounted filesystem 91548e21-ce72-437e-94b9-d3fed380163a r/w with ordered data mode. Quota mode: none. Jun 25 18:25:53.138783 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 18:25:53.139979 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:25:53.151003 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:25:53.152496 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:25:53.153239 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 18:25:53.153275 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 18:25:53.153296 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:25:53.159956 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 18:25:53.162068 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 18:25:53.167428 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801) Jun 25 18:25:53.172217 kernel: BTRFS info (device vda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:25:53.172253 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:25:53.172264 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:25:53.180421 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:25:53.183004 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:25:53.228512 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:25:53.232925 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:25:53.236916 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:25:53.240861 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:25:53.323649 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 18:25:53.333107 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:25:53.335542 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 18:25:53.341897 kernel: BTRFS info (device vda6): last unmount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:25:53.364048 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jun 25 18:25:53.365997 ignition[913]: INFO : Ignition 2.19.0
Jun 25 18:25:53.365997 ignition[913]: INFO : Stage: mount
Jun 25 18:25:53.365997 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 25 18:25:53.365997 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 25 18:25:53.365997 ignition[913]: INFO : mount: mount passed
Jun 25 18:25:53.365997 ignition[913]: INFO : Ignition finished successfully
Jun 25 18:25:53.366819 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 25 18:25:53.373005 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 25 18:25:53.761812 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 25 18:25:53.772603 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 25 18:25:53.780904 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (928)
Jun 25 18:25:53.783380 kernel: BTRFS info (device vda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704
Jun 25 18:25:53.783416 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jun 25 18:25:53.783426 kernel: BTRFS info (device vda6): using free space tree
Jun 25 18:25:53.786900 kernel: BTRFS info (device vda6): auto enabling async discard
Jun 25 18:25:53.787694 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 25 18:25:53.803769 ignition[945]: INFO : Ignition 2.19.0
Jun 25 18:25:53.803769 ignition[945]: INFO : Stage: files
Jun 25 18:25:53.805290 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 25 18:25:53.805290 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 25 18:25:53.805290 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
Jun 25 18:25:53.808646 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jun 25 18:25:53.808646 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jun 25 18:25:53.811453 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jun 25 18:25:53.811453 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jun 25 18:25:53.811453 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jun 25 18:25:53.811453 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jun 25 18:25:53.811453 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jun 25 18:25:53.809695 unknown[945]: wrote ssh authorized keys file for user: core
Jun 25 18:25:53.849558 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jun 25 18:25:53.893654 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jun 25 18:25:53.893654 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 25 18:25:53.897436 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jun 25 18:25:54.261000 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jun 25 18:25:54.333184 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 25 18:25:54.333184 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jun 25 18:25:54.336776 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jun 25 18:25:54.336776 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 25 18:25:54.336776 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 25 18:25:54.336776 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 25 18:25:54.336776 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 25 18:25:54.336776 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 25 18:25:54.336776 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 25 18:25:54.336776 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 25 18:25:54.336776 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 25 18:25:54.336776 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jun 25 18:25:54.336776 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jun 25 18:25:54.336776 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jun 25 18:25:54.336776 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jun 25 18:25:54.559984 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jun 25 18:25:54.773319 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jun 25 18:25:54.773319 ignition[945]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jun 25 18:25:54.776923 ignition[945]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 25 18:25:54.776923 ignition[945]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 25 18:25:54.776923 ignition[945]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jun 25 18:25:54.776923 ignition[945]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jun 25 18:25:54.776923 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jun 25 18:25:54.776923 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jun 25 18:25:54.776923 ignition[945]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jun 25 18:25:54.776923 ignition[945]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jun 25 18:25:54.805664 ignition[945]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jun 25 18:25:54.809259 ignition[945]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jun 25 18:25:54.811592 ignition[945]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jun 25 18:25:54.811592 ignition[945]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jun 25 18:25:54.811592 ignition[945]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jun 25 18:25:54.811592 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 25 18:25:54.811592 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 25 18:25:54.811592 ignition[945]: INFO : files: files passed
Jun 25 18:25:54.811592 ignition[945]: INFO : Ignition finished successfully
Jun 25 18:25:54.812597 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 25 18:25:54.822071 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 25 18:25:54.823703 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 25 18:25:54.828097 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 25 18:25:54.828177 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 25 18:25:54.834358 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory
Jun 25 18:25:54.837633 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 25 18:25:54.837633 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 25 18:25:54.840685 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 25 18:25:54.841690 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 25 18:25:54.843436 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 25 18:25:54.844941 systemd-networkd[761]: eth0: Gained IPv6LL
Jun 25 18:25:54.850080 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 25 18:25:54.870254 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 25 18:25:54.870359 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 25 18:25:54.872379 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 25 18:25:54.874059 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 25 18:25:54.875687 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 25 18:25:54.876542 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 18:25:54.893945 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:25:54.902049 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 18:25:54.911223 systemd[1]: Stopped target network.target - Network. Jun 25 18:25:54.912289 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:25:54.913989 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:25:54.916072 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 18:25:54.917736 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 18:25:54.917861 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:25:54.920264 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 18:25:54.923206 systemd[1]: Stopped target basic.target - Basic System. Jun 25 18:25:54.924773 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 18:25:54.926416 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:25:54.928268 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 18:25:54.930111 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 18:25:54.932923 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:25:54.934155 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 18:25:54.936007 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 18:25:54.937662 systemd[1]: Stopped target swap.target - Swaps. Jun 25 18:25:54.939072 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 18:25:54.939191 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:25:54.941396 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:25:54.943279 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:25:54.945103 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 25 18:25:54.945994 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:25:54.947177 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 18:25:54.947295 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 18:25:54.949912 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 18:25:54.950024 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:25:54.952064 systemd[1]: Stopped target paths.target - Path Units. Jun 25 18:25:54.953550 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 18:25:54.953664 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:25:54.955471 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 18:25:54.957151 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 18:25:54.958586 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 18:25:54.958671 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:25:54.960293 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 18:25:54.960374 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jun 25 18:25:54.962395 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 18:25:54.962511 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:25:54.964106 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 18:25:54.964197 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 18:25:54.977056 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 18:25:54.977929 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 18:25:54.978062 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:25:54.981323 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 18:25:54.982849 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 18:25:54.984511 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 18:25:54.986744 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 18:25:54.986994 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:25:54.989673 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 18:25:54.994834 ignition[1001]: INFO : Ignition 2.19.0 Jun 25 18:25:54.994834 ignition[1001]: INFO : Stage: umount Jun 25 18:25:54.994834 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:25:54.994834 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:25:54.994834 ignition[1001]: INFO : umount: umount passed Jun 25 18:25:54.994834 ignition[1001]: INFO : Ignition finished successfully Jun 25 18:25:54.989840 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:25:54.992945 systemd-networkd[761]: eth0: DHCPv6 lease lost Jun 25 18:25:54.996011 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 18:25:54.996109 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 18:25:54.999929 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 18:25:55.000712 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 18:25:55.000819 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 18:25:55.004129 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 18:25:55.004237 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 18:25:55.007125 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 18:25:55.007238 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 18:25:55.012601 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 18:25:55.012693 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 18:25:55.015046 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 18:25:55.015107 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:25:55.016359 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 18:25:55.016409 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 18:25:55.018025 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 18:25:55.018070 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 18:25:55.019469 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 18:25:55.019525 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jun 25 18:25:55.020975 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 25 18:25:55.021019 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 18:25:55.022814 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 18:25:55.022858 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 18:25:55.031014 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 18:25:55.032265 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 18:25:55.032323 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:25:55.034150 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:25:55.034192 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:25:55.035832 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 18:25:55.035873 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 18:25:55.037374 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 18:25:55.037411 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:25:55.039184 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:25:55.060986 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 18:25:55.062079 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 18:25:55.064876 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 18:25:55.065031 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:25:55.068358 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 18:25:55.068431 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 18:25:55.069923 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 18:25:55.069958 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:25:55.071599 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 18:25:55.071649 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:25:55.074306 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 18:25:55.074354 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 18:25:55.076765 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:25:55.076815 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:25:55.093583 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 18:25:55.094601 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 18:25:55.094893 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:25:55.096689 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:25:55.096737 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:25:55.102183 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 18:25:55.103251 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 18:25:55.104509 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 18:25:55.108038 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Jun 25 18:25:55.117619 systemd[1]: Switching root. Jun 25 18:25:55.144387 systemd-journald[238]: Journal stopped Jun 25 18:25:55.901440 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Jun 25 18:25:55.901501 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 18:25:55.901515 kernel: SELinux: policy capability open_perms=1 Jun 25 18:25:55.901525 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 18:25:55.901534 kernel: SELinux: policy capability always_check_network=0 Jun 25 18:25:55.901547 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 18:25:55.901557 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 18:25:55.901570 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 18:25:55.901579 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 18:25:55.901589 systemd[1]: Successfully loaded SELinux policy in 35.733ms. Jun 25 18:25:55.901605 kernel: audit: type=1403 audit(1719339955.321:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 18:25:55.901616 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.209ms. Jun 25 18:25:55.901629 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:25:55.901640 systemd[1]: Detected virtualization kvm. Jun 25 18:25:55.901652 systemd[1]: Detected architecture arm64. Jun 25 18:25:55.901662 systemd[1]: Detected first boot. Jun 25 18:25:55.901672 systemd[1]: Initializing machine ID from VM UUID. Jun 25 18:25:55.901683 zram_generator::config[1045]: No configuration found. Jun 25 18:25:55.901694 systemd[1]: Populated /etc with preset unit settings. Jun 25 18:25:55.901705 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 18:25:55.901715 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 18:25:55.901725 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 18:25:55.901738 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 18:25:55.901748 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 18:25:55.901759 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 18:25:55.901769 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 18:25:55.901779 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 18:25:55.901790 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 18:25:55.901801 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 18:25:55.901811 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 18:25:55.901822 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:25:55.901834 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:25:55.901845 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 18:25:55.901856 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
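
The systemd version banner above lists compile-time features as +NAME / -NAME tokens. A minimal sketch that splits such a banner into enabled and disabled sets (the string below is an abbreviated copy of the one in the log):

```python
# Parse the "+FEATURE -FEATURE ..." token list from a systemd version banner,
# as seen in the "systemd 255 running in system mode (...)" line above.
# Only the +/- token convention is assumed; the list here is abbreviated.
banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
          "-GNUTLS +OPENSSL -ACL +BLKID -PWQUALITY +TPM2 -SYSVINIT")

enabled = {tok[1:] for tok in banner.split() if tok.startswith("+")}
disabled = {tok[1:] for tok in banner.split() if tok.startswith("-")}

print("enabled: ", sorted(enabled))
print("disabled:", sorted(disabled))
```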
Jun 25 18:25:55.901867 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 18:25:55.901976 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:25:55.901993 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jun 25 18:25:55.902004 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:25:55.902015 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 18:25:55.902025 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 18:25:55.902040 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 18:25:55.902050 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 18:25:55.902061 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:25:55.902071 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:25:55.902082 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:25:55.902093 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:25:55.902103 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 18:25:55.902113 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 18:25:55.902126 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:25:55.902136 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:25:55.902147 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:25:55.902158 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 18:25:55.902168 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 18:25:55.902178 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 18:25:55.902191 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 18:25:55.902201 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 18:25:55.902211 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 18:25:55.902223 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 18:25:55.902234 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 18:25:55.902245 systemd[1]: Reached target machines.target - Containers. Jun 25 18:25:55.902256 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 18:25:55.902266 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:25:55.902277 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:25:55.902287 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 18:25:55.902298 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:25:55.902309 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:25:55.902320 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:25:55.902330 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jun 25 18:25:55.902340 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:25:55.902351 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 18:25:55.902362 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 18:25:55.902372 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 18:25:55.902382 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 18:25:55.902393 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 18:25:55.902404 kernel: fuse: init (API version 7.39) Jun 25 18:25:55.902418 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:25:55.902431 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:25:55.902441 kernel: ACPI: bus type drm_connector registered Jun 25 18:25:55.902451 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 18:25:55.902461 kernel: loop: module loaded Jun 25 18:25:55.902476 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 18:25:55.902506 systemd-journald[1118]: Collecting audit messages is disabled. Jun 25 18:25:55.902529 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:25:55.902540 systemd-journald[1118]: Journal started Jun 25 18:25:55.902560 systemd-journald[1118]: Runtime Journal (/run/log/journal/e4103f9301c24fa39e3ef2e76c3e317f) is 5.9M, max 47.3M, 41.4M free. Jun 25 18:25:55.716914 systemd[1]: Queued start job for default target multi-user.target. Jun 25 18:25:55.734839 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 25 18:25:55.735190 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 18:25:55.905113 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 18:25:55.905140 systemd[1]: Stopped verity-setup.service. Jun 25 18:25:55.908310 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:25:55.908851 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 18:25:55.909961 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 18:25:55.911087 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 18:25:55.912105 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 18:25:55.913251 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 18:25:55.914438 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 18:25:55.915636 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 18:25:55.918052 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:25:55.919439 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 18:25:55.919596 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 18:25:55.920970 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:25:55.921113 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:25:55.922405 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:25:55.922557 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:25:55.923802 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jun 25 18:25:55.923982 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:25:55.925318 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 18:25:55.925452 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 18:25:55.926714 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:25:55.926843 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:25:55.928312 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:25:55.929576 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 18:25:55.931149 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 18:25:55.943236 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 18:25:55.952005 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 18:25:55.954090 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 18:25:55.955181 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 18:25:55.955222 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:25:55.957205 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jun 25 18:25:55.959390 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 25 18:25:55.961483 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 18:25:55.962639 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:25:55.964027 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 18:25:55.966230 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 18:25:55.967442 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:25:55.970077 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 25 18:25:55.971253 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:25:55.973097 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:25:55.976174 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 18:25:55.979056 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 18:25:55.981417 systemd-journald[1118]: Time spent on flushing to /var/log/journal/e4103f9301c24fa39e3ef2e76c3e317f is 21.084ms for 861 entries. Jun 25 18:25:55.981417 systemd-journald[1118]: System Journal (/var/log/journal/e4103f9301c24fa39e3ef2e76c3e317f) is 8.0M, max 195.6M, 187.6M free. Jun 25 18:25:56.015398 systemd-journald[1118]: Received client request to flush runtime journal. Jun 25 18:25:56.015439 kernel: loop0: detected capacity change from 0 to 59688 Jun 25 18:25:56.015479 kernel: block loop0: the capability attribute has been deprecated. Jun 25 18:25:55.985164 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:25:55.987357 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
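
A quick back-of-the-envelope check on the journald flush statistics reported above (21.084 ms spent flushing 861 entries to the persistent journal):

```python
# Average per-entry flush cost, using the numbers journald logged above.
flush_ms, entries = 21.084, 861

per_entry_us = flush_ms * 1000 / entries
print(f"average flush cost: {per_entry_us:.1f} us per journal entry")  # ~24.5 us
```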
Jun 25 18:25:55.989243 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 18:25:55.992908 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 25 18:25:55.994394 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 25 18:25:56.003373 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 18:25:56.011333 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 18:25:56.015843 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 18:25:56.017799 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 18:25:56.021946 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:25:56.028813 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 18:25:56.033336 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 18:25:56.034412 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 18:25:56.041346 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 18:25:56.052274 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:25:56.053889 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 18:25:56.065203 kernel: loop1: detected capacity change from 0 to 194512 Jun 25 18:25:56.072857 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jun 25 18:25:56.072876 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jun 25 18:25:56.077517 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:25:56.101939 kernel: loop2: detected capacity change from 0 to 113712 Jun 25 18:25:56.133920 kernel: loop3: detected capacity change from 0 to 59688 Jun 25 18:25:56.138905 kernel: loop4: detected capacity change from 0 to 194512 Jun 25 18:25:56.143903 kernel: loop5: detected capacity change from 0 to 113712 Jun 25 18:25:56.146241 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jun 25 18:25:56.146616 (sd-merge)[1180]: Merged extensions into '/usr'. Jun 25 18:25:56.151777 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... Jun 25 18:25:56.151792 systemd[1]: Reloading... Jun 25 18:25:56.193925 zram_generator::config[1204]: No configuration found. Jun 25 18:25:56.269214 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 18:25:56.288651 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:25:56.325445 systemd[1]: Reloading finished in 173 ms. Jun 25 18:25:56.356916 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 18:25:56.358335 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 18:25:56.376083 systemd[1]: Starting ensure-sysext.service... Jun 25 18:25:56.378385 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... 
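
The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr. Below is a simplified, approximate sketch of the compatibility check that gates such a merge, using hypothetical os-release / extension-release contents; real systemd-sysext also considers SYSEXT_LEVEL=, architecture and other fields, so treat this only as an illustration of the idea:

```python
# Approximate model of the ID / VERSION_ID comparison systemd-sysext performs
# before merging an extension image into /usr. Values below are assumptions.
def parse_release(text: str) -> dict:
    """Parse simple KEY=VALUE lines as found in os-release / extension-release."""
    out = {}
    for line in text.splitlines():
        if "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            out[key.strip()] = value.strip().strip('"')
    return out

host = parse_release("ID=flatcar\nVERSION_ID=9999.9.9\n")   # assumed host os-release
ext = parse_release("ID=flatcar\nSYSEXT_LEVEL=1.0\n")       # assumed extension-release

compatible = ext.get("ID") in ("_any", host.get("ID"))
if "VERSION_ID" in ext:
    compatible = compatible and ext["VERSION_ID"] == host.get("VERSION_ID")

print("extension compatible with host:", compatible)
```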
Jun 25 18:25:56.384494 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)... Jun 25 18:25:56.384508 systemd[1]: Reloading... Jun 25 18:25:56.403626 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 18:25:56.403904 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 18:25:56.404551 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 18:25:56.404767 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Jun 25 18:25:56.404809 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Jun 25 18:25:56.407020 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:25:56.407035 systemd-tmpfiles[1239]: Skipping /boot Jun 25 18:25:56.416463 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:25:56.416483 systemd-tmpfiles[1239]: Skipping /boot Jun 25 18:25:56.433177 zram_generator::config[1264]: No configuration found. Jun 25 18:25:56.528321 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:25:56.564874 systemd[1]: Reloading finished in 180 ms. Jun 25 18:25:56.581911 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 18:25:56.593295 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:25:56.599050 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:25:56.602489 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 18:25:56.605375 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 18:25:56.611095 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:25:56.615132 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:25:56.618118 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 18:25:56.626850 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 18:25:56.634392 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:25:56.635656 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:25:56.639350 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:25:56.645608 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:25:56.647846 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:25:56.648820 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:25:56.651526 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:25:56.655848 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 18:25:56.657194 systemd-udevd[1306]: Using default interface naming scheme 'v255'. Jun 25 18:25:56.659791 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jun 25 18:25:56.661044 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:25:56.663145 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 18:25:56.664973 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:25:56.665095 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:25:56.670768 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:25:56.682132 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:25:56.687151 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:25:56.700418 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:25:56.701599 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:25:56.707179 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 18:25:56.710704 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:25:56.712392 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 18:25:56.714963 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 18:25:56.716532 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:25:56.717914 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:25:56.719842 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:25:56.720521 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:25:56.722427 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:25:56.723194 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:25:56.735914 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1351) Jun 25 18:25:56.738913 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1338) Jun 25 18:25:56.740422 systemd[1]: Finished ensure-sysext.service. Jun 25 18:25:56.744445 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jun 25 18:25:56.745445 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:25:56.754077 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:25:56.761062 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:25:56.765504 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:25:56.770092 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:25:56.774134 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:25:56.776582 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:25:56.779067 augenrules[1364]: No rules Jun 25 18:25:56.784068 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jun 25 18:25:56.785064 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 18:25:56.785645 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:25:56.787057 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 18:25:56.788384 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:25:56.788540 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:25:56.789905 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:25:56.790030 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:25:56.791265 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:25:56.791400 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:25:56.795196 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:25:56.795332 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:25:56.818664 systemd-resolved[1305]: Positive Trust Anchors: Jun 25 18:25:56.818684 systemd-resolved[1305]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:25:56.818715 systemd-resolved[1305]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:25:56.818891 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 18:25:56.825287 systemd-resolved[1305]: Defaulting to hostname 'linux'. Jun 25 18:25:56.830080 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 18:25:56.831777 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:25:56.831833 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:25:56.831986 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:25:56.833710 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:25:56.856965 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 18:25:56.860202 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 18:25:56.862071 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 18:25:56.863937 systemd-networkd[1376]: lo: Link UP Jun 25 18:25:56.863950 systemd-networkd[1376]: lo: Gained carrier Jun 25 18:25:56.864653 systemd-networkd[1376]: Enumeration completed Jun 25 18:25:56.870405 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
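
The positive trust anchor logged by systemd-resolved above is the root zone's DNSSEC DS record, in the presentation form "owner IN DS key-tag algorithm digest-type digest". A small parser for that layout, using only the fields visible in the log line:

```python
# Split the DS record string from the resolved "Positive Trust Anchors" line
# above into its components (field order per the DS presentation format).
record = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, _klass, rtype, key_tag, algorithm, digest_type, digest = record.split()

print(f"owner={owner} type={rtype} key_tag={key_tag} "
      f"algorithm={algorithm} digest_type={digest_type}")
print("digest length (hex chars):", len(digest))   # 64 hex chars -> a SHA-256 digest
```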
Jun 25 18:25:56.870969 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:25:56.870980 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:25:56.871559 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:25:56.873108 systemd[1]: Reached target network.target - Network. Jun 25 18:25:56.873726 systemd-networkd[1376]: eth0: Link UP Jun 25 18:25:56.873736 systemd-networkd[1376]: eth0: Gained carrier Jun 25 18:25:56.873750 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:25:56.875353 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 18:25:56.884275 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 18:25:56.887551 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 18:25:56.893067 systemd-networkd[1376]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 18:25:56.893832 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection. Jun 25 18:25:56.895052 systemd-timesyncd[1379]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jun 25 18:25:56.895112 systemd-timesyncd[1379]: Initial clock synchronization to Tue 2024-06-25 18:25:56.565877 UTC. Jun 25 18:25:56.899246 lvm[1398]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:25:56.926918 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:25:56.942317 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 18:25:56.943719 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:25:56.944769 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:25:56.945865 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 18:25:56.947108 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 18:25:56.948383 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 18:25:56.949486 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 18:25:56.950636 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 18:25:56.951784 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 18:25:56.951816 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:25:56.952546 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:25:56.954523 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 18:25:56.956728 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 18:25:56.961702 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 18:25:56.963856 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 18:25:56.965335 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 18:25:56.966401 systemd[1]: Reached target sockets.target - Socket Units. 
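
A quick sanity check of the DHCP lease reported above (address 10.0.0.53/16 with gateway 10.0.0.1, which also answers NTP on port 123), using values copied from the log:

```python
import ipaddress

# Values taken from the systemd-networkd / systemd-timesyncd lines above.
iface = ipaddress.ip_interface("10.0.0.53/16")
gateway = ipaddress.ip_address("10.0.0.1")

print("network:", iface.network)                       # 10.0.0.0/16
print("gateway on-link:", gateway in iface.network)    # True
print("usable hosts in /16:", iface.network.num_addresses - 2)
```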
Jun 25 18:25:56.967335 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:25:56.968053 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:25:56.968084 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:25:56.969059 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 18:25:56.970849 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 18:25:56.974030 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:25:56.975026 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 18:25:56.979722 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 18:25:56.980659 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 18:25:56.984092 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 18:25:56.986000 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 18:25:56.987778 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 18:25:56.992564 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 18:25:56.996553 jq[1408]: false Jun 25 18:25:56.998098 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 18:25:57.001924 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 18:25:57.002353 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 18:25:57.003044 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 18:25:57.006357 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 18:25:57.006793 extend-filesystems[1409]: Found loop3 Jun 25 18:25:57.009390 extend-filesystems[1409]: Found loop4 Jun 25 18:25:57.009390 extend-filesystems[1409]: Found loop5 Jun 25 18:25:57.009390 extend-filesystems[1409]: Found vda Jun 25 18:25:57.009005 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 18:25:57.014275 extend-filesystems[1409]: Found vda1 Jun 25 18:25:57.014275 extend-filesystems[1409]: Found vda2 Jun 25 18:25:57.014275 extend-filesystems[1409]: Found vda3 Jun 25 18:25:57.014275 extend-filesystems[1409]: Found usr Jun 25 18:25:57.014275 extend-filesystems[1409]: Found vda4 Jun 25 18:25:57.014275 extend-filesystems[1409]: Found vda6 Jun 25 18:25:57.014275 extend-filesystems[1409]: Found vda7 Jun 25 18:25:57.014275 extend-filesystems[1409]: Found vda9 Jun 25 18:25:57.014275 extend-filesystems[1409]: Checking size of /dev/vda9 Jun 25 18:25:57.011516 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 18:25:57.011654 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 18:25:57.030708 jq[1421]: true Jun 25 18:25:57.012711 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 18:25:57.012835 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jun 25 18:25:57.024190 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 18:25:57.024762 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 18:25:57.038094 dbus-daemon[1407]: [system] SELinux support is enabled Jun 25 18:25:57.038332 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 18:25:57.040121 (ntainerd)[1436]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 25 18:25:57.044904 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 18:25:57.044960 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 18:25:57.045059 tar[1423]: linux-arm64/helm Jun 25 18:25:57.048050 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 18:25:57.048077 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 18:25:57.048457 extend-filesystems[1409]: Resized partition /dev/vda9 Jun 25 18:25:57.063971 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1331) Jun 25 18:25:57.064363 jq[1435]: true Jun 25 18:25:57.077488 extend-filesystems[1443]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 18:25:57.079980 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jun 25 18:25:57.089520 update_engine[1418]: I0625 18:25:57.089310 1418 main.cc:92] Flatcar Update Engine starting Jun 25 18:25:57.099181 systemd[1]: Started update-engine.service - Update Engine. Jun 25 18:25:57.099412 systemd-logind[1415]: Watching system buttons on /dev/input/event0 (Power Button) Jun 25 18:25:57.100366 systemd-logind[1415]: New seat seat0. Jun 25 18:25:57.103099 update_engine[1418]: I0625 18:25:57.103041 1418 update_check_scheduler.cc:74] Next update check in 4m19s Jun 25 18:25:57.107086 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 18:25:57.109093 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 18:25:57.114542 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jun 25 18:25:57.136785 extend-filesystems[1443]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 25 18:25:57.136785 extend-filesystems[1443]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 25 18:25:57.136785 extend-filesystems[1443]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jun 25 18:25:57.141985 extend-filesystems[1409]: Resized filesystem in /dev/vda9 Jun 25 18:25:57.138213 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 18:25:57.139922 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 18:25:57.149379 bash[1461]: Updated "/home/core/.ssh/authorized_keys" Jun 25 18:25:57.150924 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 18:25:57.155123 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
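
Worked arithmetic for the on-line ext4 resize of /dev/vda9 logged above (553472 to 1864699 blocks of 4 KiB, block size taken from the log):

```python
# Convert the before/after block counts from the EXT4-fs resize messages above
# into GiB to see how much space the root filesystem gained.
BLOCK_SIZE = 4096
old_blocks, new_blocks = 553472, 1864699

old_gib = old_blocks * BLOCK_SIZE / 2**30
new_gib = new_blocks * BLOCK_SIZE / 2**30

print(f"before: {old_gib:.2f} GiB, after: {new_gib:.2f} GiB "
      f"(+{new_gib - old_gib:.2f} GiB)")
```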
Jun 25 18:25:57.199935 locksmithd[1451]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 18:25:57.281573 containerd[1436]: time="2024-06-25T18:25:57.281424353Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Jun 25 18:25:57.305658 containerd[1436]: time="2024-06-25T18:25:57.305455864Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 18:25:57.305658 containerd[1436]: time="2024-06-25T18:25:57.305501122Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:25:57.306897 containerd[1436]: time="2024-06-25T18:25:57.306792484Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:25:57.306897 containerd[1436]: time="2024-06-25T18:25:57.306821557Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:25:57.307045 containerd[1436]: time="2024-06-25T18:25:57.307020656Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:25:57.307069 containerd[1436]: time="2024-06-25T18:25:57.307045663Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 18:25:57.307134 containerd[1436]: time="2024-06-25T18:25:57.307119457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 18:25:57.307197 containerd[1436]: time="2024-06-25T18:25:57.307170622Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:25:57.307197 containerd[1436]: time="2024-06-25T18:25:57.307182704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 18:25:57.307250 containerd[1436]: time="2024-06-25T18:25:57.307235135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:25:57.307444 containerd[1436]: time="2024-06-25T18:25:57.307422919Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 18:25:57.307473 containerd[1436]: time="2024-06-25T18:25:57.307447351Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 18:25:57.307473 containerd[1436]: time="2024-06-25T18:25:57.307456786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:25:57.307568 containerd[1436]: time="2024-06-25T18:25:57.307541857Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:25:57.307568 containerd[1436]: time="2024-06-25T18:25:57.307558196Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 18:25:57.307623 containerd[1436]: time="2024-06-25T18:25:57.307609016Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 18:25:57.307642 containerd[1436]: time="2024-06-25T18:25:57.307622977Z" level=info msg="metadata content store policy set" policy=shared Jun 25 18:25:57.310569 containerd[1436]: time="2024-06-25T18:25:57.310543566Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 18:25:57.310633 containerd[1436]: time="2024-06-25T18:25:57.310574327Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 18:25:57.310633 containerd[1436]: time="2024-06-25T18:25:57.310587866Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 18:25:57.310633 containerd[1436]: time="2024-06-25T18:25:57.310616095Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 18:25:57.310633 containerd[1436]: time="2024-06-25T18:25:57.310630286Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 18:25:57.310821 containerd[1436]: time="2024-06-25T18:25:57.310640987Z" level=info msg="NRI interface is disabled by configuration." Jun 25 18:25:57.310821 containerd[1436]: time="2024-06-25T18:25:57.310652148Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 18:25:57.310821 containerd[1436]: time="2024-06-25T18:25:57.310770204Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 18:25:57.310821 containerd[1436]: time="2024-06-25T18:25:57.310784663Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 18:25:57.310821 containerd[1436]: time="2024-06-25T18:25:57.310795901Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 18:25:57.310821 containerd[1436]: time="2024-06-25T18:25:57.310807254Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 18:25:57.310821 containerd[1436]: time="2024-06-25T18:25:57.310819489Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 18:25:57.311013 containerd[1436]: time="2024-06-25T18:25:57.310834256Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 18:25:57.311013 containerd[1436]: time="2024-06-25T18:25:57.310855044Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 18:25:57.311013 containerd[1436]: time="2024-06-25T18:25:57.310866167Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 18:25:57.311013 containerd[1436]: time="2024-06-25T18:25:57.310900648Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jun 25 18:25:57.311013 containerd[1436]: time="2024-06-25T18:25:57.310913266Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 18:25:57.311013 containerd[1436]: time="2024-06-25T18:25:57.310925041Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 18:25:57.311013 containerd[1436]: time="2024-06-25T18:25:57.310935666Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 18:25:57.311128 containerd[1436]: time="2024-06-25T18:25:57.311024572Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 18:25:57.312730 containerd[1436]: time="2024-06-25T18:25:57.312397744Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 18:25:57.312730 containerd[1436]: time="2024-06-25T18:25:57.312513038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 18:25:57.312730 containerd[1436]: time="2024-06-25T18:25:57.312528034Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 18:25:57.312730 containerd[1436]: time="2024-06-25T18:25:57.312555074Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 18:25:57.312730 containerd[1436]: time="2024-06-25T18:25:57.312669947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 18:25:57.312730 containerd[1436]: time="2024-06-25T18:25:57.312682795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 18:25:57.312730 containerd[1436]: time="2024-06-25T18:25:57.312694494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 18:25:57.312730 containerd[1436]: time="2024-06-25T18:25:57.312705118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 18:25:57.312730 containerd[1436]: time="2024-06-25T18:25:57.312716624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 18:25:57.312730 containerd[1436]: time="2024-06-25T18:25:57.312728207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 18:25:57.312730 containerd[1436]: time="2024-06-25T18:25:57.312738870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 18:25:57.312999 containerd[1436]: time="2024-06-25T18:25:57.312750146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 18:25:57.312999 containerd[1436]: time="2024-06-25T18:25:57.312762036Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 18:25:57.312999 containerd[1436]: time="2024-06-25T18:25:57.312932331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 18:25:57.312999 containerd[1436]: time="2024-06-25T18:25:57.312949398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jun 25 18:25:57.312999 containerd[1436]: time="2024-06-25T18:25:57.312960790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 18:25:57.312999 containerd[1436]: time="2024-06-25T18:25:57.312972641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 18:25:57.312999 containerd[1436]: time="2024-06-25T18:25:57.312983726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 18:25:57.312999 containerd[1436]: time="2024-06-25T18:25:57.312997150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 18:25:57.313138 containerd[1436]: time="2024-06-25T18:25:57.313009155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 18:25:57.313138 containerd[1436]: time="2024-06-25T18:25:57.313019165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 25 18:25:57.313378 containerd[1436]: time="2024-06-25T18:25:57.313307017Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock 
RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 18:25:57.313378 containerd[1436]: time="2024-06-25T18:25:57.313368960Z" level=info msg="Connect containerd service" Jun 25 18:25:57.313570 containerd[1436]: time="2024-06-25T18:25:57.313394926Z" level=info msg="using legacy CRI server" Jun 25 18:25:57.313570 containerd[1436]: time="2024-06-25T18:25:57.313401945Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 18:25:57.313570 containerd[1436]: time="2024-06-25T18:25:57.313526482Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 18:25:57.314179 containerd[1436]: time="2024-06-25T18:25:57.314150167Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:25:57.314236 containerd[1436]: time="2024-06-25T18:25:57.314205167Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 18:25:57.314236 containerd[1436]: time="2024-06-25T18:25:57.314223232Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 25 18:25:57.314236 containerd[1436]: time="2024-06-25T18:25:57.314232897Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 18:25:57.314300 containerd[1436]: time="2024-06-25T18:25:57.314243867Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 25 18:25:57.314513 containerd[1436]: time="2024-06-25T18:25:57.314377303Z" level=info msg="Start subscribing containerd event" Jun 25 18:25:57.314513 containerd[1436]: time="2024-06-25T18:25:57.314482394Z" level=info msg="Start recovering state" Jun 25 18:25:57.314830 containerd[1436]: time="2024-06-25T18:25:57.314815159Z" level=info msg="Start event monitor" Jun 25 18:25:57.314871 containerd[1436]: time="2024-06-25T18:25:57.314833876Z" level=info msg="Start snapshots syncer" Jun 25 18:25:57.314871 containerd[1436]: time="2024-06-25T18:25:57.314849448Z" level=info msg="Start cni network conf syncer for default" Jun 25 18:25:57.314871 containerd[1436]: time="2024-06-25T18:25:57.314856237Z" level=info msg="Start streaming server" Jun 25 18:25:57.315123 containerd[1436]: time="2024-06-25T18:25:57.315099942Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 18:25:57.315164 containerd[1436]: time="2024-06-25T18:25:57.315148346Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 18:25:57.317086 containerd[1436]: time="2024-06-25T18:25:57.316733887Z" level=info msg="containerd successfully booted in 0.037460s" Jun 25 18:25:57.315282 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 18:25:57.364380 sshd_keygen[1434]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 18:25:57.382434 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 18:25:57.396556 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 18:25:57.400899 systemd[1]: issuegen.service: Deactivated successfully. 
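Note: the "failed to load cni during init" error above is expected at this point: containerd's CRI plugin looks for a network config under /etc/cni/net.d (NetworkPluginConfDir in the config dump above) and nothing has been installed there yet. In a kubeadm-style bootstrap this is normally resolved later by a network add-on, but for reference a minimal bridge conflist of the kind the plugin would accept looks roughly like the sketch below — the file name, network name and subnet are illustrative, not taken from this system:

cat <<'EOF' > /etc/cni/net.d/10-containerd-net.conflist
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
# the "cni network conf syncer" started below watches this directory, so no restart is strictly required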
Jun 25 18:25:57.401206 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 18:25:57.404946 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 18:25:57.412279 tar[1423]: linux-arm64/LICENSE Jun 25 18:25:57.412352 tar[1423]: linux-arm64/README.md Jun 25 18:25:57.418942 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 18:25:57.420605 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 18:25:57.424663 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 18:25:57.426763 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jun 25 18:25:57.428110 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 18:25:58.363123 systemd-networkd[1376]: eth0: Gained IPv6LL Jun 25 18:25:58.365827 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 18:25:58.367455 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 18:25:58.382197 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jun 25 18:25:58.386282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:25:58.388803 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 18:25:58.402773 systemd[1]: coreos-metadata.service: Deactivated successfully. Jun 25 18:25:58.402994 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jun 25 18:25:58.405368 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 18:25:58.415561 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 18:25:58.953064 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:25:58.954475 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 18:25:58.956491 (kubelet)[1519]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:25:58.956670 systemd[1]: Startup finished in 529ms (kernel) + 4.605s (initrd) + 3.681s (userspace) = 8.816s. Jun 25 18:25:59.433625 kubelet[1519]: E0625 18:25:59.431733 1519 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:25:59.436390 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:25:59.436537 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:26:03.073410 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 18:26:03.074539 systemd[1]: Started sshd@0-10.0.0.53:22-10.0.0.1:53954.service - OpenSSH per-connection server daemon (10.0.0.1:53954). Jun 25 18:26:03.121670 sshd[1533]: Accepted publickey for core from 10.0.0.1 port 53954 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:26:03.123148 sshd[1533]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:26:03.132743 systemd-logind[1415]: New session 1 of user core. Jun 25 18:26:03.133672 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 18:26:03.143129 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
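Note: the kubelet exit above (status=1/FAILURE) is caused by the missing /var/lib/kubelet/config.yaml; on a kubeadm-managed node that file is written by `kubeadm init`/`kubeadm join`, so this failure and the later automatic restarts are expected until then. Purely for reference, a hand-written minimal KubeletConfiguration would look roughly like this — the values are illustrative, not what kubeadm actually generates:

cat <<'EOF' > /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                   # matches SystemdCgroup:true in the containerd runc options above
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
EOF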
Jun 25 18:26:03.152914 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 18:26:03.155051 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 18:26:03.160951 (systemd)[1537]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:26:03.231750 systemd[1537]: Queued start job for default target default.target. Jun 25 18:26:03.241745 systemd[1537]: Created slice app.slice - User Application Slice. Jun 25 18:26:03.241773 systemd[1537]: Reached target paths.target - Paths. Jun 25 18:26:03.241785 systemd[1537]: Reached target timers.target - Timers. Jun 25 18:26:03.243005 systemd[1537]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 25 18:26:03.252422 systemd[1537]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 25 18:26:03.252471 systemd[1537]: Reached target sockets.target - Sockets. Jun 25 18:26:03.252482 systemd[1537]: Reached target basic.target - Basic System. Jun 25 18:26:03.252515 systemd[1537]: Reached target default.target - Main User Target. Jun 25 18:26:03.252540 systemd[1537]: Startup finished in 86ms. Jun 25 18:26:03.252894 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 18:26:03.254313 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 18:26:03.313678 systemd[1]: Started sshd@1-10.0.0.53:22-10.0.0.1:53956.service - OpenSSH per-connection server daemon (10.0.0.1:53956). Jun 25 18:26:03.348418 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 53956 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:26:03.349547 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:26:03.353756 systemd-logind[1415]: New session 2 of user core. Jun 25 18:26:03.364061 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 18:26:03.417089 sshd[1548]: pam_unix(sshd:session): session closed for user core Jun 25 18:26:03.427449 systemd[1]: sshd@1-10.0.0.53:22-10.0.0.1:53956.service: Deactivated successfully. Jun 25 18:26:03.429092 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 18:26:03.430685 systemd-logind[1415]: Session 2 logged out. Waiting for processes to exit. Jun 25 18:26:03.431935 systemd[1]: Started sshd@2-10.0.0.53:22-10.0.0.1:53964.service - OpenSSH per-connection server daemon (10.0.0.1:53964). Jun 25 18:26:03.432790 systemd-logind[1415]: Removed session 2. Jun 25 18:26:03.467161 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 53964 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:26:03.468287 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:26:03.472336 systemd-logind[1415]: New session 3 of user core. Jun 25 18:26:03.484056 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 18:26:03.531306 sshd[1555]: pam_unix(sshd:session): session closed for user core Jun 25 18:26:03.540158 systemd[1]: sshd@2-10.0.0.53:22-10.0.0.1:53964.service: Deactivated successfully. Jun 25 18:26:03.541622 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 18:26:03.542901 systemd-logind[1415]: Session 3 logged out. Waiting for processes to exit. Jun 25 18:26:03.543991 systemd[1]: Started sshd@3-10.0.0.53:22-10.0.0.1:53976.service - OpenSSH per-connection server daemon (10.0.0.1:53976). Jun 25 18:26:03.544799 systemd-logind[1415]: Removed session 3. 
Jun 25 18:26:03.578419 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 53976 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:26:03.579525 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:26:03.583125 systemd-logind[1415]: New session 4 of user core. Jun 25 18:26:03.594007 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 18:26:03.644153 sshd[1562]: pam_unix(sshd:session): session closed for user core Jun 25 18:26:03.656985 systemd[1]: sshd@3-10.0.0.53:22-10.0.0.1:53976.service: Deactivated successfully. Jun 25 18:26:03.658326 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 18:26:03.659757 systemd-logind[1415]: Session 4 logged out. Waiting for processes to exit. Jun 25 18:26:03.660444 systemd[1]: Started sshd@4-10.0.0.53:22-10.0.0.1:53984.service - OpenSSH per-connection server daemon (10.0.0.1:53984). Jun 25 18:26:03.662393 systemd-logind[1415]: Removed session 4. Jun 25 18:26:03.696569 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 53984 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:26:03.697782 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:26:03.704536 systemd-logind[1415]: New session 5 of user core. Jun 25 18:26:03.716081 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 18:26:03.779226 sudo[1572]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 18:26:03.779446 sudo[1572]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:26:03.795603 sudo[1572]: pam_unix(sudo:session): session closed for user root Jun 25 18:26:03.798103 sshd[1569]: pam_unix(sshd:session): session closed for user core Jun 25 18:26:03.808143 systemd[1]: sshd@4-10.0.0.53:22-10.0.0.1:53984.service: Deactivated successfully. Jun 25 18:26:03.809377 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 18:26:03.811698 systemd-logind[1415]: Session 5 logged out. Waiting for processes to exit. Jun 25 18:26:03.822274 systemd[1]: Started sshd@5-10.0.0.53:22-10.0.0.1:53996.service - OpenSSH per-connection server daemon (10.0.0.1:53996). Jun 25 18:26:03.824933 systemd-logind[1415]: Removed session 5. Jun 25 18:26:03.854265 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 53996 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:26:03.855649 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:26:03.861068 systemd-logind[1415]: New session 6 of user core. Jun 25 18:26:03.872353 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 18:26:03.924627 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 18:26:03.924895 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:26:03.928027 sudo[1582]: pam_unix(sudo:session): session closed for user root Jun 25 18:26:03.932521 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 18:26:03.933043 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:26:03.950209 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 18:26:03.951253 auditctl[1585]: No rules Jun 25 18:26:03.952076 systemd[1]: audit-rules.service: Deactivated successfully. 
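Note: the two sudo invocations above delete the shipped rule files from /etc/audit/rules.d and restart audit-rules.service, which is why both auditctl and augenrules then report "No rules". If rules were wanted again, the usual flow is to drop a file back into that directory and reload; the rule and file name below are made up for illustration only:

echo '-w /etc/kubernetes/ -p wa -k kube-config' > /etc/audit/rules.d/90-kube.rules
augenrules --load    # merge /etc/audit/rules.d/*.rules and load the result into the kernel
auditctl -l          # list the rules that are now active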
Jun 25 18:26:03.952971 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 18:26:03.954542 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:26:03.976253 augenrules[1603]: No rules Jun 25 18:26:03.977342 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:26:03.978427 sudo[1581]: pam_unix(sudo:session): session closed for user root Jun 25 18:26:03.979915 sshd[1577]: pam_unix(sshd:session): session closed for user core Jun 25 18:26:03.989188 systemd[1]: sshd@5-10.0.0.53:22-10.0.0.1:53996.service: Deactivated successfully. Jun 25 18:26:03.990477 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 18:26:03.991763 systemd-logind[1415]: Session 6 logged out. Waiting for processes to exit. Jun 25 18:26:03.992795 systemd[1]: Started sshd@6-10.0.0.53:22-10.0.0.1:53998.service - OpenSSH per-connection server daemon (10.0.0.1:53998). Jun 25 18:26:03.993568 systemd-logind[1415]: Removed session 6. Jun 25 18:26:04.027636 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 53998 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:26:04.028834 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:26:04.032640 systemd-logind[1415]: New session 7 of user core. Jun 25 18:26:04.038046 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 18:26:04.088607 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 18:26:04.089142 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:26:04.203116 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 18:26:04.203243 (dockerd)[1625]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 25 18:26:04.439622 dockerd[1625]: time="2024-06-25T18:26:04.439560649Z" level=info msg="Starting up" Jun 25 18:26:04.524675 dockerd[1625]: time="2024-06-25T18:26:04.524571653Z" level=info msg="Loading containers: start." Jun 25 18:26:04.603904 kernel: Initializing XFRM netlink socket Jun 25 18:26:04.658743 systemd-networkd[1376]: docker0: Link UP Jun 25 18:26:04.666643 dockerd[1625]: time="2024-06-25T18:26:04.666606883Z" level=info msg="Loading containers: done." Jun 25 18:26:04.720778 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2312000683-merged.mount: Deactivated successfully. Jun 25 18:26:04.722556 dockerd[1625]: time="2024-06-25T18:26:04.722511846Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 18:26:04.722715 dockerd[1625]: time="2024-06-25T18:26:04.722688899Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 18:26:04.722825 dockerd[1625]: time="2024-06-25T18:26:04.722796335Z" level=info msg="Daemon has completed initialization" Jun 25 18:26:04.745922 dockerd[1625]: time="2024-06-25T18:26:04.745850696Z" level=info msg="API listen on /run/docker.sock" Jun 25 18:26:04.747294 systemd[1]: Started docker.service - Docker Application Container Engine. 
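Note: the overlay2 warning above ("Not using native diff ... CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") is informational and the daemon still comes up on overlay2. A quick sanity check of the running daemon could look like this (illustrative):

systemctl is-active docker.service                      # expect: active
docker info --format '{{.Driver}} {{.ServerVersion}}'   # expect: overlay2 24.0.9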
Jun 25 18:26:05.331382 containerd[1436]: time="2024-06-25T18:26:05.331335227Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jun 25 18:26:05.943472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1767530841.mount: Deactivated successfully. Jun 25 18:26:07.258050 containerd[1436]: time="2024-06-25T18:26:07.257991316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:07.258468 containerd[1436]: time="2024-06-25T18:26:07.258425645Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=32256349" Jun 25 18:26:07.259333 containerd[1436]: time="2024-06-25T18:26:07.259296795Z" level=info msg="ImageCreate event name:\"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:07.262386 containerd[1436]: time="2024-06-25T18:26:07.262354899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:07.263701 containerd[1436]: time="2024-06-25T18:26:07.263546306Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"32253147\" in 1.932167669s" Jun 25 18:26:07.263701 containerd[1436]: time="2024-06-25T18:26:07.263587772Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\"" Jun 25 18:26:07.282806 containerd[1436]: time="2024-06-25T18:26:07.282771663Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jun 25 18:26:08.997985 containerd[1436]: time="2024-06-25T18:26:08.997667020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:09.000322 containerd[1436]: time="2024-06-25T18:26:09.000278021Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=29228086" Jun 25 18:26:09.001075 containerd[1436]: time="2024-06-25T18:26:09.001016588Z" level=info msg="ImageCreate event name:\"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:09.004223 containerd[1436]: time="2024-06-25T18:26:09.004178929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:09.005606 containerd[1436]: time="2024-06-25T18:26:09.005190994Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"30685210\" in 1.722383064s" Jun 25 
18:26:09.005606 containerd[1436]: time="2024-06-25T18:26:09.005226418Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\"" Jun 25 18:26:09.026017 containerd[1436]: time="2024-06-25T18:26:09.025977971Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jun 25 18:26:09.553352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 18:26:09.568351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:26:09.661298 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:26:09.664701 (kubelet)[1846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:26:09.714615 kubelet[1846]: E0625 18:26:09.714276 1846 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:26:09.718972 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:26:09.719118 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:26:10.052927 containerd[1436]: time="2024-06-25T18:26:10.052806022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:10.054060 containerd[1436]: time="2024-06-25T18:26:10.053990254Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=15578350" Jun 25 18:26:10.054714 containerd[1436]: time="2024-06-25T18:26:10.054673823Z" level=info msg="ImageCreate event name:\"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:10.059004 containerd[1436]: time="2024-06-25T18:26:10.058944993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:10.059812 containerd[1436]: time="2024-06-25T18:26:10.059720648Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"17035492\" in 1.03370137s" Jun 25 18:26:10.059812 containerd[1436]: time="2024-06-25T18:26:10.059760438Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\"" Jun 25 18:26:10.079693 containerd[1436]: time="2024-06-25T18:26:10.079657103Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jun 25 18:26:11.053380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1266326076.mount: Deactivated successfully. 
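Note: the PullImage lines above come from the CRI plugin inside containerd pre-pulling the control-plane images (kube-apiserver, kube-controller-manager, kube-scheduler done; the kube-proxy pull has just started). The roughly equivalent manual operation, useful when debugging pulls, would be something like:

ctr --namespace k8s.io images pull registry.k8s.io/kube-proxy:v1.29.6
ctr --namespace k8s.io images ls -q | grep kube-proxy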
Jun 25 18:26:11.408782 containerd[1436]: time="2024-06-25T18:26:11.408658622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:11.409937 containerd[1436]: time="2024-06-25T18:26:11.409708560Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=25052712" Jun 25 18:26:11.410660 containerd[1436]: time="2024-06-25T18:26:11.410622962Z" level=info msg="ImageCreate event name:\"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:11.412530 containerd[1436]: time="2024-06-25T18:26:11.412499264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:11.413140 containerd[1436]: time="2024-06-25T18:26:11.413107144Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"25051729\" in 1.333410401s" Jun 25 18:26:11.413186 containerd[1436]: time="2024-06-25T18:26:11.413139696Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\"" Jun 25 18:26:11.435333 containerd[1436]: time="2024-06-25T18:26:11.435104238Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jun 25 18:26:11.987567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3217864944.mount: Deactivated successfully. 
Jun 25 18:26:12.724173 containerd[1436]: time="2024-06-25T18:26:12.724111651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:12.724668 containerd[1436]: time="2024-06-25T18:26:12.724623553Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jun 25 18:26:12.725539 containerd[1436]: time="2024-06-25T18:26:12.725501255Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:12.728566 containerd[1436]: time="2024-06-25T18:26:12.728533491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:12.729756 containerd[1436]: time="2024-06-25T18:26:12.729707100Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.294563784s" Jun 25 18:26:12.729756 containerd[1436]: time="2024-06-25T18:26:12.729748589Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jun 25 18:26:12.748430 containerd[1436]: time="2024-06-25T18:26:12.748391296Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 18:26:13.204116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2275455021.mount: Deactivated successfully. 
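Note: the pause:3.9 pull that starts above sits next to the SandboxImage registry.k8s.io/pause:3.8 value in the CRI config dump earlier; kubeadm-era tooling often pins a newer pause image than containerd's built-in default. If the two were to be aligned, the relevant containerd setting is sandbox_image; the path and value below are illustrative (the config file location varies by distribution):

grep -n 'sandbox_image' /etc/containerd/config.toml
# e.g. set, under [plugins."io.containerd.grpc.v1.cri"]:
#   sandbox_image = "registry.k8s.io/pause:3.9"
# then restart containerd to pick up the change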
Jun 25 18:26:13.209356 containerd[1436]: time="2024-06-25T18:26:13.208665275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:13.209356 containerd[1436]: time="2024-06-25T18:26:13.209337832Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jun 25 18:26:13.209954 containerd[1436]: time="2024-06-25T18:26:13.209924250Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:13.212733 containerd[1436]: time="2024-06-25T18:26:13.212696083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:13.213590 containerd[1436]: time="2024-06-25T18:26:13.213382772Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 464.955272ms" Jun 25 18:26:13.213590 containerd[1436]: time="2024-06-25T18:26:13.213415651Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jun 25 18:26:13.231564 containerd[1436]: time="2024-06-25T18:26:13.231528097Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 18:26:13.783248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2282326685.mount: Deactivated successfully. Jun 25 18:26:15.640710 containerd[1436]: time="2024-06-25T18:26:15.640661021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:15.641575 containerd[1436]: time="2024-06-25T18:26:15.641142863Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Jun 25 18:26:15.643902 containerd[1436]: time="2024-06-25T18:26:15.642403797Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:15.645830 containerd[1436]: time="2024-06-25T18:26:15.645798327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:15.647168 containerd[1436]: time="2024-06-25T18:26:15.647142551Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.415578898s" Jun 25 18:26:15.647240 containerd[1436]: time="2024-06-25T18:26:15.647172917Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jun 25 18:26:19.803431 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
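Note: "Scheduled restart job, restart counter is at 2" simply reflects the kubelet unit's Restart= policy retrying while /var/lib/kubelet/config.yaml is still missing. The policy and counter can be read back like this (illustrative):

systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts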
Jun 25 18:26:19.814118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:26:19.868692 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 18:26:19.868755 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 18:26:19.868976 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:26:19.882279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:26:19.899476 systemd[1]: Reloading requested from client PID 2068 ('systemctl') (unit session-7.scope)... Jun 25 18:26:19.899493 systemd[1]: Reloading... Jun 25 18:26:19.963019 zram_generator::config[2108]: No configuration found. Jun 25 18:26:20.050833 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:26:20.103801 systemd[1]: Reloading finished in 203 ms. Jun 25 18:26:20.146580 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 18:26:20.146642 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 18:26:20.147969 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:26:20.150179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:26:20.241671 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:26:20.246204 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:26:20.291321 kubelet[2150]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:26:20.291321 kubelet[2150]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:26:20.291321 kubelet[2150]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
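Note: the docker.socket warning emitted during the reload above ("ListenStream= references a path below legacy directory /var/run/") is harmless, since systemd rewrites the path to /run/docker.sock by itself. Silencing it would just mean overriding the socket path in a drop-in, roughly as follows (the drop-in name is illustrative):

mkdir -p /etc/systemd/system/docker.socket.d
cat <<'EOF' > /etc/systemd/system/docker.socket.d/10-listen-run.conf
[Socket]
ListenStream=
ListenStream=/run/docker.sock
EOF
systemctl daemon-reload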
Jun 25 18:26:20.293310 kubelet[2150]: I0625 18:26:20.293187 2150 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:26:21.591067 kubelet[2150]: I0625 18:26:21.590968 2150 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jun 25 18:26:21.591067 kubelet[2150]: I0625 18:26:21.591004 2150 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:26:21.591464 kubelet[2150]: I0625 18:26:21.591204 2150 server.go:919] "Client rotation is on, will bootstrap in background" Jun 25 18:26:21.623565 kubelet[2150]: I0625 18:26:21.623434 2150 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:26:21.623565 kubelet[2150]: E0625 18:26:21.623480 2150 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.53:6443: connect: connection refused Jun 25 18:26:21.639012 kubelet[2150]: I0625 18:26:21.638987 2150 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 18:26:21.640507 kubelet[2150]: I0625 18:26:21.640130 2150 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:26:21.640507 kubelet[2150]: I0625 18:26:21.640312 2150 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:26:21.640507 kubelet[2150]: I0625 18:26:21.640330 2150 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:26:21.640507 kubelet[2150]: I0625 18:26:21.640338 2150 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:26:21.641582 kubelet[2150]: I0625 18:26:21.641559 2150 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:26:21.643755 kubelet[2150]: I0625 18:26:21.643734 2150 kubelet.go:396] "Attempting to sync node with API server" Jun 25 18:26:21.643864 kubelet[2150]: 
I0625 18:26:21.643852 2150 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:26:21.643977 kubelet[2150]: I0625 18:26:21.643959 2150 kubelet.go:312] "Adding apiserver pod source" Jun 25 18:26:21.644007 kubelet[2150]: I0625 18:26:21.643987 2150 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:26:21.644517 kubelet[2150]: W0625 18:26:21.644452 2150 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jun 25 18:26:21.644517 kubelet[2150]: E0625 18:26:21.644511 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jun 25 18:26:21.645567 kubelet[2150]: W0625 18:26:21.645429 2150 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jun 25 18:26:21.645567 kubelet[2150]: E0625 18:26:21.645494 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jun 25 18:26:21.646038 kubelet[2150]: I0625 18:26:21.645665 2150 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:26:21.646127 kubelet[2150]: I0625 18:26:21.646099 2150 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 18:26:21.646642 kubelet[2150]: W0625 18:26:21.646610 2150 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 18:26:21.648837 kubelet[2150]: I0625 18:26:21.648759 2150 server.go:1256] "Started kubelet" Jun 25 18:26:21.648837 kubelet[2150]: I0625 18:26:21.648833 2150 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:26:21.649562 kubelet[2150]: I0625 18:26:21.649526 2150 server.go:461] "Adding debug handlers to kubelet server" Jun 25 18:26:21.650923 kubelet[2150]: I0625 18:26:21.650395 2150 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 18:26:21.650923 kubelet[2150]: I0625 18:26:21.650716 2150 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:26:21.650923 kubelet[2150]: I0625 18:26:21.650777 2150 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:26:21.652004 kubelet[2150]: E0625 18:26:21.651976 2150 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:26:21.652056 kubelet[2150]: E0625 18:26:21.652030 2150 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:26:21.652056 kubelet[2150]: I0625 18:26:21.652050 2150 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:26:21.652146 kubelet[2150]: I0625 18:26:21.652129 2150 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:26:21.652188 kubelet[2150]: I0625 18:26:21.652177 2150 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:26:21.652481 kubelet[2150]: E0625 18:26:21.652452 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="200ms" Jun 25 18:26:21.652546 kubelet[2150]: W0625 18:26:21.652518 2150 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jun 25 18:26:21.652572 kubelet[2150]: E0625 18:26:21.652547 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jun 25 18:26:21.653171 kubelet[2150]: I0625 18:26:21.653139 2150 factory.go:221] Registration of the systemd container factory successfully Jun 25 18:26:21.653256 kubelet[2150]: I0625 18:26:21.653238 2150 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 18:26:21.654970 kubelet[2150]: I0625 18:26:21.654944 2150 factory.go:221] Registration of the containerd container factory successfully Jun 25 18:26:21.655171 kubelet[2150]: E0625 18:26:21.655152 2150 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.53:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.53:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17dc529a96378ee4 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-06-25 18:26:21.647351524 +0000 UTC m=+1.398068334,LastTimestamp:2024-06-25 18:26:21.647351524 +0000 UTC m=+1.398068334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jun 25 18:26:21.668107 kubelet[2150]: I0625 18:26:21.668082 2150 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:26:21.668107 kubelet[2150]: I0625 18:26:21.668101 2150 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:26:21.668198 kubelet[2150]: I0625 18:26:21.668118 2150 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:26:21.669820 kubelet[2150]: I0625 18:26:21.669785 2150 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jun 25 18:26:21.670798 kubelet[2150]: I0625 18:26:21.670772 2150 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 18:26:21.670798 kubelet[2150]: I0625 18:26:21.670794 2150 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:26:21.670874 kubelet[2150]: I0625 18:26:21.670811 2150 kubelet.go:2329] "Starting kubelet main sync loop" Jun 25 18:26:21.671088 kubelet[2150]: E0625 18:26:21.671060 2150 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:26:21.671865 kubelet[2150]: W0625 18:26:21.671810 2150 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jun 25 18:26:21.671865 kubelet[2150]: E0625 18:26:21.671863 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jun 25 18:26:21.753749 kubelet[2150]: I0625 18:26:21.753691 2150 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:26:21.754180 kubelet[2150]: E0625 18:26:21.754151 2150 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Jun 25 18:26:21.771383 kubelet[2150]: E0625 18:26:21.771345 2150 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 18:26:21.796600 kubelet[2150]: I0625 18:26:21.796564 2150 policy_none.go:49] "None policy: Start" Jun 25 18:26:21.797200 kubelet[2150]: I0625 18:26:21.797174 2150 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 18:26:21.797241 kubelet[2150]: I0625 18:26:21.797217 2150 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:26:21.802844 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 18:26:21.817195 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 18:26:21.820420 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jun 25 18:26:21.831097 kubelet[2150]: I0625 18:26:21.830635 2150 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:26:21.831097 kubelet[2150]: I0625 18:26:21.830914 2150 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:26:21.833351 kubelet[2150]: E0625 18:26:21.833318 2150 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 25 18:26:21.853607 kubelet[2150]: E0625 18:26:21.853491 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="400ms" Jun 25 18:26:21.955983 kubelet[2150]: I0625 18:26:21.955941 2150 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:26:21.956273 kubelet[2150]: E0625 18:26:21.956253 2150 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Jun 25 18:26:21.972473 kubelet[2150]: I0625 18:26:21.972386 2150 topology_manager.go:215] "Topology Admit Handler" podUID="593d08bacb1d5de22dcb8f5224a99e3c" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 18:26:21.974188 kubelet[2150]: I0625 18:26:21.973355 2150 topology_manager.go:215] "Topology Admit Handler" podUID="828e03a89552ce0b801d7db70d5b32da" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 18:26:21.974956 kubelet[2150]: I0625 18:26:21.974801 2150 topology_manager.go:215] "Topology Admit Handler" podUID="42b008e702ec2a5b396aebedf13804b4" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 18:26:21.981792 systemd[1]: Created slice kubepods-burstable-pod593d08bacb1d5de22dcb8f5224a99e3c.slice - libcontainer container kubepods-burstable-pod593d08bacb1d5de22dcb8f5224a99e3c.slice. Jun 25 18:26:22.007932 systemd[1]: Created slice kubepods-burstable-pod828e03a89552ce0b801d7db70d5b32da.slice - libcontainer container kubepods-burstable-pod828e03a89552ce0b801d7db70d5b32da.slice. Jun 25 18:26:22.022556 systemd[1]: Created slice kubepods-burstable-pod42b008e702ec2a5b396aebedf13804b4.slice - libcontainer container kubepods-burstable-pod42b008e702ec2a5b396aebedf13804b4.slice. 
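Note: the three "Topology Admit Handler" entries above are the static pods for kube-scheduler, kube-apiserver and kube-controller-manager being admitted from the kubelet's static pod path ("Adding static pod path" earlier). On a kubeadm control-plane node those manifests normally live under /etc/kubernetes/manifests; a typical listing, not taken from this host, would be:

ls /etc/kubernetes/manifests/
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml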
Jun 25 18:26:22.054477 kubelet[2150]: I0625 18:26:22.054430 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/593d08bacb1d5de22dcb8f5224a99e3c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"593d08bacb1d5de22dcb8f5224a99e3c\") " pod="kube-system/kube-scheduler-localhost" Jun 25 18:26:22.054477 kubelet[2150]: I0625 18:26:22.054470 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/828e03a89552ce0b801d7db70d5b32da-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"828e03a89552ce0b801d7db70d5b32da\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:26:22.054477 kubelet[2150]: I0625 18:26:22.054492 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/828e03a89552ce0b801d7db70d5b32da-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"828e03a89552ce0b801d7db70d5b32da\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:26:22.054649 kubelet[2150]: I0625 18:26:22.054513 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:26:22.054649 kubelet[2150]: I0625 18:26:22.054532 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:26:22.054649 kubelet[2150]: I0625 18:26:22.054549 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/828e03a89552ce0b801d7db70d5b32da-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"828e03a89552ce0b801d7db70d5b32da\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:26:22.054649 kubelet[2150]: I0625 18:26:22.054568 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:26:22.054649 kubelet[2150]: I0625 18:26:22.054587 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:26:22.054753 kubelet[2150]: I0625 18:26:22.054604 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " 
pod="kube-system/kube-controller-manager-localhost" Jun 25 18:26:22.254801 kubelet[2150]: E0625 18:26:22.254689 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="800ms" Jun 25 18:26:22.310205 kubelet[2150]: E0625 18:26:22.309953 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:22.310747 containerd[1436]: time="2024-06-25T18:26:22.310714222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:593d08bacb1d5de22dcb8f5224a99e3c,Namespace:kube-system,Attempt:0,}" Jun 25 18:26:22.321236 kubelet[2150]: E0625 18:26:22.320961 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:22.325279 kubelet[2150]: E0625 18:26:22.325255 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:22.325702 containerd[1436]: time="2024-06-25T18:26:22.325669832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:828e03a89552ce0b801d7db70d5b32da,Namespace:kube-system,Attempt:0,}" Jun 25 18:26:22.329665 containerd[1436]: time="2024-06-25T18:26:22.329636032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:42b008e702ec2a5b396aebedf13804b4,Namespace:kube-system,Attempt:0,}" Jun 25 18:26:22.358429 kubelet[2150]: I0625 18:26:22.358360 2150 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:26:22.358736 kubelet[2150]: E0625 18:26:22.358705 2150 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Jun 25 18:26:22.570226 kubelet[2150]: W0625 18:26:22.570162 2150 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jun 25 18:26:22.570226 kubelet[2150]: E0625 18:26:22.570225 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jun 25 18:26:22.598696 kubelet[2150]: W0625 18:26:22.598634 2150 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jun 25 18:26:22.598696 kubelet[2150]: E0625 18:26:22.598676 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jun 25 18:26:22.781573 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3321421938.mount: Deactivated successfully. Jun 25 18:26:22.788912 containerd[1436]: time="2024-06-25T18:26:22.788860141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:26:22.790678 containerd[1436]: time="2024-06-25T18:26:22.790608345Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:26:22.791210 containerd[1436]: time="2024-06-25T18:26:22.791163932Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:26:22.792838 containerd[1436]: time="2024-06-25T18:26:22.792772819Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:26:22.794383 containerd[1436]: time="2024-06-25T18:26:22.793641389Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:26:22.794383 containerd[1436]: time="2024-06-25T18:26:22.794007774Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jun 25 18:26:22.794824 containerd[1436]: time="2024-06-25T18:26:22.794792746Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:26:22.795685 containerd[1436]: time="2024-06-25T18:26:22.795659239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:26:22.798132 containerd[1436]: time="2024-06-25T18:26:22.798094278Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 472.331422ms" Jun 25 18:26:22.800078 containerd[1436]: time="2024-06-25T18:26:22.800024296Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 470.211962ms" Jun 25 18:26:22.800584 containerd[1436]: time="2024-06-25T18:26:22.800547730Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 489.38165ms" Jun 25 18:26:22.831843 kubelet[2150]: W0625 18:26:22.831716 2150 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get 
"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jun 25 18:26:22.832026 kubelet[2150]: E0625 18:26:22.832004 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jun 25 18:26:22.974637 containerd[1436]: time="2024-06-25T18:26:22.974198078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:26:22.974637 containerd[1436]: time="2024-06-25T18:26:22.974261506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:26:22.974637 containerd[1436]: time="2024-06-25T18:26:22.974275166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:26:22.974637 containerd[1436]: time="2024-06-25T18:26:22.974284951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:26:22.975338 containerd[1436]: time="2024-06-25T18:26:22.975005937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:26:22.975338 containerd[1436]: time="2024-06-25T18:26:22.975095326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:26:22.975338 containerd[1436]: time="2024-06-25T18:26:22.975110025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:26:22.975338 containerd[1436]: time="2024-06-25T18:26:22.975119171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:26:22.975637 containerd[1436]: time="2024-06-25T18:26:22.975563322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:26:22.975715 containerd[1436]: time="2024-06-25T18:26:22.975617403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:26:22.975715 containerd[1436]: time="2024-06-25T18:26:22.975631782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:26:22.975715 containerd[1436]: time="2024-06-25T18:26:22.975641847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:26:22.998117 systemd[1]: Started cri-containerd-afe6ab142be1b9814784914aaa41307893fd20d515f792f8e9ae5e3bd68da3c6.scope - libcontainer container afe6ab142be1b9814784914aaa41307893fd20d515f792f8e9ae5e3bd68da3c6. Jun 25 18:26:23.002537 systemd[1]: Started cri-containerd-69375a77cd8fa928089330e00d0d09f220c7503900d0117cc0160eccfaa2e266.scope - libcontainer container 69375a77cd8fa928089330e00d0d09f220c7503900d0117cc0160eccfaa2e266. 
Jun 25 18:26:23.004333 systemd[1]: Started cri-containerd-f25f61188b013a8c44c93cbf61b0ad6c9f27841c07eb6e4d7f38034cb7ece69d.scope - libcontainer container f25f61188b013a8c44c93cbf61b0ad6c9f27841c07eb6e4d7f38034cb7ece69d. Jun 25 18:26:23.032160 containerd[1436]: time="2024-06-25T18:26:23.032116299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:828e03a89552ce0b801d7db70d5b32da,Namespace:kube-system,Attempt:0,} returns sandbox id \"afe6ab142be1b9814784914aaa41307893fd20d515f792f8e9ae5e3bd68da3c6\"" Jun 25 18:26:23.033241 kubelet[2150]: E0625 18:26:23.033216 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:23.037343 containerd[1436]: time="2024-06-25T18:26:23.037300587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:42b008e702ec2a5b396aebedf13804b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"69375a77cd8fa928089330e00d0d09f220c7503900d0117cc0160eccfaa2e266\"" Jun 25 18:26:23.038075 kubelet[2150]: E0625 18:26:23.037871 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:23.040155 containerd[1436]: time="2024-06-25T18:26:23.040007404Z" level=info msg="CreateContainer within sandbox \"afe6ab142be1b9814784914aaa41307893fd20d515f792f8e9ae5e3bd68da3c6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 18:26:23.040307 containerd[1436]: time="2024-06-25T18:26:23.040044677Z" level=info msg="CreateContainer within sandbox \"69375a77cd8fa928089330e00d0d09f220c7503900d0117cc0160eccfaa2e266\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 18:26:23.048824 containerd[1436]: time="2024-06-25T18:26:23.048781580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:593d08bacb1d5de22dcb8f5224a99e3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f25f61188b013a8c44c93cbf61b0ad6c9f27841c07eb6e4d7f38034cb7ece69d\"" Jun 25 18:26:23.050214 kubelet[2150]: E0625 18:26:23.050189 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:23.052000 containerd[1436]: time="2024-06-25T18:26:23.051966465Z" level=info msg="CreateContainer within sandbox \"f25f61188b013a8c44c93cbf61b0ad6c9f27841c07eb6e4d7f38034cb7ece69d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 18:26:23.055898 kubelet[2150]: E0625 18:26:23.055792 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="1.6s" Jun 25 18:26:23.055962 containerd[1436]: time="2024-06-25T18:26:23.055893481Z" level=info msg="CreateContainer within sandbox \"69375a77cd8fa928089330e00d0d09f220c7503900d0117cc0160eccfaa2e266\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c86b892b8e64521e2dfcf6ee12bf096c77c07b54545d4e8d414f09ef234e0253\"" Jun 25 18:26:23.056702 containerd[1436]: time="2024-06-25T18:26:23.056636052Z" level=info msg="StartContainer for \"c86b892b8e64521e2dfcf6ee12bf096c77c07b54545d4e8d414f09ef234e0253\"" 
Jun 25 18:26:23.062246 containerd[1436]: time="2024-06-25T18:26:23.062203449Z" level=info msg="CreateContainer within sandbox \"afe6ab142be1b9814784914aaa41307893fd20d515f792f8e9ae5e3bd68da3c6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d25490ae5283aa80971f6015149c32ac11e34025384dad437886a9df41f5e621\"" Jun 25 18:26:23.063518 containerd[1436]: time="2024-06-25T18:26:23.063459762Z" level=info msg="StartContainer for \"d25490ae5283aa80971f6015149c32ac11e34025384dad437886a9df41f5e621\"" Jun 25 18:26:23.071826 containerd[1436]: time="2024-06-25T18:26:23.071783633Z" level=info msg="CreateContainer within sandbox \"f25f61188b013a8c44c93cbf61b0ad6c9f27841c07eb6e4d7f38034cb7ece69d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b54ba12a0a527009778d6899de206e660052fbda279eb3f7abbf7af98db26192\"" Jun 25 18:26:23.074294 containerd[1436]: time="2024-06-25T18:26:23.073137741Z" level=info msg="StartContainer for \"b54ba12a0a527009778d6899de206e660052fbda279eb3f7abbf7af98db26192\"" Jun 25 18:26:23.083166 systemd[1]: Started cri-containerd-c86b892b8e64521e2dfcf6ee12bf096c77c07b54545d4e8d414f09ef234e0253.scope - libcontainer container c86b892b8e64521e2dfcf6ee12bf096c77c07b54545d4e8d414f09ef234e0253. Jun 25 18:26:23.087789 systemd[1]: Started cri-containerd-d25490ae5283aa80971f6015149c32ac11e34025384dad437886a9df41f5e621.scope - libcontainer container d25490ae5283aa80971f6015149c32ac11e34025384dad437886a9df41f5e621. Jun 25 18:26:23.097235 kubelet[2150]: W0625 18:26:23.097069 2150 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jun 25 18:26:23.097235 kubelet[2150]: E0625 18:26:23.097241 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jun 25 18:26:23.103095 systemd[1]: Started cri-containerd-b54ba12a0a527009778d6899de206e660052fbda279eb3f7abbf7af98db26192.scope - libcontainer container b54ba12a0a527009778d6899de206e660052fbda279eb3f7abbf7af98db26192. 
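The lease controller's retry interval has grown from the earlier 800ms to 1.6s, which is consistent with a doubling backoff while the API server remains unreachable. A generic sketch of that retry pattern, with invented names and not the kubelet's actual implementation:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff doubles the wait after each failed attempt, up to a cap.
// Purely illustrative of the pattern; the kubelet's lease controller has its
// own policy, and the function and parameter names here are invented.
func retryWithBackoff(op func() error, initial, cap time.Duration, attempts int) error {
	wait := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed (%v), retrying in %s\n", i+1, err, wait)
		time.Sleep(wait)
		wait *= 2
		if wait > cap {
			wait = cap
		}
	}
	return err
}

func main() {
	// 800ms is the first interval reported in the log; 1.6s is the next one.
	_ = retryWithBackoff(func() error {
		return errors.New("connect: connection refused")
	}, 800*time.Millisecond, 7*time.Second, 3)
}
```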
Jun 25 18:26:23.151033 containerd[1436]: time="2024-06-25T18:26:23.144563726Z" level=info msg="StartContainer for \"d25490ae5283aa80971f6015149c32ac11e34025384dad437886a9df41f5e621\" returns successfully" Jun 25 18:26:23.151033 containerd[1436]: time="2024-06-25T18:26:23.145334380Z" level=info msg="StartContainer for \"c86b892b8e64521e2dfcf6ee12bf096c77c07b54545d4e8d414f09ef234e0253\" returns successfully" Jun 25 18:26:23.162161 kubelet[2150]: I0625 18:26:23.160395 2150 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:26:23.162161 kubelet[2150]: E0625 18:26:23.160846 2150 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Jun 25 18:26:23.176641 containerd[1436]: time="2024-06-25T18:26:23.171636692Z" level=info msg="StartContainer for \"b54ba12a0a527009778d6899de206e660052fbda279eb3f7abbf7af98db26192\" returns successfully" Jun 25 18:26:23.679359 kubelet[2150]: E0625 18:26:23.679325 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:23.684625 kubelet[2150]: E0625 18:26:23.682619 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:23.684625 kubelet[2150]: E0625 18:26:23.684403 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:24.661002 kubelet[2150]: E0625 18:26:24.660964 2150 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 25 18:26:24.686609 kubelet[2150]: E0625 18:26:24.686577 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:24.753456 kubelet[2150]: E0625 18:26:24.753411 2150 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jun 25 18:26:24.762608 kubelet[2150]: I0625 18:26:24.762557 2150 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:26:24.771529 kubelet[2150]: I0625 18:26:24.771400 2150 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jun 25 18:26:24.777300 kubelet[2150]: E0625 18:26:24.777268 2150 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:26:24.878140 kubelet[2150]: E0625 18:26:24.878103 2150 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:26:24.978823 kubelet[2150]: E0625 18:26:24.978741 2150 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:26:25.079280 kubelet[2150]: E0625 18:26:25.079238 2150 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:26:25.180030 kubelet[2150]: E0625 18:26:25.179980 2150 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:26:25.280951 
kubelet[2150]: E0625 18:26:25.280836 2150 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:26:25.382204 kubelet[2150]: E0625 18:26:25.381970 2150 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:26:25.483010 kubelet[2150]: E0625 18:26:25.482912 2150 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:26:25.583642 kubelet[2150]: E0625 18:26:25.583478 2150 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:26:25.684471 kubelet[2150]: E0625 18:26:25.684420 2150 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:26:25.785524 kubelet[2150]: E0625 18:26:25.785488 2150 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:26:25.886202 kubelet[2150]: E0625 18:26:25.886068 2150 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:26:25.986702 kubelet[2150]: E0625 18:26:25.986667 2150 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:26:26.079296 kubelet[2150]: E0625 18:26:26.079234 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:26.647696 kubelet[2150]: I0625 18:26:26.647657 2150 apiserver.go:52] "Watching apiserver" Jun 25 18:26:26.652479 kubelet[2150]: I0625 18:26:26.652428 2150 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:26:26.688226 kubelet[2150]: E0625 18:26:26.688207 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:26.879235 systemd[1]: Reloading requested from client PID 2432 ('systemctl') (unit session-7.scope)... Jun 25 18:26:26.879253 systemd[1]: Reloading... Jun 25 18:26:26.939912 zram_generator::config[2475]: No configuration found. Jun 25 18:26:27.015176 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:26:27.079131 systemd[1]: Reloading finished in 199 ms. Jun 25 18:26:27.109332 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:26:27.125687 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:26:27.125936 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:26:27.126056 systemd[1]: kubelet.service: Consumed 1.699s CPU time, 113.2M memory peak, 0B memory swap peak. Jun 25 18:26:27.134154 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:26:27.222130 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
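The repeated kubelet_node_status and lister errors above are the kubelet waiting for its own Node object, "localhost", to exist and become visible; they persist for a moment even after the registration recorded at 18:26:24.771, presumably until the node informer's cache catches up. The following client-go sketch polls for a Node in the same spirit; it assumes client-go is available as a module dependency, the kubeconfig path is a placeholder, and only the node name comes from the log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the node name matches the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		_, err := cs.CoreV1().Nodes().Get(context.TODO(), "localhost", metav1.GetOptions{})
		if err == nil {
			fmt.Println("node \"localhost\" is registered")
			return
		}
		fmt.Println("node not visible yet:", err)
		time.Sleep(2 * time.Second)
	}
}
```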
Jun 25 18:26:27.225956 (kubelet)[2511]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:26:27.274068 kubelet[2511]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:26:27.274068 kubelet[2511]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:26:27.274068 kubelet[2511]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:26:27.274904 kubelet[2511]: I0625 18:26:27.274398 2511 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:26:27.281870 kubelet[2511]: I0625 18:26:27.278137 2511 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jun 25 18:26:27.281870 kubelet[2511]: I0625 18:26:27.278158 2511 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:26:27.281870 kubelet[2511]: I0625 18:26:27.278332 2511 server.go:919] "Client rotation is on, will bootstrap in background" Jun 25 18:26:27.281870 kubelet[2511]: I0625 18:26:27.279837 2511 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 18:26:27.281870 kubelet[2511]: I0625 18:26:27.281687 2511 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:26:27.289826 kubelet[2511]: I0625 18:26:27.289789 2511 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 18:26:27.290010 kubelet[2511]: I0625 18:26:27.289982 2511 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:26:27.290160 kubelet[2511]: I0625 18:26:27.290139 2511 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:26:27.290160 kubelet[2511]: I0625 18:26:27.290160 2511 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:26:27.290262 kubelet[2511]: I0625 18:26:27.290168 2511 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:26:27.290262 kubelet[2511]: I0625 18:26:27.290193 2511 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:26:27.290307 kubelet[2511]: I0625 18:26:27.290274 2511 kubelet.go:396] "Attempting to sync node with API server" Jun 25 18:26:27.290307 kubelet[2511]: I0625 18:26:27.290288 2511 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:26:27.290307 kubelet[2511]: I0625 18:26:27.290302 2511 kubelet.go:312] "Adding apiserver pod source" Jun 25 18:26:27.290364 kubelet[2511]: I0625 18:26:27.290316 2511 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:26:27.291764 kubelet[2511]: I0625 18:26:27.291550 2511 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:26:27.292137 kubelet[2511]: I0625 18:26:27.292109 2511 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 18:26:27.293425 kubelet[2511]: I0625 18:26:27.293388 2511 server.go:1256] "Started kubelet" Jun 25 18:26:27.293701 sudo[2526]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 25 18:26:27.294229 sudo[2526]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jun 25 18:26:27.294627 kubelet[2511]: I0625 18:26:27.294307 2511 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 18:26:27.294691 kubelet[2511]: I0625 18:26:27.294655 2511 server.go:233] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:26:27.294717 kubelet[2511]: I0625 18:26:27.294702 2511 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:26:27.295683 kubelet[2511]: I0625 18:26:27.295652 2511 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:26:27.296241 kubelet[2511]: I0625 18:26:27.296143 2511 server.go:461] "Adding debug handlers to kubelet server" Jun 25 18:26:27.299294 kubelet[2511]: I0625 18:26:27.296594 2511 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:26:27.299294 kubelet[2511]: E0625 18:26:27.297025 2511 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:26:27.299294 kubelet[2511]: I0625 18:26:27.297408 2511 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:26:27.299400 kubelet[2511]: I0625 18:26:27.299305 2511 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:26:27.300876 kubelet[2511]: I0625 18:26:27.299606 2511 factory.go:221] Registration of the systemd container factory successfully Jun 25 18:26:27.300876 kubelet[2511]: I0625 18:26:27.299681 2511 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 18:26:27.303703 kubelet[2511]: E0625 18:26:27.301918 2511 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:26:27.303703 kubelet[2511]: I0625 18:26:27.302439 2511 factory.go:221] Registration of the containerd container factory successfully Jun 25 18:26:27.317034 kubelet[2511]: I0625 18:26:27.316998 2511 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:26:27.319373 kubelet[2511]: I0625 18:26:27.319349 2511 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 18:26:27.319373 kubelet[2511]: I0625 18:26:27.319375 2511 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:26:27.319473 kubelet[2511]: I0625 18:26:27.319397 2511 kubelet.go:2329] "Starting kubelet main sync loop" Jun 25 18:26:27.319473 kubelet[2511]: E0625 18:26:27.319447 2511 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:26:27.353675 kubelet[2511]: I0625 18:26:27.353648 2511 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:26:27.353810 kubelet[2511]: I0625 18:26:27.353716 2511 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:26:27.353810 kubelet[2511]: I0625 18:26:27.353736 2511 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:26:27.353940 kubelet[2511]: I0625 18:26:27.353928 2511 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 18:26:27.353978 kubelet[2511]: I0625 18:26:27.353959 2511 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 18:26:27.353978 kubelet[2511]: I0625 18:26:27.353967 2511 policy_none.go:49] "None policy: Start" Jun 25 18:26:27.354914 kubelet[2511]: I0625 18:26:27.354468 2511 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 18:26:27.354914 kubelet[2511]: I0625 18:26:27.354492 2511 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:26:27.354914 kubelet[2511]: I0625 18:26:27.354634 2511 state_mem.go:75] "Updated machine memory state" Jun 25 18:26:27.360098 kubelet[2511]: I0625 18:26:27.360079 2511 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:26:27.360699 kubelet[2511]: I0625 18:26:27.360623 2511 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:26:27.405441 kubelet[2511]: I0625 18:26:27.405297 2511 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:26:27.413585 kubelet[2511]: I0625 18:26:27.413559 2511 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jun 25 18:26:27.414039 kubelet[2511]: I0625 18:26:27.413740 2511 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jun 25 18:26:27.422846 kubelet[2511]: I0625 18:26:27.420837 2511 topology_manager.go:215] "Topology Admit Handler" podUID="828e03a89552ce0b801d7db70d5b32da" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 18:26:27.422949 kubelet[2511]: I0625 18:26:27.422922 2511 topology_manager.go:215] "Topology Admit Handler" podUID="42b008e702ec2a5b396aebedf13804b4" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 18:26:27.423672 kubelet[2511]: I0625 18:26:27.423005 2511 topology_manager.go:215] "Topology Admit Handler" podUID="593d08bacb1d5de22dcb8f5224a99e3c" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 18:26:27.429782 kubelet[2511]: E0625 18:26:27.429754 2511 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jun 25 18:26:27.500863 kubelet[2511]: I0625 18:26:27.500801 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:26:27.500863 kubelet[2511]: I0625 18:26:27.500859 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/593d08bacb1d5de22dcb8f5224a99e3c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"593d08bacb1d5de22dcb8f5224a99e3c\") " pod="kube-system/kube-scheduler-localhost" Jun 25 18:26:27.500981 kubelet[2511]: I0625 18:26:27.500900 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:26:27.500981 kubelet[2511]: I0625 18:26:27.500921 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/828e03a89552ce0b801d7db70d5b32da-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"828e03a89552ce0b801d7db70d5b32da\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:26:27.500981 kubelet[2511]: I0625 18:26:27.500940 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/828e03a89552ce0b801d7db70d5b32da-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"828e03a89552ce0b801d7db70d5b32da\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:26:27.500981 kubelet[2511]: I0625 18:26:27.500958 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:26:27.500981 kubelet[2511]: I0625 18:26:27.500977 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:26:27.501103 kubelet[2511]: I0625 18:26:27.500996 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:26:27.501103 kubelet[2511]: I0625 18:26:27.501015 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/828e03a89552ce0b801d7db70d5b32da-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"828e03a89552ce0b801d7db70d5b32da\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:26:27.731698 kubelet[2511]: E0625 18:26:27.731644 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:27.732018 kubelet[2511]: E0625 18:26:27.731998 
2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:27.732418 kubelet[2511]: E0625 18:26:27.732400 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:27.771103 sudo[2526]: pam_unix(sudo:session): session closed for user root Jun 25 18:26:28.290829 kubelet[2511]: I0625 18:26:28.290790 2511 apiserver.go:52] "Watching apiserver" Jun 25 18:26:28.299895 kubelet[2511]: I0625 18:26:28.299864 2511 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:26:28.339555 kubelet[2511]: E0625 18:26:28.339531 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:28.340472 kubelet[2511]: E0625 18:26:28.340437 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:28.342142 kubelet[2511]: E0625 18:26:28.342109 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:28.357477 kubelet[2511]: I0625 18:26:28.357452 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.357420308 podStartE2EDuration="1.357420308s" podCreationTimestamp="2024-06-25 18:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:26:28.357212764 +0000 UTC m=+1.126293809" watchObservedRunningTime="2024-06-25 18:26:28.357420308 +0000 UTC m=+1.126501353" Jun 25 18:26:28.365730 kubelet[2511]: I0625 18:26:28.365695 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.365657706 podStartE2EDuration="1.365657706s" podCreationTimestamp="2024-06-25 18:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:26:28.364575855 +0000 UTC m=+1.133656900" watchObservedRunningTime="2024-06-25 18:26:28.365657706 +0000 UTC m=+1.134738751" Jun 25 18:26:28.373728 kubelet[2511]: I0625 18:26:28.373689 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.373662217 podStartE2EDuration="2.373662217s" podCreationTimestamp="2024-06-25 18:26:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:26:28.373491369 +0000 UTC m=+1.142572414" watchObservedRunningTime="2024-06-25 18:26:28.373662217 +0000 UTC m=+1.142743262" Jun 25 18:26:29.340920 kubelet[2511]: E0625 18:26:29.340872 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:30.342240 kubelet[2511]: E0625 18:26:30.342213 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:30.735940 sudo[1614]: pam_unix(sudo:session): session closed for user root Jun 25 18:26:30.737425 sshd[1611]: pam_unix(sshd:session): session closed for user core Jun 25 18:26:30.741275 systemd-logind[1415]: Session 7 logged out. Waiting for processes to exit. Jun 25 18:26:30.741394 systemd[1]: sshd@6-10.0.0.53:22-10.0.0.1:53998.service: Deactivated successfully. Jun 25 18:26:30.743524 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 18:26:30.743862 systemd[1]: session-7.scope: Consumed 7.913s CPU time, 136.0M memory peak, 0B memory swap peak. Jun 25 18:26:30.745111 systemd-logind[1415]: Removed session 7. Jun 25 18:26:32.249273 kubelet[2511]: E0625 18:26:32.249230 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:33.519161 kubelet[2511]: E0625 18:26:33.518136 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:34.349152 kubelet[2511]: E0625 18:26:34.349038 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:39.272103 kubelet[2511]: E0625 18:26:39.271430 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:39.355950 kubelet[2511]: E0625 18:26:39.355872 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:40.718628 kubelet[2511]: I0625 18:26:40.718389 2511 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 18:26:40.719393 containerd[1436]: time="2024-06-25T18:26:40.718917944Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 18:26:40.719631 kubelet[2511]: I0625 18:26:40.719429 2511 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 18:26:40.730253 kubelet[2511]: I0625 18:26:40.730191 2511 topology_manager.go:215] "Topology Admit Handler" podUID="9111da85-824c-4ada-a603-ff5419ecc8b2" podNamespace="kube-system" podName="kube-proxy-5pl2f" Jun 25 18:26:40.740412 systemd[1]: Created slice kubepods-besteffort-pod9111da85_824c_4ada_a603_ff5419ecc8b2.slice - libcontainer container kubepods-besteffort-pod9111da85_824c_4ada_a603_ff5419ecc8b2.slice. Jun 25 18:26:40.749477 kubelet[2511]: I0625 18:26:40.749441 2511 topology_manager.go:215] "Topology Admit Handler" podUID="b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c" podNamespace="kube-system" podName="cilium-vbd8p" Jun 25 18:26:40.759915 systemd[1]: Created slice kubepods-burstable-podb4f3bdf3_2bbf_4818_9bb3_cec0b3d70b3c.slice - libcontainer container kubepods-burstable-podb4f3bdf3_2bbf_4818_9bb3_cec0b3d70b3c.slice. 
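At this point systemd creates cgroup slices for the two newly admitted pods: a kubepods-besteffort-... slice for kube-proxy-5pl2f and a kubepods-burstable-... slice for cilium-vbd8p, reflecting their QoS classes. A simplified, self-contained illustration of how a QoS class follows from a pod's requests and limits (the kubelet's real logic covers more cases, and the function here is invented):

```go
package main

import "fmt"

// qosClass is a simplified illustration: BestEffort when no requests or limits
// are set at all, Guaranteed when requests equal limits for every resource,
// Burstable otherwise. The kubelet's actual qos package handles per-container
// aggregation and the cpu/memory requirements in more detail.
func qosClass(requests, limits map[string]string) string {
	if len(requests) == 0 && len(limits) == 0 {
		return "BestEffort"
	}
	if len(requests) == len(limits) {
		equal := true
		for k, v := range requests {
			if limits[k] != v {
				equal = false
				break
			}
		}
		if equal {
			return "Guaranteed"
		}
	}
	return "Burstable"
}

func main() {
	// kube-proxy-5pl2f above lands in a besteffort slice; cilium-vbd8p in burstable.
	fmt.Println(qosClass(nil, nil))                              // BestEffort
	fmt.Println(qosClass(map[string]string{"cpu": "100m"}, nil)) // Burstable
	fmt.Println(qosClass(map[string]string{"cpu": "1"}, map[string]string{"cpu": "1"})) // Guaranteed
}
```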
Jun 25 18:26:40.885787 kubelet[2511]: I0625 18:26:40.885738 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9111da85-824c-4ada-a603-ff5419ecc8b2-kube-proxy\") pod \"kube-proxy-5pl2f\" (UID: \"9111da85-824c-4ada-a603-ff5419ecc8b2\") " pod="kube-system/kube-proxy-5pl2f" Jun 25 18:26:40.885787 kubelet[2511]: I0625 18:26:40.885801 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-cni-path\") pod \"cilium-vbd8p\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " pod="kube-system/cilium-vbd8p" Jun 25 18:26:40.885966 kubelet[2511]: I0625 18:26:40.885826 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svdlj\" (UniqueName: \"kubernetes.io/projected/9111da85-824c-4ada-a603-ff5419ecc8b2-kube-api-access-svdlj\") pod \"kube-proxy-5pl2f\" (UID: \"9111da85-824c-4ada-a603-ff5419ecc8b2\") " pod="kube-system/kube-proxy-5pl2f" Jun 25 18:26:40.885966 kubelet[2511]: I0625 18:26:40.885903 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krj6p\" (UniqueName: \"kubernetes.io/projected/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-kube-api-access-krj6p\") pod \"cilium-vbd8p\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " pod="kube-system/cilium-vbd8p" Jun 25 18:26:40.885966 kubelet[2511]: I0625 18:26:40.885961 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9111da85-824c-4ada-a603-ff5419ecc8b2-lib-modules\") pod \"kube-proxy-5pl2f\" (UID: \"9111da85-824c-4ada-a603-ff5419ecc8b2\") " pod="kube-system/kube-proxy-5pl2f" Jun 25 18:26:40.886031 kubelet[2511]: I0625 18:26:40.885988 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-etc-cni-netd\") pod \"cilium-vbd8p\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " pod="kube-system/cilium-vbd8p" Jun 25 18:26:40.886058 kubelet[2511]: I0625 18:26:40.886031 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-lib-modules\") pod \"cilium-vbd8p\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " pod="kube-system/cilium-vbd8p" Jun 25 18:26:40.886103 kubelet[2511]: I0625 18:26:40.886090 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-cilium-cgroup\") pod \"cilium-vbd8p\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " pod="kube-system/cilium-vbd8p" Jun 25 18:26:40.886141 kubelet[2511]: I0625 18:26:40.886132 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9111da85-824c-4ada-a603-ff5419ecc8b2-xtables-lock\") pod \"kube-proxy-5pl2f\" (UID: \"9111da85-824c-4ada-a603-ff5419ecc8b2\") " pod="kube-system/kube-proxy-5pl2f" Jun 25 18:26:40.886176 kubelet[2511]: I0625 18:26:40.886167 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-cilium-run\") pod \"cilium-vbd8p\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " pod="kube-system/cilium-vbd8p" Jun 25 18:26:40.886198 kubelet[2511]: I0625 18:26:40.886194 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-clustermesh-secrets\") pod \"cilium-vbd8p\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " pod="kube-system/cilium-vbd8p" Jun 25 18:26:40.886227 kubelet[2511]: I0625 18:26:40.886215 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-host-proc-sys-net\") pod \"cilium-vbd8p\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " pod="kube-system/cilium-vbd8p" Jun 25 18:26:40.886252 kubelet[2511]: I0625 18:26:40.886238 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-host-proc-sys-kernel\") pod \"cilium-vbd8p\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " pod="kube-system/cilium-vbd8p" Jun 25 18:26:40.886273 kubelet[2511]: I0625 18:26:40.886258 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-hubble-tls\") pod \"cilium-vbd8p\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " pod="kube-system/cilium-vbd8p" Jun 25 18:26:40.886298 kubelet[2511]: I0625 18:26:40.886278 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-bpf-maps\") pod \"cilium-vbd8p\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " pod="kube-system/cilium-vbd8p" Jun 25 18:26:40.886329 kubelet[2511]: I0625 18:26:40.886302 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-hostproc\") pod \"cilium-vbd8p\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " pod="kube-system/cilium-vbd8p" Jun 25 18:26:40.886329 kubelet[2511]: I0625 18:26:40.886322 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-xtables-lock\") pod \"cilium-vbd8p\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " pod="kube-system/cilium-vbd8p" Jun 25 18:26:40.886384 kubelet[2511]: I0625 18:26:40.886351 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-cilium-config-path\") pod \"cilium-vbd8p\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " pod="kube-system/cilium-vbd8p" Jun 25 18:26:40.999125 kubelet[2511]: E0625 18:26:40.999026 2511 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 25 18:26:40.999125 kubelet[2511]: E0625 18:26:40.999057 2511 projected.go:200] Error preparing data for projected volume kube-api-access-svdlj for pod 
kube-system/kube-proxy-5pl2f: configmap "kube-root-ca.crt" not found Jun 25 18:26:40.999125 kubelet[2511]: E0625 18:26:40.999112 2511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9111da85-824c-4ada-a603-ff5419ecc8b2-kube-api-access-svdlj podName:9111da85-824c-4ada-a603-ff5419ecc8b2 nodeName:}" failed. No retries permitted until 2024-06-25 18:26:41.499094599 +0000 UTC m=+14.268175644 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-svdlj" (UniqueName: "kubernetes.io/projected/9111da85-824c-4ada-a603-ff5419ecc8b2-kube-api-access-svdlj") pod "kube-proxy-5pl2f" (UID: "9111da85-824c-4ada-a603-ff5419ecc8b2") : configmap "kube-root-ca.crt" not found Jun 25 18:26:41.003246 kubelet[2511]: E0625 18:26:41.003205 2511 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 25 18:26:41.003246 kubelet[2511]: E0625 18:26:41.003231 2511 projected.go:200] Error preparing data for projected volume kube-api-access-krj6p for pod kube-system/cilium-vbd8p: configmap "kube-root-ca.crt" not found Jun 25 18:26:41.003338 kubelet[2511]: E0625 18:26:41.003267 2511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-kube-api-access-krj6p podName:b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c nodeName:}" failed. No retries permitted until 2024-06-25 18:26:41.503255908 +0000 UTC m=+14.272336953 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-krj6p" (UniqueName: "kubernetes.io/projected/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-kube-api-access-krj6p") pod "cilium-vbd8p" (UID: "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c") : configmap "kube-root-ca.crt" not found Jun 25 18:26:41.594894 kubelet[2511]: I0625 18:26:41.594833 2511 topology_manager.go:215] "Topology Admit Handler" podUID="f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1" podNamespace="kube-system" podName="cilium-operator-5cc964979-zw95c" Jun 25 18:26:41.614734 systemd[1]: Created slice kubepods-besteffort-podf14a62a6_fd3c_4f8c_bf44_8ee7fbce0ba1.slice - libcontainer container kubepods-besteffort-podf14a62a6_fd3c_4f8c_bf44_8ee7fbce0ba1.slice. Jun 25 18:26:41.647205 kubelet[2511]: E0625 18:26:41.647151 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:41.647901 containerd[1436]: time="2024-06-25T18:26:41.647600637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5pl2f,Uid:9111da85-824c-4ada-a603-ff5419ecc8b2,Namespace:kube-system,Attempt:0,}" Jun 25 18:26:41.664479 kubelet[2511]: E0625 18:26:41.664437 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:41.665055 containerd[1436]: time="2024-06-25T18:26:41.665010165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vbd8p,Uid:b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c,Namespace:kube-system,Attempt:0,}" Jun 25 18:26:41.667312 containerd[1436]: time="2024-06-25T18:26:41.667169984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:26:41.667312 containerd[1436]: time="2024-06-25T18:26:41.667247555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:26:41.667312 containerd[1436]: time="2024-06-25T18:26:41.667274038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:26:41.667899 containerd[1436]: time="2024-06-25T18:26:41.667303282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:26:41.685097 containerd[1436]: time="2024-06-25T18:26:41.685008932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:26:41.685097 containerd[1436]: time="2024-06-25T18:26:41.685053898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:26:41.685097 containerd[1436]: time="2024-06-25T18:26:41.685067300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:26:41.685097 containerd[1436]: time="2024-06-25T18:26:41.685077181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:26:41.686051 systemd[1]: Started cri-containerd-97fffd33a10c3fa822053a2a8b221f5ee4527c2725046d932bc88405fd49edc9.scope - libcontainer container 97fffd33a10c3fa822053a2a8b221f5ee4527c2725046d932bc88405fd49edc9. Jun 25 18:26:41.694095 kubelet[2511]: I0625 18:26:41.694066 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgn2b\" (UniqueName: \"kubernetes.io/projected/f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1-kube-api-access-lgn2b\") pod \"cilium-operator-5cc964979-zw95c\" (UID: \"f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1\") " pod="kube-system/cilium-operator-5cc964979-zw95c" Jun 25 18:26:41.694217 kubelet[2511]: I0625 18:26:41.694111 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1-cilium-config-path\") pod \"cilium-operator-5cc964979-zw95c\" (UID: \"f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1\") " pod="kube-system/cilium-operator-5cc964979-zw95c" Jun 25 18:26:41.704032 systemd[1]: Started cri-containerd-15db2f39f44cc36dcaaaaa8586e568c22fe5ee224396e27c0a062ac088d11891.scope - libcontainer container 15db2f39f44cc36dcaaaaa8586e568c22fe5ee224396e27c0a062ac088d11891. 
Jun 25 18:26:41.707971 containerd[1436]: time="2024-06-25T18:26:41.707896137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5pl2f,Uid:9111da85-824c-4ada-a603-ff5419ecc8b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"97fffd33a10c3fa822053a2a8b221f5ee4527c2725046d932bc88405fd49edc9\"" Jun 25 18:26:41.708814 kubelet[2511]: E0625 18:26:41.708789 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:41.711299 containerd[1436]: time="2024-06-25T18:26:41.711248961Z" level=info msg="CreateContainer within sandbox \"97fffd33a10c3fa822053a2a8b221f5ee4527c2725046d932bc88405fd49edc9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 18:26:41.728960 containerd[1436]: time="2024-06-25T18:26:41.728923806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vbd8p,Uid:b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"15db2f39f44cc36dcaaaaa8586e568c22fe5ee224396e27c0a062ac088d11891\"" Jun 25 18:26:41.729386 containerd[1436]: time="2024-06-25T18:26:41.729307739Z" level=info msg="CreateContainer within sandbox \"97fffd33a10c3fa822053a2a8b221f5ee4527c2725046d932bc88405fd49edc9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7a9b7e5ea0bf79aa654db3e32accf01bec683489261a6feb6a274320d3e5be7a\"" Jun 25 18:26:41.729918 containerd[1436]: time="2024-06-25T18:26:41.729870777Z" level=info msg="StartContainer for \"7a9b7e5ea0bf79aa654db3e32accf01bec683489261a6feb6a274320d3e5be7a\"" Jun 25 18:26:41.730314 kubelet[2511]: E0625 18:26:41.730252 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:41.732906 containerd[1436]: time="2024-06-25T18:26:41.732422370Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 25 18:26:41.756095 systemd[1]: Started cri-containerd-7a9b7e5ea0bf79aa654db3e32accf01bec683489261a6feb6a274320d3e5be7a.scope - libcontainer container 7a9b7e5ea0bf79aa654db3e32accf01bec683489261a6feb6a274320d3e5be7a. Jun 25 18:26:41.778691 containerd[1436]: time="2024-06-25T18:26:41.778585196Z" level=info msg="StartContainer for \"7a9b7e5ea0bf79aa654db3e32accf01bec683489261a6feb6a274320d3e5be7a\" returns successfully" Jun 25 18:26:41.917773 kubelet[2511]: E0625 18:26:41.917350 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:41.917874 containerd[1436]: time="2024-06-25T18:26:41.917822496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-zw95c,Uid:f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1,Namespace:kube-system,Attempt:0,}" Jun 25 18:26:41.936465 containerd[1436]: time="2024-06-25T18:26:41.936224041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:26:41.936465 containerd[1436]: time="2024-06-25T18:26:41.936273488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:26:41.936465 containerd[1436]: time="2024-06-25T18:26:41.936291810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:26:41.936465 containerd[1436]: time="2024-06-25T18:26:41.936305132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:26:41.955093 systemd[1]: Started cri-containerd-7d00b49bd6aba3d17a9ee3410645cc8d3edced4b915e72aa46a60dd683a6e452.scope - libcontainer container 7d00b49bd6aba3d17a9ee3410645cc8d3edced4b915e72aa46a60dd683a6e452. Jun 25 18:26:41.978591 containerd[1436]: time="2024-06-25T18:26:41.978553776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-zw95c,Uid:f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d00b49bd6aba3d17a9ee3410645cc8d3edced4b915e72aa46a60dd683a6e452\"" Jun 25 18:26:41.979259 kubelet[2511]: E0625 18:26:41.979074 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:42.141225 update_engine[1418]: I0625 18:26:42.140909 1418 update_attempter.cc:509] Updating boot flags... Jun 25 18:26:42.174992 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2878) Jun 25 18:26:42.209133 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2881) Jun 25 18:26:42.257810 kubelet[2511]: E0625 18:26:42.257782 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:42.361918 kubelet[2511]: E0625 18:26:42.361656 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:50.165157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3321923890.mount: Deactivated successfully. 
Jun 25 18:26:51.409725 containerd[1436]: time="2024-06-25T18:26:51.409666896Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:51.410613 containerd[1436]: time="2024-06-25T18:26:51.410574134Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651522" Jun 25 18:26:51.411722 containerd[1436]: time="2024-06-25T18:26:51.411695270Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:51.413568 containerd[1436]: time="2024-06-25T18:26:51.413503225Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.681046932s" Jun 25 18:26:51.413568 containerd[1436]: time="2024-06-25T18:26:51.413547829Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jun 25 18:26:51.423727 containerd[1436]: time="2024-06-25T18:26:51.423623815Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 25 18:26:51.427224 containerd[1436]: time="2024-06-25T18:26:51.427082072Z" level=info msg="CreateContainer within sandbox \"15db2f39f44cc36dcaaaaa8586e568c22fe5ee224396e27c0a062ac088d11891\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 25 18:26:51.442165 containerd[1436]: time="2024-06-25T18:26:51.442062198Z" level=info msg="CreateContainer within sandbox \"15db2f39f44cc36dcaaaaa8586e568c22fe5ee224396e27c0a062ac088d11891\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3216b2bbacb1ddf24333060c89a49604196c0b273d5c204d963faa106ad52258\"" Jun 25 18:26:51.443938 containerd[1436]: time="2024-06-25T18:26:51.442582323Z" level=info msg="StartContainer for \"3216b2bbacb1ddf24333060c89a49604196c0b273d5c204d963faa106ad52258\"" Jun 25 18:26:51.465776 systemd[1]: run-containerd-runc-k8s.io-3216b2bbacb1ddf24333060c89a49604196c0b273d5c204d963faa106ad52258-runc.YzNpET.mount: Deactivated successfully. Jun 25 18:26:51.475041 systemd[1]: Started cri-containerd-3216b2bbacb1ddf24333060c89a49604196c0b273d5c204d963faa106ad52258.scope - libcontainer container 3216b2bbacb1ddf24333060c89a49604196c0b273d5c204d963faa106ad52258. Jun 25 18:26:51.498736 containerd[1436]: time="2024-06-25T18:26:51.498689662Z" level=info msg="StartContainer for \"3216b2bbacb1ddf24333060c89a49604196c0b273d5c204d963faa106ad52258\" returns successfully" Jun 25 18:26:51.550845 systemd[1]: cri-containerd-3216b2bbacb1ddf24333060c89a49604196c0b273d5c204d963faa106ad52258.scope: Deactivated successfully. 
Jun 25 18:26:51.764254 containerd[1436]: time="2024-06-25T18:26:51.763792350Z" level=info msg="shim disconnected" id=3216b2bbacb1ddf24333060c89a49604196c0b273d5c204d963faa106ad52258 namespace=k8s.io Jun 25 18:26:51.764254 containerd[1436]: time="2024-06-25T18:26:51.763851115Z" level=warning msg="cleaning up after shim disconnected" id=3216b2bbacb1ddf24333060c89a49604196c0b273d5c204d963faa106ad52258 namespace=k8s.io Jun 25 18:26:51.764254 containerd[1436]: time="2024-06-25T18:26:51.763864276Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:26:52.384502 kubelet[2511]: E0625 18:26:52.384450 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:52.387283 containerd[1436]: time="2024-06-25T18:26:52.387241542Z" level=info msg="CreateContainer within sandbox \"15db2f39f44cc36dcaaaaa8586e568c22fe5ee224396e27c0a062ac088d11891\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 25 18:26:52.411611 kubelet[2511]: I0625 18:26:52.411534 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5pl2f" podStartSLOduration=12.411496537 podStartE2EDuration="12.411496537s" podCreationTimestamp="2024-06-25 18:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:26:42.370202006 +0000 UTC m=+15.139283091" watchObservedRunningTime="2024-06-25 18:26:52.411496537 +0000 UTC m=+25.180577582" Jun 25 18:26:52.411772 containerd[1436]: time="2024-06-25T18:26:52.411534340Z" level=info msg="CreateContainer within sandbox \"15db2f39f44cc36dcaaaaa8586e568c22fe5ee224396e27c0a062ac088d11891\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9d702066931379d839e73c02f7019d9adad5846744df510d155e6a7ab59a748e\"" Jun 25 18:26:52.412356 containerd[1436]: time="2024-06-25T18:26:52.412328726Z" level=info msg="StartContainer for \"9d702066931379d839e73c02f7019d9adad5846744df510d155e6a7ab59a748e\"" Jun 25 18:26:52.437024 systemd[1]: Started cri-containerd-9d702066931379d839e73c02f7019d9adad5846744df510d155e6a7ab59a748e.scope - libcontainer container 9d702066931379d839e73c02f7019d9adad5846744df510d155e6a7ab59a748e. Jun 25 18:26:52.439323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3216b2bbacb1ddf24333060c89a49604196c0b273d5c204d963faa106ad52258-rootfs.mount: Deactivated successfully. Jun 25 18:26:52.463546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1368401967.mount: Deactivated successfully. Jun 25 18:26:52.467344 containerd[1436]: time="2024-06-25T18:26:52.467299768Z" level=info msg="StartContainer for \"9d702066931379d839e73c02f7019d9adad5846744df510d155e6a7ab59a748e\" returns successfully" Jun 25 18:26:52.497858 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:26:52.498195 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:26:52.498264 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:26:52.507342 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:26:52.507536 systemd[1]: cri-containerd-9d702066931379d839e73c02f7019d9adad5846744df510d155e6a7ab59a748e.scope: Deactivated successfully. 
Jun 25 18:26:52.525524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d702066931379d839e73c02f7019d9adad5846744df510d155e6a7ab59a748e-rootfs.mount: Deactivated successfully. Jun 25 18:26:52.535352 containerd[1436]: time="2024-06-25T18:26:52.535140949Z" level=info msg="shim disconnected" id=9d702066931379d839e73c02f7019d9adad5846744df510d155e6a7ab59a748e namespace=k8s.io Jun 25 18:26:52.535352 containerd[1436]: time="2024-06-25T18:26:52.535199634Z" level=warning msg="cleaning up after shim disconnected" id=9d702066931379d839e73c02f7019d9adad5846744df510d155e6a7ab59a748e namespace=k8s.io Jun 25 18:26:52.535352 containerd[1436]: time="2024-06-25T18:26:52.535208875Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:26:52.542624 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:26:52.761037 containerd[1436]: time="2024-06-25T18:26:52.760932125Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:52.761431 containerd[1436]: time="2024-06-25T18:26:52.761395243Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138354" Jun 25 18:26:52.762361 containerd[1436]: time="2024-06-25T18:26:52.762307278Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:26:52.763801 containerd[1436]: time="2024-06-25T18:26:52.763766758Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.34010202s" Jun 25 18:26:52.763902 containerd[1436]: time="2024-06-25T18:26:52.763803521Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jun 25 18:26:52.767332 containerd[1436]: time="2024-06-25T18:26:52.767287048Z" level=info msg="CreateContainer within sandbox \"7d00b49bd6aba3d17a9ee3410645cc8d3edced4b915e72aa46a60dd683a6e452\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 25 18:26:52.780733 containerd[1436]: time="2024-06-25T18:26:52.780688070Z" level=info msg="CreateContainer within sandbox \"7d00b49bd6aba3d17a9ee3410645cc8d3edced4b915e72aa46a60dd683a6e452\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e\"" Jun 25 18:26:52.782540 containerd[1436]: time="2024-06-25T18:26:52.782502259Z" level=info msg="StartContainer for \"71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e\"" Jun 25 18:26:52.811038 systemd[1]: Started cri-containerd-71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e.scope - libcontainer container 71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e. 
Jun 25 18:26:52.835972 containerd[1436]: time="2024-06-25T18:26:52.835932255Z" level=info msg="StartContainer for \"71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e\" returns successfully" Jun 25 18:26:53.386833 kubelet[2511]: E0625 18:26:53.386796 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:53.389079 kubelet[2511]: E0625 18:26:53.389017 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:53.399002 containerd[1436]: time="2024-06-25T18:26:53.398958586Z" level=info msg="CreateContainer within sandbox \"15db2f39f44cc36dcaaaaa8586e568c22fe5ee224396e27c0a062ac088d11891\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 25 18:26:53.441900 kubelet[2511]: I0625 18:26:53.441848 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-zw95c" podStartSLOduration=1.657492205 podStartE2EDuration="12.441808606s" podCreationTimestamp="2024-06-25 18:26:41 +0000 UTC" firstStartedPulling="2024-06-25 18:26:41.979749622 +0000 UTC m=+14.748830667" lastFinishedPulling="2024-06-25 18:26:52.764066063 +0000 UTC m=+25.533147068" observedRunningTime="2024-06-25 18:26:53.416139141 +0000 UTC m=+26.185220186" watchObservedRunningTime="2024-06-25 18:26:53.441808606 +0000 UTC m=+26.210889651" Jun 25 18:26:53.451434 containerd[1436]: time="2024-06-25T18:26:53.451384321Z" level=info msg="CreateContainer within sandbox \"15db2f39f44cc36dcaaaaa8586e568c22fe5ee224396e27c0a062ac088d11891\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d1d8454302fe31c87fbdc68304b7fa1dda88d01cf229bd43ebec1fbb86e303bc\"" Jun 25 18:26:53.452816 containerd[1436]: time="2024-06-25T18:26:53.451928044Z" level=info msg="StartContainer for \"d1d8454302fe31c87fbdc68304b7fa1dda88d01cf229bd43ebec1fbb86e303bc\"" Jun 25 18:26:53.493077 systemd[1]: Started cri-containerd-d1d8454302fe31c87fbdc68304b7fa1dda88d01cf229bd43ebec1fbb86e303bc.scope - libcontainer container d1d8454302fe31c87fbdc68304b7fa1dda88d01cf229bd43ebec1fbb86e303bc. Jun 25 18:26:53.516268 containerd[1436]: time="2024-06-25T18:26:53.516163271Z" level=info msg="StartContainer for \"d1d8454302fe31c87fbdc68304b7fa1dda88d01cf229bd43ebec1fbb86e303bc\" returns successfully" Jun 25 18:26:53.548813 systemd[1]: cri-containerd-d1d8454302fe31c87fbdc68304b7fa1dda88d01cf229bd43ebec1fbb86e303bc.scope: Deactivated successfully. 
Jun 25 18:26:53.591390 containerd[1436]: time="2024-06-25T18:26:53.591314079Z" level=info msg="shim disconnected" id=d1d8454302fe31c87fbdc68304b7fa1dda88d01cf229bd43ebec1fbb86e303bc namespace=k8s.io Jun 25 18:26:53.591390 containerd[1436]: time="2024-06-25T18:26:53.591370763Z" level=warning msg="cleaning up after shim disconnected" id=d1d8454302fe31c87fbdc68304b7fa1dda88d01cf229bd43ebec1fbb86e303bc namespace=k8s.io Jun 25 18:26:53.591390 containerd[1436]: time="2024-06-25T18:26:53.591383204Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:26:54.392592 kubelet[2511]: E0625 18:26:54.392564 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:54.392989 kubelet[2511]: E0625 18:26:54.392739 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:54.395668 containerd[1436]: time="2024-06-25T18:26:54.395559710Z" level=info msg="CreateContainer within sandbox \"15db2f39f44cc36dcaaaaa8586e568c22fe5ee224396e27c0a062ac088d11891\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 25 18:26:54.409205 containerd[1436]: time="2024-06-25T18:26:54.409160139Z" level=info msg="CreateContainer within sandbox \"15db2f39f44cc36dcaaaaa8586e568c22fe5ee224396e27c0a062ac088d11891\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8d6939ed8635c04c5d8c26b2ac4d4006d990456ea9e9fe9a0fc0302af678bead\"" Jun 25 18:26:54.409856 containerd[1436]: time="2024-06-25T18:26:54.409831030Z" level=info msg="StartContainer for \"8d6939ed8635c04c5d8c26b2ac4d4006d990456ea9e9fe9a0fc0302af678bead\"" Jun 25 18:26:54.436293 systemd[1]: Started cri-containerd-8d6939ed8635c04c5d8c26b2ac4d4006d990456ea9e9fe9a0fc0302af678bead.scope - libcontainer container 8d6939ed8635c04c5d8c26b2ac4d4006d990456ea9e9fe9a0fc0302af678bead. Jun 25 18:26:54.439785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1d8454302fe31c87fbdc68304b7fa1dda88d01cf229bd43ebec1fbb86e303bc-rootfs.mount: Deactivated successfully. Jun 25 18:26:54.455590 systemd[1]: cri-containerd-8d6939ed8635c04c5d8c26b2ac4d4006d990456ea9e9fe9a0fc0302af678bead.scope: Deactivated successfully. Jun 25 18:26:54.458781 containerd[1436]: time="2024-06-25T18:26:54.458409188Z" level=info msg="StartContainer for \"8d6939ed8635c04c5d8c26b2ac4d4006d990456ea9e9fe9a0fc0302af678bead\" returns successfully" Jun 25 18:26:54.464996 containerd[1436]: time="2024-06-25T18:26:54.464785350Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4f3bdf3_2bbf_4818_9bb3_cec0b3d70b3c.slice/cri-containerd-8d6939ed8635c04c5d8c26b2ac4d4006d990456ea9e9fe9a0fc0302af678bead.scope/memory.events\": no such file or directory" Jun 25 18:26:54.474931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d6939ed8635c04c5d8c26b2ac4d4006d990456ea9e9fe9a0fc0302af678bead-rootfs.mount: Deactivated successfully. 
Jun 25 18:26:54.479600 containerd[1436]: time="2024-06-25T18:26:54.479433059Z" level=info msg="shim disconnected" id=8d6939ed8635c04c5d8c26b2ac4d4006d990456ea9e9fe9a0fc0302af678bead namespace=k8s.io Jun 25 18:26:54.479600 containerd[1436]: time="2024-06-25T18:26:54.479500984Z" level=warning msg="cleaning up after shim disconnected" id=8d6939ed8635c04c5d8c26b2ac4d4006d990456ea9e9fe9a0fc0302af678bead namespace=k8s.io Jun 25 18:26:54.479600 containerd[1436]: time="2024-06-25T18:26:54.479509705Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:26:55.396479 kubelet[2511]: E0625 18:26:55.396454 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:55.399265 containerd[1436]: time="2024-06-25T18:26:55.399048697Z" level=info msg="CreateContainer within sandbox \"15db2f39f44cc36dcaaaaa8586e568c22fe5ee224396e27c0a062ac088d11891\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 25 18:26:55.414555 containerd[1436]: time="2024-06-25T18:26:55.414502741Z" level=info msg="CreateContainer within sandbox \"15db2f39f44cc36dcaaaaa8586e568c22fe5ee224396e27c0a062ac088d11891\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308\"" Jun 25 18:26:55.414980 containerd[1436]: time="2024-06-25T18:26:55.414958254Z" level=info msg="StartContainer for \"6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308\"" Jun 25 18:26:55.439033 systemd[1]: Started cri-containerd-6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308.scope - libcontainer container 6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308. Jun 25 18:26:55.467712 containerd[1436]: time="2024-06-25T18:26:55.467646886Z" level=info msg="StartContainer for \"6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308\" returns successfully" Jun 25 18:26:55.544314 kubelet[2511]: I0625 18:26:55.544090 2511 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 25 18:26:55.570447 kubelet[2511]: I0625 18:26:55.570239 2511 topology_manager.go:215] "Topology Admit Handler" podUID="f5390a91-2e0c-4038-bf47-fa0d39769e25" podNamespace="kube-system" podName="coredns-76f75df574-8pg8f" Jun 25 18:26:55.570827 kubelet[2511]: I0625 18:26:55.570681 2511 topology_manager.go:215] "Topology Admit Handler" podUID="793bd773-e377-4c48-8d38-16f02fd29f09" podNamespace="kube-system" podName="coredns-76f75df574-blvsk" Jun 25 18:26:55.581735 systemd[1]: Created slice kubepods-burstable-pod793bd773_e377_4c48_8d38_16f02fd29f09.slice - libcontainer container kubepods-burstable-pod793bd773_e377_4c48_8d38_16f02fd29f09.slice. Jun 25 18:26:55.588712 systemd[1]: Created slice kubepods-burstable-podf5390a91_2e0c_4038_bf47_fa0d39769e25.slice - libcontainer container kubepods-burstable-podf5390a91_2e0c_4038_bf47_fa0d39769e25.slice. 
Jun 25 18:26:55.693481 kubelet[2511]: I0625 18:26:55.692051 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q829\" (UniqueName: \"kubernetes.io/projected/793bd773-e377-4c48-8d38-16f02fd29f09-kube-api-access-5q829\") pod \"coredns-76f75df574-blvsk\" (UID: \"793bd773-e377-4c48-8d38-16f02fd29f09\") " pod="kube-system/coredns-76f75df574-blvsk" Jun 25 18:26:55.693481 kubelet[2511]: I0625 18:26:55.692111 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jm6d\" (UniqueName: \"kubernetes.io/projected/f5390a91-2e0c-4038-bf47-fa0d39769e25-kube-api-access-2jm6d\") pod \"coredns-76f75df574-8pg8f\" (UID: \"f5390a91-2e0c-4038-bf47-fa0d39769e25\") " pod="kube-system/coredns-76f75df574-8pg8f" Jun 25 18:26:55.693481 kubelet[2511]: I0625 18:26:55.692133 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5390a91-2e0c-4038-bf47-fa0d39769e25-config-volume\") pod \"coredns-76f75df574-8pg8f\" (UID: \"f5390a91-2e0c-4038-bf47-fa0d39769e25\") " pod="kube-system/coredns-76f75df574-8pg8f" Jun 25 18:26:55.693481 kubelet[2511]: I0625 18:26:55.692158 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/793bd773-e377-4c48-8d38-16f02fd29f09-config-volume\") pod \"coredns-76f75df574-blvsk\" (UID: \"793bd773-e377-4c48-8d38-16f02fd29f09\") " pod="kube-system/coredns-76f75df574-blvsk" Jun 25 18:26:55.886826 kubelet[2511]: E0625 18:26:55.886634 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:55.887669 containerd[1436]: time="2024-06-25T18:26:55.887623748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-blvsk,Uid:793bd773-e377-4c48-8d38-16f02fd29f09,Namespace:kube-system,Attempt:0,}" Jun 25 18:26:55.893498 kubelet[2511]: E0625 18:26:55.893423 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:55.893801 containerd[1436]: time="2024-06-25T18:26:55.893769555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8pg8f,Uid:f5390a91-2e0c-4038-bf47-fa0d39769e25,Namespace:kube-system,Attempt:0,}" Jun 25 18:26:56.403371 kubelet[2511]: E0625 18:26:56.403344 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:56.419380 kubelet[2511]: I0625 18:26:56.419338 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vbd8p" podStartSLOduration=6.736332277 podStartE2EDuration="16.419295085s" podCreationTimestamp="2024-06-25 18:26:40 +0000 UTC" firstStartedPulling="2024-06-25 18:26:41.731052261 +0000 UTC m=+14.500133306" lastFinishedPulling="2024-06-25 18:26:51.414015069 +0000 UTC m=+24.183096114" observedRunningTime="2024-06-25 18:26:56.418724725 +0000 UTC m=+29.187805770" watchObservedRunningTime="2024-06-25 18:26:56.419295085 +0000 UTC m=+29.188376130" Jun 25 18:26:57.405977 kubelet[2511]: E0625 18:26:57.405940 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:57.653722 systemd-networkd[1376]: cilium_host: Link UP Jun 25 18:26:57.653832 systemd-networkd[1376]: cilium_net: Link UP Jun 25 18:26:57.653835 systemd-networkd[1376]: cilium_net: Gained carrier Jun 25 18:26:57.653996 systemd-networkd[1376]: cilium_host: Gained carrier Jun 25 18:26:57.654139 systemd-networkd[1376]: cilium_host: Gained IPv6LL Jun 25 18:26:57.744822 systemd-networkd[1376]: cilium_vxlan: Link UP Jun 25 18:26:57.744835 systemd-networkd[1376]: cilium_vxlan: Gained carrier Jun 25 18:26:58.068055 kernel: NET: Registered PF_ALG protocol family Jun 25 18:26:58.268745 systemd-networkd[1376]: cilium_net: Gained IPv6LL Jun 25 18:26:58.407217 kubelet[2511]: E0625 18:26:58.407115 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:26:58.486674 systemd[1]: Started sshd@7-10.0.0.53:22-10.0.0.1:57716.service - OpenSSH per-connection server daemon (10.0.0.1:57716). Jun 25 18:26:58.526359 sshd[3595]: Accepted publickey for core from 10.0.0.1 port 57716 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:26:58.527936 sshd[3595]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:26:58.532153 systemd-logind[1415]: New session 8 of user core. Jun 25 18:26:58.537026 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 18:26:58.643982 systemd-networkd[1376]: lxc_health: Link UP Jun 25 18:26:58.651457 systemd-networkd[1376]: lxc_health: Gained carrier Jun 25 18:26:58.682079 sshd[3595]: pam_unix(sshd:session): session closed for user core Jun 25 18:26:58.685651 systemd[1]: sshd@7-10.0.0.53:22-10.0.0.1:57716.service: Deactivated successfully. Jun 25 18:26:58.687295 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 18:26:58.687831 systemd-logind[1415]: Session 8 logged out. Waiting for processes to exit. Jun 25 18:26:58.688756 systemd-logind[1415]: Removed session 8. Jun 25 18:26:58.993209 systemd-networkd[1376]: lxc1c20f73c5b96: Link UP Jun 25 18:26:59.000934 kernel: eth0: renamed from tmp8bc38 Jun 25 18:26:59.006003 systemd-networkd[1376]: lxcf14897d48ab5: Link UP Jun 25 18:26:59.013907 kernel: eth0: renamed from tmpa65d1 Jun 25 18:26:59.022008 systemd-networkd[1376]: lxc1c20f73c5b96: Gained carrier Jun 25 18:26:59.022529 systemd-networkd[1376]: lxcf14897d48ab5: Gained carrier Jun 25 18:26:59.175337 systemd-networkd[1376]: cilium_vxlan: Gained IPv6LL Jun 25 18:26:59.669595 kubelet[2511]: E0625 18:26:59.669566 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:00.188110 systemd-networkd[1376]: lxc1c20f73c5b96: Gained IPv6LL Jun 25 18:27:00.410441 kubelet[2511]: E0625 18:27:00.410401 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:00.507059 systemd-networkd[1376]: lxc_health: Gained IPv6LL Jun 25 18:27:00.763176 systemd-networkd[1376]: lxcf14897d48ab5: Gained IPv6LL Jun 25 18:27:02.619488 containerd[1436]: time="2024-06-25T18:27:02.619285402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:27:02.619488 containerd[1436]: time="2024-06-25T18:27:02.619346725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:27:02.619488 containerd[1436]: time="2024-06-25T18:27:02.619361406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:27:02.619488 containerd[1436]: time="2024-06-25T18:27:02.619381567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:27:02.624011 containerd[1436]: time="2024-06-25T18:27:02.620695722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:27:02.624011 containerd[1436]: time="2024-06-25T18:27:02.620746564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:27:02.624011 containerd[1436]: time="2024-06-25T18:27:02.620765366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:27:02.624011 containerd[1436]: time="2024-06-25T18:27:02.620778446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:27:02.651073 systemd[1]: Started cri-containerd-8bc3886d3036da2a856539af57d969a934aea7491095dfb01d7173b0b32e35f9.scope - libcontainer container 8bc3886d3036da2a856539af57d969a934aea7491095dfb01d7173b0b32e35f9. Jun 25 18:27:02.652345 systemd[1]: Started cri-containerd-a65d1894dd91dc07750c2c081312ca0d5193957e1d2a953d4a26d710c4a46104.scope - libcontainer container a65d1894dd91dc07750c2c081312ca0d5193957e1d2a953d4a26d710c4a46104. 
Jun 25 18:27:02.663377 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:27:02.668995 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:27:02.686689 containerd[1436]: time="2024-06-25T18:27:02.686650887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8pg8f,Uid:f5390a91-2e0c-4038-bf47-fa0d39769e25,Namespace:kube-system,Attempt:0,} returns sandbox id \"a65d1894dd91dc07750c2c081312ca0d5193957e1d2a953d4a26d710c4a46104\"" Jun 25 18:27:02.688307 containerd[1436]: time="2024-06-25T18:27:02.688280979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-blvsk,Uid:793bd773-e377-4c48-8d38-16f02fd29f09,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bc3886d3036da2a856539af57d969a934aea7491095dfb01d7173b0b32e35f9\"" Jun 25 18:27:02.688468 kubelet[2511]: E0625 18:27:02.688446 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:02.689853 kubelet[2511]: E0625 18:27:02.689784 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:02.690921 containerd[1436]: time="2024-06-25T18:27:02.690867365Z" level=info msg="CreateContainer within sandbox \"a65d1894dd91dc07750c2c081312ca0d5193957e1d2a953d4a26d710c4a46104\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:27:02.692039 containerd[1436]: time="2024-06-25T18:27:02.692008790Z" level=info msg="CreateContainer within sandbox \"8bc3886d3036da2a856539af57d969a934aea7491095dfb01d7173b0b32e35f9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:27:02.706707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2098248763.mount: Deactivated successfully. Jun 25 18:27:02.709157 containerd[1436]: time="2024-06-25T18:27:02.709123796Z" level=info msg="CreateContainer within sandbox \"a65d1894dd91dc07750c2c081312ca0d5193957e1d2a953d4a26d710c4a46104\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0776b3f92f7ae6328fd20c844ee3b86850d5850dd6eddba55a0b9fb76a5a787e\"" Jun 25 18:27:02.709860 containerd[1436]: time="2024-06-25T18:27:02.709821156Z" level=info msg="StartContainer for \"0776b3f92f7ae6328fd20c844ee3b86850d5850dd6eddba55a0b9fb76a5a787e\"" Jun 25 18:27:02.715131 containerd[1436]: time="2024-06-25T18:27:02.715086173Z" level=info msg="CreateContainer within sandbox \"8bc3886d3036da2a856539af57d969a934aea7491095dfb01d7173b0b32e35f9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cbcad0d09a1202e0d7bb3e278a348e69bd9258c36c52e4f7aa3e5314982d79b7\"" Jun 25 18:27:02.715792 containerd[1436]: time="2024-06-25T18:27:02.715767291Z" level=info msg="StartContainer for \"cbcad0d09a1202e0d7bb3e278a348e69bd9258c36c52e4f7aa3e5314982d79b7\"" Jun 25 18:27:02.739051 systemd[1]: Started cri-containerd-cbcad0d09a1202e0d7bb3e278a348e69bd9258c36c52e4f7aa3e5314982d79b7.scope - libcontainer container cbcad0d09a1202e0d7bb3e278a348e69bd9258c36c52e4f7aa3e5314982d79b7. Jun 25 18:27:02.741541 systemd[1]: Started cri-containerd-0776b3f92f7ae6328fd20c844ee3b86850d5850dd6eddba55a0b9fb76a5a787e.scope - libcontainer container 0776b3f92f7ae6328fd20c844ee3b86850d5850dd6eddba55a0b9fb76a5a787e. 
Jun 25 18:27:02.773377 containerd[1436]: time="2024-06-25T18:27:02.773326023Z" level=info msg="StartContainer for \"0776b3f92f7ae6328fd20c844ee3b86850d5850dd6eddba55a0b9fb76a5a787e\" returns successfully" Jun 25 18:27:02.773500 containerd[1436]: time="2024-06-25T18:27:02.773403387Z" level=info msg="StartContainer for \"cbcad0d09a1202e0d7bb3e278a348e69bd9258c36c52e4f7aa3e5314982d79b7\" returns successfully" Jun 25 18:27:03.422532 kubelet[2511]: E0625 18:27:03.422466 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:03.425312 kubelet[2511]: E0625 18:27:03.425290 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:03.459368 kubelet[2511]: I0625 18:27:03.459311 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-8pg8f" podStartSLOduration=22.459277118 podStartE2EDuration="22.459277118s" podCreationTimestamp="2024-06-25 18:26:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:27:03.458999423 +0000 UTC m=+36.228080468" watchObservedRunningTime="2024-06-25 18:27:03.459277118 +0000 UTC m=+36.228358163" Jun 25 18:27:03.499805 kubelet[2511]: I0625 18:27:03.499753 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-blvsk" podStartSLOduration=22.499714091 podStartE2EDuration="22.499714091s" podCreationTimestamp="2024-06-25 18:26:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:27:03.498641312 +0000 UTC m=+36.267722357" watchObservedRunningTime="2024-06-25 18:27:03.499714091 +0000 UTC m=+36.268795096" Jun 25 18:27:03.691706 systemd[1]: Started sshd@8-10.0.0.53:22-10.0.0.1:56872.service - OpenSSH per-connection server daemon (10.0.0.1:56872). Jun 25 18:27:03.731734 sshd[3930]: Accepted publickey for core from 10.0.0.1 port 56872 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:27:03.733498 sshd[3930]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:27:03.737374 systemd-logind[1415]: New session 9 of user core. Jun 25 18:27:03.752111 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 18:27:03.875422 sshd[3930]: pam_unix(sshd:session): session closed for user core Jun 25 18:27:03.878507 systemd[1]: sshd@8-10.0.0.53:22-10.0.0.1:56872.service: Deactivated successfully. Jun 25 18:27:03.880250 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 18:27:03.881184 systemd-logind[1415]: Session 9 logged out. Waiting for processes to exit. Jun 25 18:27:03.881982 systemd-logind[1415]: Removed session 9. 
Jun 25 18:27:04.427784 kubelet[2511]: E0625 18:27:04.427701 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:04.428261 kubelet[2511]: E0625 18:27:04.427940 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:05.429127 kubelet[2511]: E0625 18:27:05.429078 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:05.429483 kubelet[2511]: E0625 18:27:05.429377 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:08.889358 systemd[1]: Started sshd@9-10.0.0.53:22-10.0.0.1:56886.service - OpenSSH per-connection server daemon (10.0.0.1:56886). Jun 25 18:27:08.925162 sshd[3952]: Accepted publickey for core from 10.0.0.1 port 56886 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:27:08.926293 sshd[3952]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:27:08.929678 systemd-logind[1415]: New session 10 of user core. Jun 25 18:27:08.941040 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 18:27:09.050815 sshd[3952]: pam_unix(sshd:session): session closed for user core Jun 25 18:27:09.065323 systemd[1]: sshd@9-10.0.0.53:22-10.0.0.1:56886.service: Deactivated successfully. Jun 25 18:27:09.067047 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 18:27:09.068377 systemd-logind[1415]: Session 10 logged out. Waiting for processes to exit. Jun 25 18:27:09.076134 systemd[1]: Started sshd@10-10.0.0.53:22-10.0.0.1:49846.service - OpenSSH per-connection server daemon (10.0.0.1:49846). Jun 25 18:27:09.077335 systemd-logind[1415]: Removed session 10. Jun 25 18:27:09.108765 sshd[3967]: Accepted publickey for core from 10.0.0.1 port 49846 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:27:09.109966 sshd[3967]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:27:09.113379 systemd-logind[1415]: New session 11 of user core. Jun 25 18:27:09.121108 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 18:27:09.268581 sshd[3967]: pam_unix(sshd:session): session closed for user core Jun 25 18:27:09.275436 systemd[1]: sshd@10-10.0.0.53:22-10.0.0.1:49846.service: Deactivated successfully. Jun 25 18:27:09.278715 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 18:27:09.281622 systemd-logind[1415]: Session 11 logged out. Waiting for processes to exit. Jun 25 18:27:09.289378 systemd[1]: Started sshd@11-10.0.0.53:22-10.0.0.1:49848.service - OpenSSH per-connection server daemon (10.0.0.1:49848). Jun 25 18:27:09.291448 systemd-logind[1415]: Removed session 11. Jun 25 18:27:09.325239 sshd[3979]: Accepted publickey for core from 10.0.0.1 port 49848 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:27:09.326440 sshd[3979]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:27:09.331000 systemd-logind[1415]: New session 12 of user core. Jun 25 18:27:09.341086 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jun 25 18:27:09.454171 sshd[3979]: pam_unix(sshd:session): session closed for user core Jun 25 18:27:09.457920 systemd[1]: sshd@11-10.0.0.53:22-10.0.0.1:49848.service: Deactivated successfully. Jun 25 18:27:09.460345 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 18:27:09.461431 systemd-logind[1415]: Session 12 logged out. Waiting for processes to exit. Jun 25 18:27:09.462678 systemd-logind[1415]: Removed session 12. Jun 25 18:27:14.465559 systemd[1]: Started sshd@12-10.0.0.53:22-10.0.0.1:49864.service - OpenSSH per-connection server daemon (10.0.0.1:49864). Jun 25 18:27:14.505542 sshd[3997]: Accepted publickey for core from 10.0.0.1 port 49864 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:27:14.506914 sshd[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:27:14.510529 systemd-logind[1415]: New session 13 of user core. Jun 25 18:27:14.522038 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 18:27:14.638686 sshd[3997]: pam_unix(sshd:session): session closed for user core Jun 25 18:27:14.641876 systemd[1]: sshd@12-10.0.0.53:22-10.0.0.1:49864.service: Deactivated successfully. Jun 25 18:27:14.643528 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 18:27:14.644528 systemd-logind[1415]: Session 13 logged out. Waiting for processes to exit. Jun 25 18:27:14.645319 systemd-logind[1415]: Removed session 13. Jun 25 18:27:19.649480 systemd[1]: Started sshd@13-10.0.0.53:22-10.0.0.1:38840.service - OpenSSH per-connection server daemon (10.0.0.1:38840). Jun 25 18:27:19.684601 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 38840 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:27:19.685805 sshd[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:27:19.689411 systemd-logind[1415]: New session 14 of user core. Jun 25 18:27:19.698048 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 18:27:19.806350 sshd[4012]: pam_unix(sshd:session): session closed for user core Jun 25 18:27:19.818241 systemd[1]: sshd@13-10.0.0.53:22-10.0.0.1:38840.service: Deactivated successfully. Jun 25 18:27:19.819606 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 18:27:19.821601 systemd-logind[1415]: Session 14 logged out. Waiting for processes to exit. Jun 25 18:27:19.822470 systemd[1]: Started sshd@14-10.0.0.53:22-10.0.0.1:38854.service - OpenSSH per-connection server daemon (10.0.0.1:38854). Jun 25 18:27:19.823635 systemd-logind[1415]: Removed session 14. Jun 25 18:27:19.858049 sshd[4026]: Accepted publickey for core from 10.0.0.1 port 38854 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:27:19.859302 sshd[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:27:19.863071 systemd-logind[1415]: New session 15 of user core. Jun 25 18:27:19.869033 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 18:27:20.065584 sshd[4026]: pam_unix(sshd:session): session closed for user core Jun 25 18:27:20.079493 systemd[1]: sshd@14-10.0.0.53:22-10.0.0.1:38854.service: Deactivated successfully. Jun 25 18:27:20.081374 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 18:27:20.084062 systemd-logind[1415]: Session 15 logged out. Waiting for processes to exit. Jun 25 18:27:20.085527 systemd[1]: Started sshd@15-10.0.0.53:22-10.0.0.1:38862.service - OpenSSH per-connection server daemon (10.0.0.1:38862). 
Jun 25 18:27:20.086651 systemd-logind[1415]: Removed session 15. Jun 25 18:27:20.124262 sshd[4038]: Accepted publickey for core from 10.0.0.1 port 38862 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:27:20.125527 sshd[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:27:20.129809 systemd-logind[1415]: New session 16 of user core. Jun 25 18:27:20.146057 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 18:27:21.363819 sshd[4038]: pam_unix(sshd:session): session closed for user core Jun 25 18:27:21.373726 systemd[1]: sshd@15-10.0.0.53:22-10.0.0.1:38862.service: Deactivated successfully. Jun 25 18:27:21.376471 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 18:27:21.378614 systemd-logind[1415]: Session 16 logged out. Waiting for processes to exit. Jun 25 18:27:21.386217 systemd[1]: Started sshd@16-10.0.0.53:22-10.0.0.1:38876.service - OpenSSH per-connection server daemon (10.0.0.1:38876). Jun 25 18:27:21.388138 systemd-logind[1415]: Removed session 16. Jun 25 18:27:21.418039 sshd[4059]: Accepted publickey for core from 10.0.0.1 port 38876 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:27:21.419302 sshd[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:27:21.422749 systemd-logind[1415]: New session 17 of user core. Jun 25 18:27:21.437106 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 18:27:21.672445 sshd[4059]: pam_unix(sshd:session): session closed for user core Jun 25 18:27:21.679332 systemd[1]: sshd@16-10.0.0.53:22-10.0.0.1:38876.service: Deactivated successfully. Jun 25 18:27:21.682608 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 18:27:21.683775 systemd-logind[1415]: Session 17 logged out. Waiting for processes to exit. Jun 25 18:27:21.690138 systemd[1]: Started sshd@17-10.0.0.53:22-10.0.0.1:38890.service - OpenSSH per-connection server daemon (10.0.0.1:38890). Jun 25 18:27:21.691022 systemd-logind[1415]: Removed session 17. Jun 25 18:27:21.722296 sshd[4072]: Accepted publickey for core from 10.0.0.1 port 38890 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:27:21.723479 sshd[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:27:21.727073 systemd-logind[1415]: New session 18 of user core. Jun 25 18:27:21.736049 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 18:27:21.844702 sshd[4072]: pam_unix(sshd:session): session closed for user core Jun 25 18:27:21.847909 systemd[1]: sshd@17-10.0.0.53:22-10.0.0.1:38890.service: Deactivated successfully. Jun 25 18:27:21.849520 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 18:27:21.850103 systemd-logind[1415]: Session 18 logged out. Waiting for processes to exit. Jun 25 18:27:21.850814 systemd-logind[1415]: Removed session 18. Jun 25 18:27:26.855514 systemd[1]: Started sshd@18-10.0.0.53:22-10.0.0.1:38894.service - OpenSSH per-connection server daemon (10.0.0.1:38894). Jun 25 18:27:26.891596 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 38894 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:27:26.892734 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:27:26.896451 systemd-logind[1415]: New session 19 of user core. Jun 25 18:27:26.904037 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jun 25 18:27:27.013715 sshd[4089]: pam_unix(sshd:session): session closed for user core Jun 25 18:27:27.016822 systemd[1]: sshd@18-10.0.0.53:22-10.0.0.1:38894.service: Deactivated successfully. Jun 25 18:27:27.018505 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 18:27:27.020144 systemd-logind[1415]: Session 19 logged out. Waiting for processes to exit. Jun 25 18:27:27.021144 systemd-logind[1415]: Removed session 19. Jun 25 18:27:32.024489 systemd[1]: Started sshd@19-10.0.0.53:22-10.0.0.1:58786.service - OpenSSH per-connection server daemon (10.0.0.1:58786). Jun 25 18:27:32.060436 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 58786 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:27:32.061805 sshd[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:27:32.065489 systemd-logind[1415]: New session 20 of user core. Jun 25 18:27:32.075049 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 18:27:32.180636 sshd[4105]: pam_unix(sshd:session): session closed for user core Jun 25 18:27:32.184000 systemd[1]: sshd@19-10.0.0.53:22-10.0.0.1:58786.service: Deactivated successfully. Jun 25 18:27:32.187349 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 18:27:32.188053 systemd-logind[1415]: Session 20 logged out. Waiting for processes to exit. Jun 25 18:27:32.188963 systemd-logind[1415]: Removed session 20. Jun 25 18:27:37.191666 systemd[1]: Started sshd@20-10.0.0.53:22-10.0.0.1:58792.service - OpenSSH per-connection server daemon (10.0.0.1:58792). Jun 25 18:27:37.227001 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 58792 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:27:37.228278 sshd[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:27:37.231929 systemd-logind[1415]: New session 21 of user core. Jun 25 18:27:37.237017 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 18:27:37.346803 sshd[4119]: pam_unix(sshd:session): session closed for user core Jun 25 18:27:37.353455 systemd[1]: sshd@20-10.0.0.53:22-10.0.0.1:58792.service: Deactivated successfully. Jun 25 18:27:37.354924 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 18:27:37.356947 systemd-logind[1415]: Session 21 logged out. Waiting for processes to exit. Jun 25 18:27:37.365145 systemd[1]: Started sshd@21-10.0.0.53:22-10.0.0.1:58800.service - OpenSSH per-connection server daemon (10.0.0.1:58800). Jun 25 18:27:37.366142 systemd-logind[1415]: Removed session 21. Jun 25 18:27:37.396676 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 58800 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:27:37.397849 sshd[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:27:37.401532 systemd-logind[1415]: New session 22 of user core. Jun 25 18:27:37.409025 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 18:27:39.610478 containerd[1436]: time="2024-06-25T18:27:39.610427742Z" level=info msg="StopContainer for \"71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e\" with timeout 30 (s)" Jun 25 18:27:39.611183 containerd[1436]: time="2024-06-25T18:27:39.611123261Z" level=info msg="Stop container \"71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e\" with signal terminated" Jun 25 18:27:39.628082 systemd[1]: cri-containerd-71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e.scope: Deactivated successfully. 
Jun 25 18:27:39.633923 containerd[1436]: time="2024-06-25T18:27:39.633868825Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:27:39.641367 containerd[1436]: time="2024-06-25T18:27:39.641321174Z" level=info msg="StopContainer for \"6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308\" with timeout 2 (s)" Jun 25 18:27:39.641622 containerd[1436]: time="2024-06-25T18:27:39.641594053Z" level=info msg="Stop container \"6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308\" with signal terminated" Jun 25 18:27:39.648167 systemd-networkd[1376]: lxc_health: Link DOWN Jun 25 18:27:39.648173 systemd-networkd[1376]: lxc_health: Lost carrier Jun 25 18:27:39.650643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e-rootfs.mount: Deactivated successfully. Jun 25 18:27:39.663043 containerd[1436]: time="2024-06-25T18:27:39.662978860Z" level=info msg="shim disconnected" id=71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e namespace=k8s.io Jun 25 18:27:39.663043 containerd[1436]: time="2024-06-25T18:27:39.663034260Z" level=warning msg="cleaning up after shim disconnected" id=71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e namespace=k8s.io Jun 25 18:27:39.663043 containerd[1436]: time="2024-06-25T18:27:39.663048780Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:27:39.668660 systemd[1]: cri-containerd-6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308.scope: Deactivated successfully. Jun 25 18:27:39.669011 systemd[1]: cri-containerd-6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308.scope: Consumed 6.410s CPU time. Jun 25 18:27:39.678445 containerd[1436]: time="2024-06-25T18:27:39.678317196Z" level=info msg="StopContainer for \"71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e\" returns successfully" Jun 25 18:27:39.678933 containerd[1436]: time="2024-06-25T18:27:39.678908635Z" level=info msg="StopPodSandbox for \"7d00b49bd6aba3d17a9ee3410645cc8d3edced4b915e72aa46a60dd683a6e452\"" Jun 25 18:27:39.678981 containerd[1436]: time="2024-06-25T18:27:39.678944835Z" level=info msg="Container to stop \"71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:27:39.681398 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d00b49bd6aba3d17a9ee3410645cc8d3edced4b915e72aa46a60dd683a6e452-shm.mount: Deactivated successfully. Jun 25 18:27:39.687619 systemd[1]: cri-containerd-7d00b49bd6aba3d17a9ee3410645cc8d3edced4b915e72aa46a60dd683a6e452.scope: Deactivated successfully. Jun 25 18:27:39.693302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308-rootfs.mount: Deactivated successfully. 
Jun 25 18:27:39.704398 containerd[1436]: time="2024-06-25T18:27:39.704331795Z" level=info msg="shim disconnected" id=6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308 namespace=k8s.io Jun 25 18:27:39.705266 containerd[1436]: time="2024-06-25T18:27:39.705152674Z" level=warning msg="cleaning up after shim disconnected" id=6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308 namespace=k8s.io Jun 25 18:27:39.705266 containerd[1436]: time="2024-06-25T18:27:39.705178234Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:27:39.707280 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d00b49bd6aba3d17a9ee3410645cc8d3edced4b915e72aa46a60dd683a6e452-rootfs.mount: Deactivated successfully. Jun 25 18:27:39.708936 containerd[1436]: time="2024-06-25T18:27:39.708890508Z" level=info msg="shim disconnected" id=7d00b49bd6aba3d17a9ee3410645cc8d3edced4b915e72aa46a60dd683a6e452 namespace=k8s.io Jun 25 18:27:39.708936 containerd[1436]: time="2024-06-25T18:27:39.708934188Z" level=warning msg="cleaning up after shim disconnected" id=7d00b49bd6aba3d17a9ee3410645cc8d3edced4b915e72aa46a60dd683a6e452 namespace=k8s.io Jun 25 18:27:39.709134 containerd[1436]: time="2024-06-25T18:27:39.708943548Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:27:39.719971 containerd[1436]: time="2024-06-25T18:27:39.719929131Z" level=info msg="TearDown network for sandbox \"7d00b49bd6aba3d17a9ee3410645cc8d3edced4b915e72aa46a60dd683a6e452\" successfully" Jun 25 18:27:39.719971 containerd[1436]: time="2024-06-25T18:27:39.719963891Z" level=info msg="StopPodSandbox for \"7d00b49bd6aba3d17a9ee3410645cc8d3edced4b915e72aa46a60dd683a6e452\" returns successfully" Jun 25 18:27:39.720635 containerd[1436]: time="2024-06-25T18:27:39.720607370Z" level=info msg="StopContainer for \"6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308\" returns successfully" Jun 25 18:27:39.721062 containerd[1436]: time="2024-06-25T18:27:39.720957849Z" level=info msg="StopPodSandbox for \"15db2f39f44cc36dcaaaaa8586e568c22fe5ee224396e27c0a062ac088d11891\"" Jun 25 18:27:39.721062 containerd[1436]: time="2024-06-25T18:27:39.720997529Z" level=info msg="Container to stop \"3216b2bbacb1ddf24333060c89a49604196c0b273d5c204d963faa106ad52258\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:27:39.721062 containerd[1436]: time="2024-06-25T18:27:39.721040329Z" level=info msg="Container to stop \"9d702066931379d839e73c02f7019d9adad5846744df510d155e6a7ab59a748e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:27:39.721062 containerd[1436]: time="2024-06-25T18:27:39.721049929Z" level=info msg="Container to stop \"d1d8454302fe31c87fbdc68304b7fa1dda88d01cf229bd43ebec1fbb86e303bc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:27:39.721062 containerd[1436]: time="2024-06-25T18:27:39.721059449Z" level=info msg="Container to stop \"8d6939ed8635c04c5d8c26b2ac4d4006d990456ea9e9fe9a0fc0302af678bead\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:27:39.721428 containerd[1436]: time="2024-06-25T18:27:39.721068689Z" level=info msg="Container to stop \"6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:27:39.727929 systemd[1]: cri-containerd-15db2f39f44cc36dcaaaaa8586e568c22fe5ee224396e27c0a062ac088d11891.scope: Deactivated successfully. 
Jun 25 18:27:39.752241 containerd[1436]: time="2024-06-25T18:27:39.752087441Z" level=info msg="shim disconnected" id=15db2f39f44cc36dcaaaaa8586e568c22fe5ee224396e27c0a062ac088d11891 namespace=k8s.io Jun 25 18:27:39.752241 containerd[1436]: time="2024-06-25T18:27:39.752147801Z" level=warning msg="cleaning up after shim disconnected" id=15db2f39f44cc36dcaaaaa8586e568c22fe5ee224396e27c0a062ac088d11891 namespace=k8s.io Jun 25 18:27:39.752241 containerd[1436]: time="2024-06-25T18:27:39.752247200Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:27:39.768071 containerd[1436]: time="2024-06-25T18:27:39.766552258Z" level=info msg="TearDown network for sandbox \"15db2f39f44cc36dcaaaaa8586e568c22fe5ee224396e27c0a062ac088d11891\" successfully" Jun 25 18:27:39.768071 containerd[1436]: time="2024-06-25T18:27:39.766588778Z" level=info msg="StopPodSandbox for \"15db2f39f44cc36dcaaaaa8586e568c22fe5ee224396e27c0a062ac088d11891\" returns successfully" Jun 25 18:27:39.840353 kubelet[2511]: I0625 18:27:39.839999 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1-cilium-config-path\") pod \"f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1\" (UID: \"f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1\") " Jun 25 18:27:39.840353 kubelet[2511]: I0625 18:27:39.840066 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgn2b\" (UniqueName: \"kubernetes.io/projected/f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1-kube-api-access-lgn2b\") pod \"f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1\" (UID: \"f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1\") " Jun 25 18:27:39.841958 kubelet[2511]: I0625 18:27:39.841850 2511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1" (UID: "f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:27:39.843841 kubelet[2511]: I0625 18:27:39.843794 2511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1-kube-api-access-lgn2b" (OuterVolumeSpecName: "kube-api-access-lgn2b") pod "f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1" (UID: "f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1"). InnerVolumeSpecName "kube-api-access-lgn2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:27:39.942992 kubelet[2511]: I0625 18:27:39.941098 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-cni-path\") pod \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " Jun 25 18:27:39.942992 kubelet[2511]: I0625 18:27:39.941153 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krj6p\" (UniqueName: \"kubernetes.io/projected/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-kube-api-access-krj6p\") pod \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " Jun 25 18:27:39.942992 kubelet[2511]: I0625 18:27:39.941175 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-cilium-run\") pod \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " Jun 25 18:27:39.942992 kubelet[2511]: I0625 18:27:39.941192 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-host-proc-sys-net\") pod \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " Jun 25 18:27:39.942992 kubelet[2511]: I0625 18:27:39.941210 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-etc-cni-netd\") pod \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " Jun 25 18:27:39.942992 kubelet[2511]: I0625 18:27:39.941230 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-cilium-config-path\") pod \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " Jun 25 18:27:39.943220 kubelet[2511]: I0625 18:27:39.941246 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-bpf-maps\") pod \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " Jun 25 18:27:39.943220 kubelet[2511]: I0625 18:27:39.941262 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-hostproc\") pod \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " Jun 25 18:27:39.943220 kubelet[2511]: I0625 18:27:39.941279 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-hubble-tls\") pod \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " Jun 25 18:27:39.943220 kubelet[2511]: I0625 18:27:39.941297 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-lib-modules\") pod \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " Jun 25 18:27:39.943220 
kubelet[2511]: I0625 18:27:39.941314 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-cilium-cgroup\") pod \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " Jun 25 18:27:39.943220 kubelet[2511]: I0625 18:27:39.941344 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-clustermesh-secrets\") pod \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " Jun 25 18:27:39.943385 kubelet[2511]: I0625 18:27:39.941364 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-host-proc-sys-kernel\") pod \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " Jun 25 18:27:39.943385 kubelet[2511]: I0625 18:27:39.941384 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-xtables-lock\") pod \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\" (UID: \"b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c\") " Jun 25 18:27:39.943385 kubelet[2511]: I0625 18:27:39.941419 2511 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jun 25 18:27:39.943385 kubelet[2511]: I0625 18:27:39.941433 2511 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lgn2b\" (UniqueName: \"kubernetes.io/projected/f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1-kube-api-access-lgn2b\") on node \"localhost\" DevicePath \"\"" Jun 25 18:27:39.943385 kubelet[2511]: I0625 18:27:39.941479 2511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c" (UID: "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:27:39.943385 kubelet[2511]: I0625 18:27:39.941509 2511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-cni-path" (OuterVolumeSpecName: "cni-path") pod "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c" (UID: "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:27:39.943535 kubelet[2511]: I0625 18:27:39.941782 2511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-hostproc" (OuterVolumeSpecName: "hostproc") pod "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c" (UID: "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:27:39.943535 kubelet[2511]: I0625 18:27:39.941831 2511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c" (UID: "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:27:39.943535 kubelet[2511]: I0625 18:27:39.941847 2511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c" (UID: "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:27:39.943535 kubelet[2511]: I0625 18:27:39.941864 2511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c" (UID: "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:27:39.943809 kubelet[2511]: I0625 18:27:39.943756 2511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c" (UID: "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:27:39.943809 kubelet[2511]: I0625 18:27:39.943758 2511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c" (UID: "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:27:39.943968 kubelet[2511]: I0625 18:27:39.943933 2511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-kube-api-access-krj6p" (OuterVolumeSpecName: "kube-api-access-krj6p") pod "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c" (UID: "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c"). InnerVolumeSpecName "kube-api-access-krj6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:27:39.944010 kubelet[2511]: I0625 18:27:39.943980 2511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c" (UID: "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:27:39.944010 kubelet[2511]: I0625 18:27:39.944001 2511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c" (UID: "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:27:39.944140 kubelet[2511]: I0625 18:27:39.944046 2511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c" (UID: "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:27:39.945965 kubelet[2511]: I0625 18:27:39.945930 2511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c" (UID: "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 18:27:39.946308 kubelet[2511]: I0625 18:27:39.946273 2511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c" (UID: "b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:27:40.042140 kubelet[2511]: I0625 18:27:40.042076 2511 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jun 25 18:27:40.042140 kubelet[2511]: I0625 18:27:40.042111 2511 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-lib-modules\") on node \"localhost\" DevicePath \"\"" Jun 25 18:27:40.042140 kubelet[2511]: I0625 18:27:40.042122 2511 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jun 25 18:27:40.042140 kubelet[2511]: I0625 18:27:40.042134 2511 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jun 25 18:27:40.042140 kubelet[2511]: I0625 18:27:40.042146 2511 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jun 25 18:27:40.042140 kubelet[2511]: I0625 18:27:40.042157 2511 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jun 25 18:27:40.042400 kubelet[2511]: I0625 18:27:40.042168 2511 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-cni-path\") on node \"localhost\" DevicePath \"\"" Jun 25 18:27:40.042400 kubelet[2511]: I0625 18:27:40.042179 2511 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-krj6p\" (UniqueName: \"kubernetes.io/projected/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-kube-api-access-krj6p\") on node \"localhost\" DevicePath \"\"" Jun 
25 18:27:40.042400 kubelet[2511]: I0625 18:27:40.042188 2511 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-cilium-run\") on node \"localhost\" DevicePath \"\"" Jun 25 18:27:40.042400 kubelet[2511]: I0625 18:27:40.042197 2511 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jun 25 18:27:40.042400 kubelet[2511]: I0625 18:27:40.042209 2511 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jun 25 18:27:40.042400 kubelet[2511]: I0625 18:27:40.042218 2511 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jun 25 18:27:40.042400 kubelet[2511]: I0625 18:27:40.042227 2511 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jun 25 18:27:40.042400 kubelet[2511]: I0625 18:27:40.042236 2511 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c-hostproc\") on node \"localhost\" DevicePath \"\"" Jun 25 18:27:40.320495 kubelet[2511]: E0625 18:27:40.320465 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:40.499794 kubelet[2511]: I0625 18:27:40.499439 2511 scope.go:117] "RemoveContainer" containerID="71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e" Jun 25 18:27:40.500792 containerd[1436]: time="2024-06-25T18:27:40.500715873Z" level=info msg="RemoveContainer for \"71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e\"" Jun 25 18:27:40.505429 systemd[1]: Removed slice kubepods-besteffort-podf14a62a6_fd3c_4f8c_bf44_8ee7fbce0ba1.slice - libcontainer container kubepods-besteffort-podf14a62a6_fd3c_4f8c_bf44_8ee7fbce0ba1.slice. Jun 25 18:27:40.508720 systemd[1]: Removed slice kubepods-burstable-podb4f3bdf3_2bbf_4818_9bb3_cec0b3d70b3c.slice - libcontainer container kubepods-burstable-podb4f3bdf3_2bbf_4818_9bb3_cec0b3d70b3c.slice. Jun 25 18:27:40.508796 systemd[1]: kubepods-burstable-podb4f3bdf3_2bbf_4818_9bb3_cec0b3d70b3c.slice: Consumed 6.577s CPU time. 
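Aside: between the "UnmountVolume.TearDown succeeded" and "Volume detached" entries above and the "Cleaned up orphaned pod volumes dir" messages logged at 18:27:41 further down, the kubelet checks that nothing remains under the pod's volumes directory before removing it. The stdlib-only sketch below shows that kind of check; the /var/lib/kubelet/pods/<podUID>/volumes/<plugin>/<volume> layout is the conventional one and the helper is hypothetical, not kubelet code.

```go
// Rough sketch: list what is still present under a pod's volumes directory,
// the tree the kubelet inspects before it can report the volumes dir as
// cleaned up. Not kubelet source code.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func remainingVolumes(podUID string) ([]string, error) {
	root := filepath.Join("/var/lib/kubelet/pods", podUID, "volumes")
	plugins, err := os.ReadDir(root)
	if err != nil {
		if os.IsNotExist(err) {
			return nil, nil // directory already removed: nothing orphaned
		}
		return nil, err
	}
	var left []string
	for _, plugin := range plugins {
		if !plugin.IsDir() {
			continue
		}
		vols, err := os.ReadDir(filepath.Join(root, plugin.Name()))
		if err != nil {
			return nil, err
		}
		for _, v := range vols {
			left = append(left, filepath.Join(plugin.Name(), v.Name()))
		}
	}
	return left, nil
}

func main() {
	left, err := remainingVolumes("b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c") // pod UID from the log above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, v := range left {
		fmt.Println("still present:", v)
	}
}
```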
Jun 25 18:27:40.511466 containerd[1436]: time="2024-06-25T18:27:40.511429585Z" level=info msg="RemoveContainer for \"71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e\" returns successfully" Jun 25 18:27:40.511843 kubelet[2511]: I0625 18:27:40.511821 2511 scope.go:117] "RemoveContainer" containerID="71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e" Jun 25 18:27:40.515935 containerd[1436]: time="2024-06-25T18:27:40.512199304Z" level=error msg="ContainerStatus for \"71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e\": not found" Jun 25 18:27:40.534246 kubelet[2511]: E0625 18:27:40.534211 2511 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e\": not found" containerID="71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e" Jun 25 18:27:40.534341 kubelet[2511]: I0625 18:27:40.534320 2511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e"} err="failed to get container status \"71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e\": rpc error: code = NotFound desc = an error occurred when try to find container \"71a7052c621d1b4cab9e59dba2e52514c619da5c1f1f29152103271c4004b88e\": not found" Jun 25 18:27:40.534404 kubelet[2511]: I0625 18:27:40.534346 2511 scope.go:117] "RemoveContainer" containerID="6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308" Jun 25 18:27:40.535468 containerd[1436]: time="2024-06-25T18:27:40.535430767Z" level=info msg="RemoveContainer for \"6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308\"" Jun 25 18:27:40.537962 containerd[1436]: time="2024-06-25T18:27:40.537926805Z" level=info msg="RemoveContainer for \"6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308\" returns successfully" Jun 25 18:27:40.538116 kubelet[2511]: I0625 18:27:40.538081 2511 scope.go:117] "RemoveContainer" containerID="8d6939ed8635c04c5d8c26b2ac4d4006d990456ea9e9fe9a0fc0302af678bead" Jun 25 18:27:40.539260 containerd[1436]: time="2024-06-25T18:27:40.539230884Z" level=info msg="RemoveContainer for \"8d6939ed8635c04c5d8c26b2ac4d4006d990456ea9e9fe9a0fc0302af678bead\"" Jun 25 18:27:40.541403 containerd[1436]: time="2024-06-25T18:27:40.541367042Z" level=info msg="RemoveContainer for \"8d6939ed8635c04c5d8c26b2ac4d4006d990456ea9e9fe9a0fc0302af678bead\" returns successfully" Jun 25 18:27:40.541546 kubelet[2511]: I0625 18:27:40.541516 2511 scope.go:117] "RemoveContainer" containerID="d1d8454302fe31c87fbdc68304b7fa1dda88d01cf229bd43ebec1fbb86e303bc" Jun 25 18:27:40.542484 containerd[1436]: time="2024-06-25T18:27:40.542452362Z" level=info msg="RemoveContainer for \"d1d8454302fe31c87fbdc68304b7fa1dda88d01cf229bd43ebec1fbb86e303bc\"" Jun 25 18:27:40.544737 containerd[1436]: time="2024-06-25T18:27:40.544689880Z" level=info msg="RemoveContainer for \"d1d8454302fe31c87fbdc68304b7fa1dda88d01cf229bd43ebec1fbb86e303bc\" returns successfully" Jun 25 18:27:40.544856 kubelet[2511]: I0625 18:27:40.544826 2511 scope.go:117] "RemoveContainer" containerID="9d702066931379d839e73c02f7019d9adad5846744df510d155e6a7ab59a748e" Jun 25 18:27:40.545899 containerd[1436]: 
time="2024-06-25T18:27:40.545778519Z" level=info msg="RemoveContainer for \"9d702066931379d839e73c02f7019d9adad5846744df510d155e6a7ab59a748e\"" Jun 25 18:27:40.548123 containerd[1436]: time="2024-06-25T18:27:40.548090357Z" level=info msg="RemoveContainer for \"9d702066931379d839e73c02f7019d9adad5846744df510d155e6a7ab59a748e\" returns successfully" Jun 25 18:27:40.548252 kubelet[2511]: I0625 18:27:40.548234 2511 scope.go:117] "RemoveContainer" containerID="3216b2bbacb1ddf24333060c89a49604196c0b273d5c204d963faa106ad52258" Jun 25 18:27:40.549295 containerd[1436]: time="2024-06-25T18:27:40.549052557Z" level=info msg="RemoveContainer for \"3216b2bbacb1ddf24333060c89a49604196c0b273d5c204d963faa106ad52258\"" Jun 25 18:27:40.551256 containerd[1436]: time="2024-06-25T18:27:40.551228315Z" level=info msg="RemoveContainer for \"3216b2bbacb1ddf24333060c89a49604196c0b273d5c204d963faa106ad52258\" returns successfully" Jun 25 18:27:40.551512 kubelet[2511]: I0625 18:27:40.551488 2511 scope.go:117] "RemoveContainer" containerID="6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308" Jun 25 18:27:40.551730 containerd[1436]: time="2024-06-25T18:27:40.551671035Z" level=error msg="ContainerStatus for \"6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308\": not found" Jun 25 18:27:40.551800 kubelet[2511]: E0625 18:27:40.551779 2511 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308\": not found" containerID="6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308" Jun 25 18:27:40.551839 kubelet[2511]: I0625 18:27:40.551820 2511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308"} err="failed to get container status \"6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308\": rpc error: code = NotFound desc = an error occurred when try to find container \"6639fd7ba33b6dc95b1887bd40f1ad430d726875de0c48c671347b6b29d22308\": not found" Jun 25 18:27:40.551839 kubelet[2511]: I0625 18:27:40.551832 2511 scope.go:117] "RemoveContainer" containerID="8d6939ed8635c04c5d8c26b2ac4d4006d990456ea9e9fe9a0fc0302af678bead" Jun 25 18:27:40.552021 containerd[1436]: time="2024-06-25T18:27:40.551987434Z" level=error msg="ContainerStatus for \"8d6939ed8635c04c5d8c26b2ac4d4006d990456ea9e9fe9a0fc0302af678bead\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d6939ed8635c04c5d8c26b2ac4d4006d990456ea9e9fe9a0fc0302af678bead\": not found" Jun 25 18:27:40.552140 kubelet[2511]: E0625 18:27:40.552125 2511 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d6939ed8635c04c5d8c26b2ac4d4006d990456ea9e9fe9a0fc0302af678bead\": not found" containerID="8d6939ed8635c04c5d8c26b2ac4d4006d990456ea9e9fe9a0fc0302af678bead" Jun 25 18:27:40.552175 kubelet[2511]: I0625 18:27:40.552160 2511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d6939ed8635c04c5d8c26b2ac4d4006d990456ea9e9fe9a0fc0302af678bead"} err="failed to get container status 
\"8d6939ed8635c04c5d8c26b2ac4d4006d990456ea9e9fe9a0fc0302af678bead\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d6939ed8635c04c5d8c26b2ac4d4006d990456ea9e9fe9a0fc0302af678bead\": not found" Jun 25 18:27:40.552175 kubelet[2511]: I0625 18:27:40.552171 2511 scope.go:117] "RemoveContainer" containerID="d1d8454302fe31c87fbdc68304b7fa1dda88d01cf229bd43ebec1fbb86e303bc" Jun 25 18:27:40.552358 containerd[1436]: time="2024-06-25T18:27:40.552320714Z" level=error msg="ContainerStatus for \"d1d8454302fe31c87fbdc68304b7fa1dda88d01cf229bd43ebec1fbb86e303bc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d1d8454302fe31c87fbdc68304b7fa1dda88d01cf229bd43ebec1fbb86e303bc\": not found" Jun 25 18:27:40.552516 kubelet[2511]: E0625 18:27:40.552497 2511 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d1d8454302fe31c87fbdc68304b7fa1dda88d01cf229bd43ebec1fbb86e303bc\": not found" containerID="d1d8454302fe31c87fbdc68304b7fa1dda88d01cf229bd43ebec1fbb86e303bc" Jun 25 18:27:40.552566 kubelet[2511]: I0625 18:27:40.552530 2511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d1d8454302fe31c87fbdc68304b7fa1dda88d01cf229bd43ebec1fbb86e303bc"} err="failed to get container status \"d1d8454302fe31c87fbdc68304b7fa1dda88d01cf229bd43ebec1fbb86e303bc\": rpc error: code = NotFound desc = an error occurred when try to find container \"d1d8454302fe31c87fbdc68304b7fa1dda88d01cf229bd43ebec1fbb86e303bc\": not found" Jun 25 18:27:40.552566 kubelet[2511]: I0625 18:27:40.552543 2511 scope.go:117] "RemoveContainer" containerID="9d702066931379d839e73c02f7019d9adad5846744df510d155e6a7ab59a748e" Jun 25 18:27:40.552763 containerd[1436]: time="2024-06-25T18:27:40.552705954Z" level=error msg="ContainerStatus for \"9d702066931379d839e73c02f7019d9adad5846744df510d155e6a7ab59a748e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d702066931379d839e73c02f7019d9adad5846744df510d155e6a7ab59a748e\": not found" Jun 25 18:27:40.552815 kubelet[2511]: E0625 18:27:40.552803 2511 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d702066931379d839e73c02f7019d9adad5846744df510d155e6a7ab59a748e\": not found" containerID="9d702066931379d839e73c02f7019d9adad5846744df510d155e6a7ab59a748e" Jun 25 18:27:40.552862 kubelet[2511]: I0625 18:27:40.552848 2511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9d702066931379d839e73c02f7019d9adad5846744df510d155e6a7ab59a748e"} err="failed to get container status \"9d702066931379d839e73c02f7019d9adad5846744df510d155e6a7ab59a748e\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d702066931379d839e73c02f7019d9adad5846744df510d155e6a7ab59a748e\": not found" Jun 25 18:27:40.552862 kubelet[2511]: I0625 18:27:40.552860 2511 scope.go:117] "RemoveContainer" containerID="3216b2bbacb1ddf24333060c89a49604196c0b273d5c204d963faa106ad52258" Jun 25 18:27:40.553028 containerd[1436]: time="2024-06-25T18:27:40.552997554Z" level=error msg="ContainerStatus for \"3216b2bbacb1ddf24333060c89a49604196c0b273d5c204d963faa106ad52258\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"3216b2bbacb1ddf24333060c89a49604196c0b273d5c204d963faa106ad52258\": not found" Jun 25 18:27:40.553168 kubelet[2511]: E0625 18:27:40.553150 2511 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3216b2bbacb1ddf24333060c89a49604196c0b273d5c204d963faa106ad52258\": not found" containerID="3216b2bbacb1ddf24333060c89a49604196c0b273d5c204d963faa106ad52258" Jun 25 18:27:40.553209 kubelet[2511]: I0625 18:27:40.553181 2511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3216b2bbacb1ddf24333060c89a49604196c0b273d5c204d963faa106ad52258"} err="failed to get container status \"3216b2bbacb1ddf24333060c89a49604196c0b273d5c204d963faa106ad52258\": rpc error: code = NotFound desc = an error occurred when try to find container \"3216b2bbacb1ddf24333060c89a49604196c0b273d5c204d963faa106ad52258\": not found" Jun 25 18:27:40.618626 systemd[1]: var-lib-kubelet-pods-f14a62a6\x2dfd3c\x2d4f8c\x2dbf44\x2d8ee7fbce0ba1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlgn2b.mount: Deactivated successfully. Jun 25 18:27:40.618729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15db2f39f44cc36dcaaaaa8586e568c22fe5ee224396e27c0a062ac088d11891-rootfs.mount: Deactivated successfully. Jun 25 18:27:40.618785 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-15db2f39f44cc36dcaaaaa8586e568c22fe5ee224396e27c0a062ac088d11891-shm.mount: Deactivated successfully. Jun 25 18:27:40.618844 systemd[1]: var-lib-kubelet-pods-b4f3bdf3\x2d2bbf\x2d4818\x2d9bb3\x2dcec0b3d70b3c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkrj6p.mount: Deactivated successfully. Jun 25 18:27:40.618917 systemd[1]: var-lib-kubelet-pods-b4f3bdf3\x2d2bbf\x2d4818\x2d9bb3\x2dcec0b3d70b3c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 25 18:27:40.618969 systemd[1]: var-lib-kubelet-pods-b4f3bdf3\x2d2bbf\x2d4818\x2d9bb3\x2dcec0b3d70b3c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 25 18:27:41.322948 kubelet[2511]: I0625 18:27:41.322912 2511 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c" path="/var/lib/kubelet/pods/b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c/volumes" Jun 25 18:27:41.323539 kubelet[2511]: I0625 18:27:41.323501 2511 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1" path="/var/lib/kubelet/pods/f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1/volumes" Jun 25 18:27:41.573612 sshd[4133]: pam_unix(sshd:session): session closed for user core Jun 25 18:27:41.593810 systemd[1]: sshd@21-10.0.0.53:22-10.0.0.1:58800.service: Deactivated successfully. Jun 25 18:27:41.596454 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 18:27:41.596669 systemd[1]: session-22.scope: Consumed 1.537s CPU time. Jun 25 18:27:41.597285 systemd-logind[1415]: Session 22 logged out. Waiting for processes to exit. Jun 25 18:27:41.605180 systemd[1]: Started sshd@22-10.0.0.53:22-10.0.0.1:53072.service - OpenSSH per-connection server daemon (10.0.0.1:53072). Jun 25 18:27:41.607095 systemd-logind[1415]: Removed session 22. 
Jun 25 18:27:41.651780 sshd[4293]: Accepted publickey for core from 10.0.0.1 port 53072 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:27:41.653120 sshd[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:27:41.657501 systemd-logind[1415]: New session 23 of user core. Jun 25 18:27:41.669064 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 18:27:42.384044 kubelet[2511]: E0625 18:27:42.384015 2511 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 25 18:27:43.045361 sshd[4293]: pam_unix(sshd:session): session closed for user core Jun 25 18:27:43.056634 kubelet[2511]: I0625 18:27:43.056542 2511 topology_manager.go:215] "Topology Admit Handler" podUID="f110ba9e-bd46-4028-bdd4-35cbbd37d921" podNamespace="kube-system" podName="cilium-49t95" Jun 25 18:27:43.056634 kubelet[2511]: E0625 18:27:43.056603 2511 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c" containerName="mount-cgroup" Jun 25 18:27:43.056634 kubelet[2511]: E0625 18:27:43.056614 2511 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c" containerName="apply-sysctl-overwrites" Jun 25 18:27:43.056634 kubelet[2511]: E0625 18:27:43.056620 2511 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1" containerName="cilium-operator" Jun 25 18:27:43.057502 kubelet[2511]: E0625 18:27:43.056831 2511 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c" containerName="mount-bpf-fs" Jun 25 18:27:43.057502 kubelet[2511]: E0625 18:27:43.056847 2511 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c" containerName="clean-cilium-state" Jun 25 18:27:43.057502 kubelet[2511]: E0625 18:27:43.056855 2511 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c" containerName="cilium-agent" Jun 25 18:27:43.057502 kubelet[2511]: I0625 18:27:43.056895 2511 memory_manager.go:354] "RemoveStaleState removing state" podUID="f14a62a6-fd3c-4f8c-bf44-8ee7fbce0ba1" containerName="cilium-operator" Jun 25 18:27:43.057502 kubelet[2511]: I0625 18:27:43.056904 2511 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4f3bdf3-2bbf-4818-9bb3-cec0b3d70b3c" containerName="cilium-agent" Jun 25 18:27:43.060149 systemd[1]: sshd@22-10.0.0.53:22-10.0.0.1:53072.service: Deactivated successfully. Jun 25 18:27:43.064294 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 18:27:43.064461 systemd[1]: session-23.scope: Consumed 1.286s CPU time. Jun 25 18:27:43.066048 systemd-logind[1415]: Session 23 logged out. Waiting for processes to exit. Jun 25 18:27:43.079988 systemd[1]: Started sshd@23-10.0.0.53:22-10.0.0.1:53074.service - OpenSSH per-connection server daemon (10.0.0.1:53074). Jun 25 18:27:43.082432 systemd-logind[1415]: Removed session 23. Jun 25 18:27:43.090207 systemd[1]: Created slice kubepods-burstable-podf110ba9e_bd46_4028_bdd4_35cbbd37d921.slice - libcontainer container kubepods-burstable-podf110ba9e_bd46_4028_bdd4_35cbbd37d921.slice. 
Jun 25 18:27:43.114085 sshd[4308]: Accepted publickey for core from 10.0.0.1 port 53074 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:27:43.116195 sshd[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:27:43.119964 systemd-logind[1415]: New session 24 of user core. Jun 25 18:27:43.128029 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 25 18:27:43.162011 kubelet[2511]: I0625 18:27:43.161978 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6jqm\" (UniqueName: \"kubernetes.io/projected/f110ba9e-bd46-4028-bdd4-35cbbd37d921-kube-api-access-d6jqm\") pod \"cilium-49t95\" (UID: \"f110ba9e-bd46-4028-bdd4-35cbbd37d921\") " pod="kube-system/cilium-49t95" Jun 25 18:27:43.162207 kubelet[2511]: I0625 18:27:43.162026 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f110ba9e-bd46-4028-bdd4-35cbbd37d921-clustermesh-secrets\") pod \"cilium-49t95\" (UID: \"f110ba9e-bd46-4028-bdd4-35cbbd37d921\") " pod="kube-system/cilium-49t95" Jun 25 18:27:43.162207 kubelet[2511]: I0625 18:27:43.162049 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f110ba9e-bd46-4028-bdd4-35cbbd37d921-host-proc-sys-kernel\") pod \"cilium-49t95\" (UID: \"f110ba9e-bd46-4028-bdd4-35cbbd37d921\") " pod="kube-system/cilium-49t95" Jun 25 18:27:43.162207 kubelet[2511]: I0625 18:27:43.162069 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f110ba9e-bd46-4028-bdd4-35cbbd37d921-cilium-run\") pod \"cilium-49t95\" (UID: \"f110ba9e-bd46-4028-bdd4-35cbbd37d921\") " pod="kube-system/cilium-49t95" Jun 25 18:27:43.162207 kubelet[2511]: I0625 18:27:43.162090 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f110ba9e-bd46-4028-bdd4-35cbbd37d921-host-proc-sys-net\") pod \"cilium-49t95\" (UID: \"f110ba9e-bd46-4028-bdd4-35cbbd37d921\") " pod="kube-system/cilium-49t95" Jun 25 18:27:43.162207 kubelet[2511]: I0625 18:27:43.162110 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f110ba9e-bd46-4028-bdd4-35cbbd37d921-etc-cni-netd\") pod \"cilium-49t95\" (UID: \"f110ba9e-bd46-4028-bdd4-35cbbd37d921\") " pod="kube-system/cilium-49t95" Jun 25 18:27:43.162207 kubelet[2511]: I0625 18:27:43.162128 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f110ba9e-bd46-4028-bdd4-35cbbd37d921-cni-path\") pod \"cilium-49t95\" (UID: \"f110ba9e-bd46-4028-bdd4-35cbbd37d921\") " pod="kube-system/cilium-49t95" Jun 25 18:27:43.162367 kubelet[2511]: I0625 18:27:43.162148 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f110ba9e-bd46-4028-bdd4-35cbbd37d921-lib-modules\") pod \"cilium-49t95\" (UID: \"f110ba9e-bd46-4028-bdd4-35cbbd37d921\") " pod="kube-system/cilium-49t95" Jun 25 18:27:43.162367 kubelet[2511]: I0625 18:27:43.162168 2511 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f110ba9e-bd46-4028-bdd4-35cbbd37d921-xtables-lock\") pod \"cilium-49t95\" (UID: \"f110ba9e-bd46-4028-bdd4-35cbbd37d921\") " pod="kube-system/cilium-49t95" Jun 25 18:27:43.162367 kubelet[2511]: I0625 18:27:43.162185 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f110ba9e-bd46-4028-bdd4-35cbbd37d921-hubble-tls\") pod \"cilium-49t95\" (UID: \"f110ba9e-bd46-4028-bdd4-35cbbd37d921\") " pod="kube-system/cilium-49t95" Jun 25 18:27:43.162367 kubelet[2511]: I0625 18:27:43.162204 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f110ba9e-bd46-4028-bdd4-35cbbd37d921-bpf-maps\") pod \"cilium-49t95\" (UID: \"f110ba9e-bd46-4028-bdd4-35cbbd37d921\") " pod="kube-system/cilium-49t95" Jun 25 18:27:43.162367 kubelet[2511]: I0625 18:27:43.162222 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f110ba9e-bd46-4028-bdd4-35cbbd37d921-hostproc\") pod \"cilium-49t95\" (UID: \"f110ba9e-bd46-4028-bdd4-35cbbd37d921\") " pod="kube-system/cilium-49t95" Jun 25 18:27:43.162367 kubelet[2511]: I0625 18:27:43.162243 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f110ba9e-bd46-4028-bdd4-35cbbd37d921-cilium-cgroup\") pod \"cilium-49t95\" (UID: \"f110ba9e-bd46-4028-bdd4-35cbbd37d921\") " pod="kube-system/cilium-49t95" Jun 25 18:27:43.162488 kubelet[2511]: I0625 18:27:43.162262 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f110ba9e-bd46-4028-bdd4-35cbbd37d921-cilium-config-path\") pod \"cilium-49t95\" (UID: \"f110ba9e-bd46-4028-bdd4-35cbbd37d921\") " pod="kube-system/cilium-49t95" Jun 25 18:27:43.162488 kubelet[2511]: I0625 18:27:43.162280 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f110ba9e-bd46-4028-bdd4-35cbbd37d921-cilium-ipsec-secrets\") pod \"cilium-49t95\" (UID: \"f110ba9e-bd46-4028-bdd4-35cbbd37d921\") " pod="kube-system/cilium-49t95" Jun 25 18:27:43.178671 sshd[4308]: pam_unix(sshd:session): session closed for user core Jun 25 18:27:43.188668 systemd[1]: sshd@23-10.0.0.53:22-10.0.0.1:53074.service: Deactivated successfully. Jun 25 18:27:43.191380 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 18:27:43.192544 systemd-logind[1415]: Session 24 logged out. Waiting for processes to exit. Jun 25 18:27:43.193708 systemd[1]: Started sshd@24-10.0.0.53:22-10.0.0.1:53076.service - OpenSSH per-connection server daemon (10.0.0.1:53076). Jun 25 18:27:43.194674 systemd-logind[1415]: Removed session 24. Jun 25 18:27:43.229767 sshd[4316]: Accepted publickey for core from 10.0.0.1 port 53076 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:27:43.230977 sshd[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:27:43.235168 systemd-logind[1415]: New session 25 of user core. Jun 25 18:27:43.251063 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jun 25 18:27:43.394916 kubelet[2511]: E0625 18:27:43.394545 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:43.396138 containerd[1436]: time="2024-06-25T18:27:43.395103899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-49t95,Uid:f110ba9e-bd46-4028-bdd4-35cbbd37d921,Namespace:kube-system,Attempt:0,}" Jun 25 18:27:43.413356 containerd[1436]: time="2024-06-25T18:27:43.413114126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:27:43.413356 containerd[1436]: time="2024-06-25T18:27:43.413181046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:27:43.413356 containerd[1436]: time="2024-06-25T18:27:43.413194846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:27:43.413356 containerd[1436]: time="2024-06-25T18:27:43.413204926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:27:43.431075 systemd[1]: Started cri-containerd-602fadbd44d6c629ad373c2d564c6d7993ba0dae80f99053632f632e13242701.scope - libcontainer container 602fadbd44d6c629ad373c2d564c6d7993ba0dae80f99053632f632e13242701. Jun 25 18:27:43.450401 containerd[1436]: time="2024-06-25T18:27:43.450343583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-49t95,Uid:f110ba9e-bd46-4028-bdd4-35cbbd37d921,Namespace:kube-system,Attempt:0,} returns sandbox id \"602fadbd44d6c629ad373c2d564c6d7993ba0dae80f99053632f632e13242701\"" Jun 25 18:27:43.451728 kubelet[2511]: E0625 18:27:43.451227 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:43.454415 containerd[1436]: time="2024-06-25T18:27:43.454341989Z" level=info msg="CreateContainer within sandbox \"602fadbd44d6c629ad373c2d564c6d7993ba0dae80f99053632f632e13242701\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 25 18:27:43.465546 containerd[1436]: time="2024-06-25T18:27:43.465501726Z" level=info msg="CreateContainer within sandbox \"602fadbd44d6c629ad373c2d564c6d7993ba0dae80f99053632f632e13242701\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"06a56e58aeeb70feab5171c6adb342076c52ec0e60cfdd51bdcffa919d1af457\"" Jun 25 18:27:43.466181 containerd[1436]: time="2024-06-25T18:27:43.466161927Z" level=info msg="StartContainer for \"06a56e58aeeb70feab5171c6adb342076c52ec0e60cfdd51bdcffa919d1af457\"" Jun 25 18:27:43.493045 systemd[1]: Started cri-containerd-06a56e58aeeb70feab5171c6adb342076c52ec0e60cfdd51bdcffa919d1af457.scope - libcontainer container 06a56e58aeeb70feab5171c6adb342076c52ec0e60cfdd51bdcffa919d1af457. Jun 25 18:27:43.512337 containerd[1436]: time="2024-06-25T18:27:43.512291517Z" level=info msg="StartContainer for \"06a56e58aeeb70feab5171c6adb342076c52ec0e60cfdd51bdcffa919d1af457\" returns successfully" Jun 25 18:27:43.531512 systemd[1]: cri-containerd-06a56e58aeeb70feab5171c6adb342076c52ec0e60cfdd51bdcffa919d1af457.scope: Deactivated successfully. 
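Aside: the RunPodSandbox → CreateContainer → StartContainer sequence above is the standard CRI call order the kubelet drives when bringing up cilium-49t95 and its mount-cgroup init container. A hedged sketch of that call order against the CRI v1 gRPC API follows; the endpoint is the usual containerd default, the image reference and command are placeholders not taken from this log, and error handling is deliberately minimal.

```go
// Sketch of the CRI v1 call order seen above: RunPodSandbox, then
// CreateContainer inside that sandbox, then StartContainer.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-49t95",
			Uid:       "f110ba9e-bd46-4028-bdd4-35cbbd37d921",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "example.invalid/cilium:placeholder"}, // placeholder image
			Command:  []string{"/bin/sh", "-c", "true"},                                  // placeholder command
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```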
Jun 25 18:27:43.559877 containerd[1436]: time="2024-06-25T18:27:43.559693068Z" level=info msg="shim disconnected" id=06a56e58aeeb70feab5171c6adb342076c52ec0e60cfdd51bdcffa919d1af457 namespace=k8s.io Jun 25 18:27:43.559877 containerd[1436]: time="2024-06-25T18:27:43.559752669Z" level=warning msg="cleaning up after shim disconnected" id=06a56e58aeeb70feab5171c6adb342076c52ec0e60cfdd51bdcffa919d1af457 namespace=k8s.io Jun 25 18:27:43.559877 containerd[1436]: time="2024-06-25T18:27:43.559763109Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:27:44.512973 kubelet[2511]: E0625 18:27:44.512868 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:44.516500 containerd[1436]: time="2024-06-25T18:27:44.516313405Z" level=info msg="CreateContainer within sandbox \"602fadbd44d6c629ad373c2d564c6d7993ba0dae80f99053632f632e13242701\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 25 18:27:44.541977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount881386917.mount: Deactivated successfully. Jun 25 18:27:44.542333 containerd[1436]: time="2024-06-25T18:27:44.542103103Z" level=info msg="CreateContainer within sandbox \"602fadbd44d6c629ad373c2d564c6d7993ba0dae80f99053632f632e13242701\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e503e39031985f8c479b583a9eb90b3bce49af66fdd6182042ff813ca3fb6943\"" Jun 25 18:27:44.542897 containerd[1436]: time="2024-06-25T18:27:44.542854584Z" level=info msg="StartContainer for \"e503e39031985f8c479b583a9eb90b3bce49af66fdd6182042ff813ca3fb6943\"" Jun 25 18:27:44.571086 systemd[1]: Started cri-containerd-e503e39031985f8c479b583a9eb90b3bce49af66fdd6182042ff813ca3fb6943.scope - libcontainer container e503e39031985f8c479b583a9eb90b3bce49af66fdd6182042ff813ca3fb6943. Jun 25 18:27:44.603674 systemd[1]: cri-containerd-e503e39031985f8c479b583a9eb90b3bce49af66fdd6182042ff813ca3fb6943.scope: Deactivated successfully. Jun 25 18:27:44.612382 containerd[1436]: time="2024-06-25T18:27:44.612312499Z" level=info msg="StartContainer for \"e503e39031985f8c479b583a9eb90b3bce49af66fdd6182042ff813ca3fb6943\" returns successfully" Jun 25 18:27:44.636419 containerd[1436]: time="2024-06-25T18:27:44.636335993Z" level=info msg="shim disconnected" id=e503e39031985f8c479b583a9eb90b3bce49af66fdd6182042ff813ca3fb6943 namespace=k8s.io Jun 25 18:27:44.636419 containerd[1436]: time="2024-06-25T18:27:44.636403433Z" level=warning msg="cleaning up after shim disconnected" id=e503e39031985f8c479b583a9eb90b3bce49af66fdd6182042ff813ca3fb6943 namespace=k8s.io Jun 25 18:27:44.636419 containerd[1436]: time="2024-06-25T18:27:44.636416513Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:27:45.267551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e503e39031985f8c479b583a9eb90b3bce49af66fdd6182042ff813ca3fb6943-rootfs.mount: Deactivated successfully. 
Jun 25 18:27:45.516301 kubelet[2511]: E0625 18:27:45.516256 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:45.519040 containerd[1436]: time="2024-06-25T18:27:45.518812595Z" level=info msg="CreateContainer within sandbox \"602fadbd44d6c629ad373c2d564c6d7993ba0dae80f99053632f632e13242701\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 25 18:27:45.547443 containerd[1436]: time="2024-06-25T18:27:45.547386718Z" level=info msg="CreateContainer within sandbox \"602fadbd44d6c629ad373c2d564c6d7993ba0dae80f99053632f632e13242701\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a221b87e20bff4896b1dc4e2c90861ba9ddfe92b7316c455207e6258e8614c82\"" Jun 25 18:27:45.547950 containerd[1436]: time="2024-06-25T18:27:45.547924240Z" level=info msg="StartContainer for \"a221b87e20bff4896b1dc4e2c90861ba9ddfe92b7316c455207e6258e8614c82\"" Jun 25 18:27:45.579061 systemd[1]: Started cri-containerd-a221b87e20bff4896b1dc4e2c90861ba9ddfe92b7316c455207e6258e8614c82.scope - libcontainer container a221b87e20bff4896b1dc4e2c90861ba9ddfe92b7316c455207e6258e8614c82. Jun 25 18:27:45.600845 containerd[1436]: time="2024-06-25T18:27:45.600776634Z" level=info msg="StartContainer for \"a221b87e20bff4896b1dc4e2c90861ba9ddfe92b7316c455207e6258e8614c82\" returns successfully" Jun 25 18:27:45.605403 systemd[1]: cri-containerd-a221b87e20bff4896b1dc4e2c90861ba9ddfe92b7316c455207e6258e8614c82.scope: Deactivated successfully. Jun 25 18:27:45.626916 containerd[1436]: time="2024-06-25T18:27:45.626627229Z" level=info msg="shim disconnected" id=a221b87e20bff4896b1dc4e2c90861ba9ddfe92b7316c455207e6258e8614c82 namespace=k8s.io Jun 25 18:27:45.626916 containerd[1436]: time="2024-06-25T18:27:45.626685429Z" level=warning msg="cleaning up after shim disconnected" id=a221b87e20bff4896b1dc4e2c90861ba9ddfe92b7316c455207e6258e8614c82 namespace=k8s.io Jun 25 18:27:45.626916 containerd[1436]: time="2024-06-25T18:27:45.626693669Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:27:45.635890 containerd[1436]: time="2024-06-25T18:27:45.635832936Z" level=warning msg="cleanup warnings time=\"2024-06-25T18:27:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 25 18:27:46.267765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a221b87e20bff4896b1dc4e2c90861ba9ddfe92b7316c455207e6258e8614c82-rootfs.mount: Deactivated successfully. 
Jun 25 18:27:46.320578 kubelet[2511]: E0625 18:27:46.320521 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:46.519958 kubelet[2511]: E0625 18:27:46.519854 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:46.522418 containerd[1436]: time="2024-06-25T18:27:46.522363629Z" level=info msg="CreateContainer within sandbox \"602fadbd44d6c629ad373c2d564c6d7993ba0dae80f99053632f632e13242701\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 25 18:27:46.534402 containerd[1436]: time="2024-06-25T18:27:46.534356112Z" level=info msg="CreateContainer within sandbox \"602fadbd44d6c629ad373c2d564c6d7993ba0dae80f99053632f632e13242701\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cf7cabe8996c17fb3df2492d7c82f273784e55ef08f903e8e7845b5fc2064f24\"" Jun 25 18:27:46.535034 containerd[1436]: time="2024-06-25T18:27:46.535003035Z" level=info msg="StartContainer for \"cf7cabe8996c17fb3df2492d7c82f273784e55ef08f903e8e7845b5fc2064f24\"" Jun 25 18:27:46.572127 systemd[1]: Started cri-containerd-cf7cabe8996c17fb3df2492d7c82f273784e55ef08f903e8e7845b5fc2064f24.scope - libcontainer container cf7cabe8996c17fb3df2492d7c82f273784e55ef08f903e8e7845b5fc2064f24. Jun 25 18:27:46.590139 systemd[1]: cri-containerd-cf7cabe8996c17fb3df2492d7c82f273784e55ef08f903e8e7845b5fc2064f24.scope: Deactivated successfully. Jun 25 18:27:46.592375 containerd[1436]: time="2024-06-25T18:27:46.592133159Z" level=info msg="StartContainer for \"cf7cabe8996c17fb3df2492d7c82f273784e55ef08f903e8e7845b5fc2064f24\" returns successfully" Jun 25 18:27:46.615211 containerd[1436]: time="2024-06-25T18:27:46.615142442Z" level=info msg="shim disconnected" id=cf7cabe8996c17fb3df2492d7c82f273784e55ef08f903e8e7845b5fc2064f24 namespace=k8s.io Jun 25 18:27:46.615211 containerd[1436]: time="2024-06-25T18:27:46.615203642Z" level=warning msg="cleaning up after shim disconnected" id=cf7cabe8996c17fb3df2492d7c82f273784e55ef08f903e8e7845b5fc2064f24 namespace=k8s.io Jun 25 18:27:46.615211 containerd[1436]: time="2024-06-25T18:27:46.615212882Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:27:47.267739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf7cabe8996c17fb3df2492d7c82f273784e55ef08f903e8e7845b5fc2064f24-rootfs.mount: Deactivated successfully. 
Jun 25 18:27:47.384773 kubelet[2511]: E0625 18:27:47.384709 2511 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 25 18:27:47.524491 kubelet[2511]: E0625 18:27:47.523398 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:47.527666 containerd[1436]: time="2024-06-25T18:27:47.527437330Z" level=info msg="CreateContainer within sandbox \"602fadbd44d6c629ad373c2d564c6d7993ba0dae80f99053632f632e13242701\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 25 18:27:47.543000 containerd[1436]: time="2024-06-25T18:27:47.542950676Z" level=info msg="CreateContainer within sandbox \"602fadbd44d6c629ad373c2d564c6d7993ba0dae80f99053632f632e13242701\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0bea91e6fe5ae1635f913ee01117ed17c3496e9b80580347f5b23b27fd0ed02c\"" Jun 25 18:27:47.543413 containerd[1436]: time="2024-06-25T18:27:47.543383838Z" level=info msg="StartContainer for \"0bea91e6fe5ae1635f913ee01117ed17c3496e9b80580347f5b23b27fd0ed02c\"" Jun 25 18:27:47.568058 systemd[1]: Started cri-containerd-0bea91e6fe5ae1635f913ee01117ed17c3496e9b80580347f5b23b27fd0ed02c.scope - libcontainer container 0bea91e6fe5ae1635f913ee01117ed17c3496e9b80580347f5b23b27fd0ed02c. Jun 25 18:27:47.591320 containerd[1436]: time="2024-06-25T18:27:47.591267160Z" level=info msg="StartContainer for \"0bea91e6fe5ae1635f913ee01117ed17c3496e9b80580347f5b23b27fd0ed02c\" returns successfully" Jun 25 18:27:47.857915 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jun 25 18:27:48.528824 kubelet[2511]: E0625 18:27:48.528465 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:48.542031 kubelet[2511]: I0625 18:27:48.541988 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-49t95" podStartSLOduration=5.541954038 podStartE2EDuration="5.541954038s" podCreationTimestamp="2024-06-25 18:27:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:27:48.541944318 +0000 UTC m=+81.311025363" watchObservedRunningTime="2024-06-25 18:27:48.541954038 +0000 UTC m=+81.311035083" Jun 25 18:27:49.341809 kubelet[2511]: I0625 18:27:49.341764 2511 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-25T18:27:49Z","lastTransitionTime":"2024-06-25T18:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 25 18:27:49.528956 systemd[1]: run-containerd-runc-k8s.io-0bea91e6fe5ae1635f913ee01117ed17c3496e9b80580347f5b23b27fd0ed02c-runc.VUix3X.mount: Deactivated successfully. 
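Aside: the kubelet keeps reporting "Container runtime network not ready ... cni plugin not initialized" because /etc/cni/net.d lost its only config when 05-cilium.conf was removed at 18:27:39; the condition clears once the new cilium-agent writes a config back and lxc_health comes up just below. A stdlib-only sketch of the kind of directory check behind that message follows; the accepted extensions are the conventional ones, and this is not the libcni/containerd implementation.

```go
// Sketch: decide whether any CNI network config is present, the condition
// behind "no network config found in /etc/cni/net.d" earlier in this log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cniConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var found []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // conventional CNI config extensions
			found = append(found, filepath.Join(dir, e.Name()))
		}
	}
	return found, nil
}

func main() {
	found, err := cniConfigs("/etc/cni/net.d")
	if err != nil || len(found) == 0 {
		fmt.Println("no network config found in /etc/cni/net.d")
		return
	}
	for _, f := range found {
		fmt.Println("network config:", f)
	}
}
```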
Jun 25 18:27:49.536162 kubelet[2511]: E0625 18:27:49.531006 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:50.532819 kubelet[2511]: E0625 18:27:50.532751 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:50.612081 systemd-networkd[1376]: lxc_health: Link UP Jun 25 18:27:50.620160 systemd-networkd[1376]: lxc_health: Gained carrier Jun 25 18:27:51.537577 kubelet[2511]: E0625 18:27:51.537543 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:51.707043 systemd-networkd[1376]: lxc_health: Gained IPv6LL Jun 25 18:27:52.538939 kubelet[2511]: E0625 18:27:52.538851 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:27:55.913565 sshd[4316]: pam_unix(sshd:session): session closed for user core Jun 25 18:27:55.916792 systemd[1]: sshd@24-10.0.0.53:22-10.0.0.1:53076.service: Deactivated successfully. Jun 25 18:27:55.918553 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 18:27:55.919223 systemd-logind[1415]: Session 25 logged out. Waiting for processes to exit. Jun 25 18:27:55.920200 systemd-logind[1415]: Removed session 25.