Jul 2 09:02:10.910528 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 2 09:02:10.910550 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024
Jul 2 09:02:10.910559 kernel: KASLR enabled
Jul 2 09:02:10.910565 kernel: efi: EFI v2.7 by EDK II
Jul 2 09:02:10.910570 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb8fd018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 2 09:02:10.910576 kernel: random: crng init done
Jul 2 09:02:10.910583 kernel: ACPI: Early table checksum verification disabled
Jul 2 09:02:10.910589 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 2 09:02:10.910596 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 2 09:02:10.910603 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:02:10.910610 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:02:10.910616 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:02:10.910622 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:02:10.910628 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:02:10.910635 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:02:10.910643 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:02:10.910650 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:02:10.910656 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:02:10.910662 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 2 09:02:10.910668 kernel: NUMA: Failed to initialise from firmware
Jul 2 09:02:10.910675 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 09:02:10.910681 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jul 2 09:02:10.910687 kernel: Zone ranges:
Jul 2 09:02:10.910694 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 09:02:10.910700 kernel: DMA32 empty
Jul 2 09:02:10.910709 kernel: Normal empty
Jul 2 09:02:10.910715 kernel: Movable zone start for each node
Jul 2 09:02:10.910721 kernel: Early memory node ranges
Jul 2 09:02:10.910727 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 2 09:02:10.910734 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 2 09:02:10.910740 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 2 09:02:10.910746 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 2 09:02:10.910752 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 2 09:02:10.910758 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 2 09:02:10.910765 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 2 09:02:10.910771 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 09:02:10.910777 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 2 09:02:10.910785 kernel: psci: probing for conduit method from ACPI.
Jul 2 09:02:10.910791 kernel: psci: PSCIv1.1 detected in firmware.
Jul 2 09:02:10.910798 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 09:02:10.910806 kernel: psci: Trusted OS migration not required
Jul 2 09:02:10.910813 kernel: psci: SMC Calling Convention v1.1
Jul 2 09:02:10.910820 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 2 09:02:10.910828 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Jul 2 09:02:10.910835 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Jul 2 09:02:10.910850 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 2 09:02:10.910857 kernel: Detected PIPT I-cache on CPU0
Jul 2 09:02:10.910864 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 09:02:10.910870 kernel: CPU features: detected: Hardware dirty bit management
Jul 2 09:02:10.910877 kernel: CPU features: detected: Spectre-v4
Jul 2 09:02:10.910883 kernel: CPU features: detected: Spectre-BHB
Jul 2 09:02:10.910890 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 2 09:02:10.910897 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 2 09:02:10.910906 kernel: CPU features: detected: ARM erratum 1418040
Jul 2 09:02:10.910912 kernel: alternatives: applying boot alternatives
Jul 2 09:02:10.910920 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=339cf548fbb7b0074109371a653774e9fabae27ff3a90e4c67dbbb2f78376930
Jul 2 09:02:10.910927 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 09:02:10.910933 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 09:02:10.910940 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 09:02:10.910946 kernel: Fallback order for Node 0: 0
Jul 2 09:02:10.910953 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 2 09:02:10.910960 kernel: Policy zone: DMA
Jul 2 09:02:10.910966 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 09:02:10.910973 kernel: software IO TLB: area num 4.
Jul 2 09:02:10.910981 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 2 09:02:10.910988 kernel: Memory: 2386852K/2572288K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 185436K reserved, 0K cma-reserved)
Jul 2 09:02:10.910998 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 2 09:02:10.911005 kernel: trace event string verifier disabled
Jul 2 09:02:10.911012 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 09:02:10.911019 kernel: rcu: RCU event tracing is enabled.
Jul 2 09:02:10.911026 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 2 09:02:10.911033 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 09:02:10.911040 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 09:02:10.911047 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 09:02:10.911053 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 2 09:02:10.911060 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 09:02:10.911069 kernel: GICv3: 256 SPIs implemented
Jul 2 09:02:10.911076 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 09:02:10.911082 kernel: Root IRQ handler: gic_handle_irq
Jul 2 09:02:10.911089 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 2 09:02:10.911096 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 2 09:02:10.911102 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 2 09:02:10.911109 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Jul 2 09:02:10.911116 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Jul 2 09:02:10.911123 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 2 09:02:10.911130 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 2 09:02:10.911137 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 09:02:10.911145 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 09:02:10.911151 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 2 09:02:10.911158 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 2 09:02:10.911165 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 2 09:02:10.911171 kernel: arm-pv: using stolen time PV
Jul 2 09:02:10.911178 kernel: Console: colour dummy device 80x25
Jul 2 09:02:10.911185 kernel: ACPI: Core revision 20230628
Jul 2 09:02:10.911192 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 2 09:02:10.911199 kernel: pid_max: default: 32768 minimum: 301
Jul 2 09:02:10.911206 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 09:02:10.911214 kernel: SELinux: Initializing.
Jul 2 09:02:10.911221 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 09:02:10.911228 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 09:02:10.911235 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 09:02:10.911242 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 09:02:10.911249 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 09:02:10.911256 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 09:02:10.911263 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 2 09:02:10.911270 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 2 09:02:10.911293 kernel: Remapping and enabling EFI services.
Jul 2 09:02:10.911301 kernel: smp: Bringing up secondary CPUs ...
Jul 2 09:02:10.911308 kernel: Detected PIPT I-cache on CPU1
Jul 2 09:02:10.911315 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 2 09:02:10.911322 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 2 09:02:10.911329 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 09:02:10.911336 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 2 09:02:10.911343 kernel: Detected PIPT I-cache on CPU2
Jul 2 09:02:10.911349 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 2 09:02:10.911357 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 2 09:02:10.911366 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 09:02:10.911373 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 2 09:02:10.911385 kernel: Detected PIPT I-cache on CPU3
Jul 2 09:02:10.911394 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 2 09:02:10.911402 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 2 09:02:10.911409 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 09:02:10.911416 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 2 09:02:10.911423 kernel: smp: Brought up 1 node, 4 CPUs
Jul 2 09:02:10.911430 kernel: SMP: Total of 4 processors activated.
Jul 2 09:02:10.911439 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 09:02:10.911446 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 2 09:02:10.911454 kernel: CPU features: detected: Common not Private translations
Jul 2 09:02:10.911461 kernel: CPU features: detected: CRC32 instructions
Jul 2 09:02:10.911468 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 2 09:02:10.911475 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 2 09:02:10.911482 kernel: CPU features: detected: LSE atomic instructions
Jul 2 09:02:10.911489 kernel: CPU features: detected: Privileged Access Never
Jul 2 09:02:10.911498 kernel: CPU features: detected: RAS Extension Support
Jul 2 09:02:10.911505 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 2 09:02:10.911512 kernel: CPU: All CPU(s) started at EL1
Jul 2 09:02:10.911520 kernel: alternatives: applying system-wide alternatives
Jul 2 09:02:10.911527 kernel: devtmpfs: initialized
Jul 2 09:02:10.911534 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 09:02:10.911541 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 2 09:02:10.911548 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 09:02:10.911555 kernel: SMBIOS 3.0.0 present.
Jul 2 09:02:10.911564 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 2 09:02:10.911571 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 09:02:10.911579 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 09:02:10.911586 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 09:02:10.911594 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 09:02:10.911601 kernel: audit: initializing netlink subsys (disabled)
Jul 2 09:02:10.911608 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jul 2 09:02:10.911616 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 09:02:10.911623 kernel: cpuidle: using governor menu
Jul 2 09:02:10.911632 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 09:02:10.911639 kernel: ASID allocator initialised with 32768 entries
Jul 2 09:02:10.911646 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 09:02:10.911654 kernel: Serial: AMBA PL011 UART driver
Jul 2 09:02:10.911661 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 2 09:02:10.911668 kernel: Modules: 0 pages in range for non-PLT usage
Jul 2 09:02:10.911675 kernel: Modules: 509120 pages in range for PLT usage
Jul 2 09:02:10.911682 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 09:02:10.911689 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 09:02:10.911698 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 09:02:10.911705 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 2 09:02:10.911712 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 09:02:10.911720 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 09:02:10.911727 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 09:02:10.911734 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 2 09:02:10.911742 kernel: ACPI: Added _OSI(Module Device)
Jul 2 09:02:10.911749 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 09:02:10.911757 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 09:02:10.911765 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 09:02:10.911772 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 09:02:10.911780 kernel: ACPI: Interpreter enabled
Jul 2 09:02:10.911787 kernel: ACPI: Using GIC for interrupt routing
Jul 2 09:02:10.911794 kernel: ACPI: MCFG table detected, 1 entries
Jul 2 09:02:10.911801 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 2 09:02:10.911809 kernel: printk: console [ttyAMA0] enabled
Jul 2 09:02:10.911816 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 09:02:10.911957 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 09:02:10.912035 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 2 09:02:10.912100 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 2 09:02:10.912165 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 2 09:02:10.912227 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 2 09:02:10.912237 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 2 09:02:10.912244 kernel: PCI host bridge to bus 0000:00
Jul 2 09:02:10.912405 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 2 09:02:10.912473 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 2 09:02:10.912531 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 2 09:02:10.912589 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 09:02:10.912667 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 2 09:02:10.912741 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 09:02:10.912806 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 2 09:02:10.912884 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 2 09:02:10.912950 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 09:02:10.913015 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 09:02:10.913079 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 2 09:02:10.913145 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 2 09:02:10.913205 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 2 09:02:10.913262 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 2 09:02:10.913332 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 2 09:02:10.913343 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 2 09:02:10.913350 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 2 09:02:10.913358 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 2 09:02:10.913365 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 2 09:02:10.913372 kernel: iommu: Default domain type: Translated
Jul 2 09:02:10.913379 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 09:02:10.913387 kernel: efivars: Registered efivars operations
Jul 2 09:02:10.913394 kernel: vgaarb: loaded
Jul 2 09:02:10.913404 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 09:02:10.913411 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 09:02:10.913419 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 09:02:10.913426 kernel: pnp: PnP ACPI init
Jul 2 09:02:10.913505 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 2 09:02:10.913515 kernel: pnp: PnP ACPI: found 1 devices
Jul 2 09:02:10.913522 kernel: NET: Registered PF_INET protocol family
Jul 2 09:02:10.913530 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 09:02:10.913539 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 09:02:10.913547 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 09:02:10.913554 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 09:02:10.913561 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 2 09:02:10.913569 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 09:02:10.913576 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 09:02:10.913583 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 09:02:10.913591 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 09:02:10.913598 kernel: PCI: CLS 0 bytes, default 64
Jul 2 09:02:10.913607 kernel: kvm [1]: HYP mode not available
Jul 2 09:02:10.913614 kernel: Initialise system trusted keyrings
Jul 2 09:02:10.913621 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 09:02:10.913629 kernel: Key type asymmetric registered
Jul 2 09:02:10.913636 kernel: Asymmetric key parser 'x509' registered
Jul 2 09:02:10.913643 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 2 09:02:10.913650 kernel: io scheduler mq-deadline registered
Jul 2 09:02:10.913657 kernel: io scheduler kyber registered
Jul 2 09:02:10.913664 kernel: io scheduler bfq registered
Jul 2 09:02:10.913673 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 2 09:02:10.913681 kernel: ACPI: button: Power Button [PWRB]
Jul 2 09:02:10.913688 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 2 09:02:10.913755 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 2 09:02:10.913765 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 09:02:10.913772 kernel: thunder_xcv, ver 1.0
Jul 2 09:02:10.913779 kernel: thunder_bgx, ver 1.0
Jul 2 09:02:10.913786 kernel: nicpf, ver 1.0
Jul 2 09:02:10.913793 kernel: nicvf, ver 1.0
Jul 2 09:02:10.913875 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 09:02:10.913939 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T09:02:10 UTC (1719910930)
Jul 2 09:02:10.913948 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 09:02:10.913956 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 2 09:02:10.913963 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 2 09:02:10.913970 kernel: watchdog: Hard watchdog permanently disabled
Jul 2 09:02:10.913977 kernel: NET: Registered PF_INET6 protocol family
Jul 2 09:02:10.913984 kernel: Segment Routing with IPv6
Jul 2 09:02:10.913994 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 09:02:10.914001 kernel: NET: Registered PF_PACKET protocol family
Jul 2 09:02:10.914008 kernel: Key type dns_resolver registered
Jul 2 09:02:10.914015 kernel: registered taskstats version 1
Jul 2 09:02:10.914022 kernel: Loading compiled-in X.509 certificates
Jul 2 09:02:10.914030 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf'
Jul 2 09:02:10.914037 kernel: Key type .fscrypt registered
Jul 2 09:02:10.914044 kernel: Key type fscrypt-provisioning registered
Jul 2 09:02:10.914051 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 09:02:10.914060 kernel: ima: Allocated hash algorithm: sha1
Jul 2 09:02:10.914067 kernel: ima: No architecture policies found
Jul 2 09:02:10.914075 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 09:02:10.914082 kernel: clk: Disabling unused clocks
Jul 2 09:02:10.914089 kernel: Freeing unused kernel memory: 39040K
Jul 2 09:02:10.914096 kernel: Run /init as init process
Jul 2 09:02:10.914103 kernel: with arguments:
Jul 2 09:02:10.914110 kernel: /init
Jul 2 09:02:10.914117 kernel: with environment:
Jul 2 09:02:10.914126 kernel: HOME=/
Jul 2 09:02:10.914133 kernel: TERM=linux
Jul 2 09:02:10.914140 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 09:02:10.914149 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 09:02:10.914159 systemd[1]: Detected virtualization kvm.
Jul 2 09:02:10.914166 systemd[1]: Detected architecture arm64.
Jul 2 09:02:10.914174 systemd[1]: Running in initrd.
Jul 2 09:02:10.914183 systemd[1]: No hostname configured, using default hostname.
Jul 2 09:02:10.914190 systemd[1]: Hostname set to .
Jul 2 09:02:10.914198 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 09:02:10.914206 systemd[1]: Queued start job for default target initrd.target.
Jul 2 09:02:10.914213 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 09:02:10.914221 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 09:02:10.914229 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 09:02:10.914237 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 09:02:10.914247 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 09:02:10.914255 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 09:02:10.914264 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 09:02:10.914272 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 09:02:10.914297 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 09:02:10.914306 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 09:02:10.914314 systemd[1]: Reached target paths.target - Path Units.
Jul 2 09:02:10.914325 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 09:02:10.914333 systemd[1]: Reached target swap.target - Swaps.
Jul 2 09:02:10.914341 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 09:02:10.914349 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 09:02:10.914357 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 09:02:10.914364 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 09:02:10.914372 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 09:02:10.914380 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 09:02:10.914388 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 09:02:10.914397 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 09:02:10.914405 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 09:02:10.914413 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 09:02:10.914421 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 09:02:10.914428 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 09:02:10.914436 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 09:02:10.914444 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 09:02:10.914451 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 09:02:10.914460 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 09:02:10.914468 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 09:02:10.914476 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 09:02:10.914484 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 09:02:10.914492 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 09:02:10.914502 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:02:10.914510 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 09:02:10.914535 systemd-journald[238]: Collecting audit messages is disabled.
Jul 2 09:02:10.914554 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 09:02:10.914564 systemd-journald[238]: Journal started
Jul 2 09:02:10.914583 systemd-journald[238]: Runtime Journal (/run/log/journal/ca6db50074784d519c912d3bc9e72443) is 5.9M, max 47.3M, 41.4M free.
Jul 2 09:02:10.905810 systemd-modules-load[239]: Inserted module 'overlay'
Jul 2 09:02:10.917607 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 09:02:10.921315 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 09:02:10.922344 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jul 2 09:02:10.923306 kernel: Bridge firewalling registered
Jul 2 09:02:10.928472 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 09:02:10.930128 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 09:02:10.931995 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 09:02:10.933307 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 09:02:10.936723 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 09:02:10.939457 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 09:02:10.941190 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 09:02:10.942337 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 09:02:10.951443 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 09:02:10.953609 dracut-cmdline[269]: dracut-dracut-053
Jul 2 09:02:10.959040 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=339cf548fbb7b0074109371a653774e9fabae27ff3a90e4c67dbbb2f78376930
Jul 2 09:02:10.958449 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 09:02:10.983139 systemd-resolved[283]: Positive Trust Anchors:
Jul 2 09:02:10.983155 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 09:02:10.983185 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 09:02:10.987692 systemd-resolved[283]: Defaulting to hostname 'linux'.
Jul 2 09:02:10.988612 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 09:02:10.991300 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 09:02:11.030307 kernel: SCSI subsystem initialized
Jul 2 09:02:11.036297 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 09:02:11.044337 kernel: iscsi: registered transport (tcp)
Jul 2 09:02:11.056438 kernel: iscsi: registered transport (qla4xxx)
Jul 2 09:02:11.056481 kernel: QLogic iSCSI HBA Driver
Jul 2 09:02:11.097701 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 09:02:11.107435 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 09:02:11.125637 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 09:02:11.125691 kernel: device-mapper: uevent: version 1.0.3
Jul 2 09:02:11.125705 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 09:02:11.171306 kernel: raid6: neonx8 gen() 15744 MB/s
Jul 2 09:02:11.188294 kernel: raid6: neonx4 gen() 15597 MB/s
Jul 2 09:02:11.205302 kernel: raid6: neonx2 gen() 13190 MB/s
Jul 2 09:02:11.222295 kernel: raid6: neonx1 gen() 10448 MB/s
Jul 2 09:02:11.239320 kernel: raid6: int64x8 gen() 6956 MB/s
Jul 2 09:02:11.256301 kernel: raid6: int64x4 gen() 7305 MB/s
Jul 2 09:02:11.273309 kernel: raid6: int64x2 gen() 6130 MB/s
Jul 2 09:02:11.290294 kernel: raid6: int64x1 gen() 5056 MB/s
Jul 2 09:02:11.290310 kernel: raid6: using algorithm neonx8 gen() 15744 MB/s
Jul 2 09:02:11.307312 kernel: raid6: .... xor() 11948 MB/s, rmw enabled
Jul 2 09:02:11.307333 kernel: raid6: using neon recovery algorithm
Jul 2 09:02:11.312499 kernel: xor: measuring software checksum speed
Jul 2 09:02:11.312522 kernel: 8regs : 19854 MB/sec
Jul 2 09:02:11.313370 kernel: 32regs : 19716 MB/sec
Jul 2 09:02:11.314528 kernel: arm64_neon : 27125 MB/sec
Jul 2 09:02:11.314541 kernel: xor: using function: arm64_neon (27125 MB/sec)
Jul 2 09:02:11.367302 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 09:02:11.378343 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 09:02:11.390427 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 09:02:11.402378 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Jul 2 09:02:11.405515 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 09:02:11.423528 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 09:02:11.435410 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Jul 2 09:02:11.460949 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 09:02:11.473500 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 09:02:11.511733 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 09:02:11.522695 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 09:02:11.534490 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 09:02:11.535764 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 09:02:11.538943 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 09:02:11.541103 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 09:02:11.549424 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 09:02:11.560154 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 2 09:02:11.570371 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 2 09:02:11.570487 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 09:02:11.570498 kernel: GPT:9289727 != 19775487
Jul 2 09:02:11.570508 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 09:02:11.570517 kernel: GPT:9289727 != 19775487
Jul 2 09:02:11.570525 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 09:02:11.570535 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 09:02:11.562498 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 09:02:11.569636 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 09:02:11.569735 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 09:02:11.572406 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 09:02:11.573172 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 09:02:11.573306 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:02:11.575182 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 09:02:11.589516 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 09:02:11.594313 kernel: BTRFS: device fsid ad4b0605-c88d-4cc1-aa96-32e9393058b1 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (522)
Jul 2 09:02:11.596297 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (519)
Jul 2 09:02:11.602870 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 2 09:02:11.606682 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:02:11.614351 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 2 09:02:11.617924 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 2 09:02:11.618798 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 2 09:02:11.623684 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 09:02:11.638435 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 09:02:11.640384 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 09:02:11.644765 disk-uuid[551]: Primary Header is updated.
Jul 2 09:02:11.644765 disk-uuid[551]: Secondary Entries is updated.
Jul 2 09:02:11.644765 disk-uuid[551]: Secondary Header is updated.
Jul 2 09:02:11.651309 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 09:02:11.662500 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 09:02:12.660302 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 09:02:12.661095 disk-uuid[553]: The operation has completed successfully.
Jul 2 09:02:12.689416 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 09:02:12.689515 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 09:02:12.705447 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 09:02:12.708558 sh[574]: Success
Jul 2 09:02:12.727254 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 2 09:02:12.762275 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 09:02:12.777673 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 09:02:12.779714 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 09:02:12.798417 kernel: BTRFS info (device dm-0): first mount of filesystem ad4b0605-c88d-4cc1-aa96-32e9393058b1
Jul 2 09:02:12.798483 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:02:12.798504 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 09:02:12.799683 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 09:02:12.799700 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 09:02:12.804994 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 09:02:12.806215 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 09:02:12.813463 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 09:02:12.817133 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 09:02:12.828633 kernel: BTRFS info (device vda6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:02:12.828676 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:02:12.828687 kernel: BTRFS info (device vda6): using free space tree
Jul 2 09:02:12.831386 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 09:02:12.839890 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 09:02:12.841421 kernel: BTRFS info (device vda6): last unmount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:02:12.848228 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 09:02:12.853438 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 09:02:12.926565 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 09:02:12.935438 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 09:02:12.947202 ignition[675]: Ignition 2.18.0
Jul 2 09:02:12.947211 ignition[675]: Stage: fetch-offline
Jul 2 09:02:12.947245 ignition[675]: no configs at "/usr/lib/ignition/base.d"
Jul 2 09:02:12.947253 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:02:12.947357 ignition[675]: parsed url from cmdline: ""
Jul 2 09:02:12.947360 ignition[675]: no config URL provided
Jul 2 09:02:12.947365 ignition[675]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 09:02:12.947372 ignition[675]: no config at "/usr/lib/ignition/user.ign"
Jul 2 09:02:12.947398 ignition[675]: op(1): [started] loading QEMU firmware config module
Jul 2 09:02:12.947403 ignition[675]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 2 09:02:12.960763 systemd-networkd[768]: lo: Link UP
Jul 2 09:02:12.960776 systemd-networkd[768]: lo: Gained carrier
Jul 2 09:02:12.961489 systemd-networkd[768]: Enumeration completed
Jul 2 09:02:12.962848 ignition[675]: op(1): [finished] loading QEMU firmware config module
Jul 2 09:02:12.961643 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 09:02:12.961927 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 09:02:12.961931 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 09:02:12.962713 systemd-networkd[768]: eth0: Link UP
Jul 2 09:02:12.962716 systemd-networkd[768]: eth0: Gained carrier
Jul 2 09:02:12.962723 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 09:02:12.962999 systemd[1]: Reached target network.target - Network.
Jul 2 09:02:12.981325 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.47/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 09:02:13.009884 ignition[675]: parsing config with SHA512: 605c470368ab09b20e03a946be3a63e9c418f4dd3613572d1368fbc9c30f5915952ec93bf0b164dda2f809e146c77a2819e531088e121a523c68aa95aedfb28a
Jul 2 09:02:13.015527 unknown[675]: fetched base config from "system"
Jul 2 09:02:13.015537 unknown[675]: fetched user config from "qemu"
Jul 2 09:02:13.015971 ignition[675]: fetch-offline: fetch-offline passed
Jul 2 09:02:13.017526 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 09:02:13.016028 ignition[675]: Ignition finished successfully
Jul 2 09:02:13.019023 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 2 09:02:13.030541 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 09:02:13.042051 ignition[774]: Ignition 2.18.0
Jul 2 09:02:13.042073 ignition[774]: Stage: kargs
Jul 2 09:02:13.042220 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Jul 2 09:02:13.042229 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:02:13.043079 ignition[774]: kargs: kargs passed
Jul 2 09:02:13.043123 ignition[774]: Ignition finished successfully
Jul 2 09:02:13.046272 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 09:02:13.051437 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 09:02:13.062779 ignition[782]: Ignition 2.18.0
Jul 2 09:02:13.062794 ignition[782]: Stage: disks
Jul 2 09:02:13.062970 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Jul 2 09:02:13.062979 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:02:13.063847 ignition[782]: disks: disks passed
Jul 2 09:02:13.063891 ignition[782]: Ignition finished successfully
Jul 2 09:02:13.068337 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 09:02:13.069644 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 09:02:13.071104 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 09:02:13.073069 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 09:02:13.076002 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 09:02:13.077668 systemd[1]: Reached target basic.target - Basic System.
Jul 2 09:02:13.092433 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 09:02:13.104771 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 09:02:13.109879 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 09:02:13.119970 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 09:02:13.170047 kernel: EXT4-fs (vda9): mounted filesystem c1692a6b-74d8-4bda-be0c-9d706985f1ed r/w with ordered data mode. Quota mode: none.
Jul 2 09:02:13.169751 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 09:02:13.171294 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 09:02:13.183383 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 09:02:13.185565 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 09:02:13.186366 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 2 09:02:13.186403 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 09:02:13.186425 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 09:02:13.192188 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 09:02:13.194392 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 09:02:13.198389 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (802)
Jul 2 09:02:13.198419 kernel: BTRFS info (device vda6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:02:13.199808 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:02:13.199845 kernel: BTRFS info (device vda6): using free space tree
Jul 2 09:02:13.204431 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 09:02:13.206002 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 09:02:13.244942 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 09:02:13.249996 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Jul 2 09:02:13.254046 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 09:02:13.258211 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 09:02:13.332149 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 09:02:13.342954 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 09:02:13.346368 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 09:02:13.351313 kernel: BTRFS info (device vda6): last unmount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:02:13.372565 ignition[915]: INFO : Ignition 2.18.0
Jul 2 09:02:13.373514 ignition[915]: INFO : Stage: mount
Jul 2 09:02:13.374003 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 09:02:13.374801 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:02:13.375023 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 09:02:13.376835 ignition[915]: INFO : mount: mount passed
Jul 2 09:02:13.376835 ignition[915]: INFO : Ignition finished successfully
Jul 2 09:02:13.377739 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 09:02:13.387435 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 09:02:13.797362 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 09:02:13.811446 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 09:02:13.817294 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (931)
Jul 2 09:02:13.817323 kernel: BTRFS info (device vda6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:02:13.818691 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:02:13.818705 kernel: BTRFS info (device vda6): using free space tree
Jul 2 09:02:13.821297 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 09:02:13.822240 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 09:02:13.847703 ignition[948]: INFO : Ignition 2.18.0
Jul 2 09:02:13.847703 ignition[948]: INFO : Stage: files
Jul 2 09:02:13.848879 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 09:02:13.848879 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:02:13.848879 ignition[948]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 09:02:13.851589 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 09:02:13.851589 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 09:02:13.851589 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 09:02:13.854472 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 09:02:13.854472 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 09:02:13.854472 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 09:02:13.854472 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 2 09:02:13.852047 unknown[948]: wrote ssh authorized keys file for user: core
Jul 2 09:02:13.930674 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 09:02:13.983287 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 09:02:13.984783 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 09:02:13.984783 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 2 09:02:14.311420 systemd-networkd[768]: eth0: Gained IPv6LL
Jul 2 09:02:14.348764 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 09:02:14.401877 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 09:02:14.404491 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 09:02:14.404491 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 09:02:14.404491 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 09:02:14.404491 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 09:02:14.404491 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 09:02:14.404491 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 09:02:14.404491 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 09:02:14.404491 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 09:02:14.404491 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 09:02:14.404491 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 09:02:14.404491 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 09:02:14.417863 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 09:02:14.417863 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 09:02:14.417863 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jul 2 09:02:14.641711 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 2 09:02:14.809041 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 09:02:14.809041 ignition[948]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 2 09:02:14.811620 ignition[948]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 09:02:14.811620 ignition[948]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 09:02:14.811620 ignition[948]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 2 09:02:14.811620 ignition[948]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 2 09:02:14.811620 ignition[948]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 09:02:14.811620 ignition[948]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 09:02:14.811620 ignition[948]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 2 09:02:14.811620 ignition[948]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 2 09:02:14.832160 ignition[948]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 09:02:14.835711 ignition[948]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 09:02:14.835711 ignition[948]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 2 09:02:14.835711 ignition[948]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 09:02:14.835711 ignition[948]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 09:02:14.842722 ignition[948]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 09:02:14.842722 ignition[948]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 09:02:14.842722 ignition[948]: INFO : files: files passed
Jul 2 09:02:14.842722 ignition[948]: INFO : Ignition finished successfully
Jul 2 09:02:14.839195 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 09:02:14.848477 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 09:02:14.850028 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 09:02:14.853425 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 09:02:14.853501 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 09:02:14.857492 initrd-setup-root-after-ignition[977]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 2 09:02:14.860483 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 09:02:14.860483 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 09:02:14.863305 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 09:02:14.863121 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 09:02:14.864535 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 09:02:14.876454 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 09:02:14.894309 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 09:02:14.894402 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 09:02:14.895998 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 09:02:14.897293 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 09:02:14.898016 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 09:02:14.899440 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 09:02:14.914179 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 09:02:14.926483 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 09:02:14.933642 systemd[1]: Stopped target network.target - Network.
Jul 2 09:02:14.934387 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 09:02:14.935914 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 09:02:14.937593 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 09:02:14.939033 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 09:02:14.939138 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 09:02:14.941346 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 09:02:14.943382 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 09:02:14.944750 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 09:02:14.946153 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 09:02:14.947754 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 09:02:14.949480 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 09:02:14.950991 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 09:02:14.952576 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 09:02:14.954187 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 09:02:14.955730 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 09:02:14.957096 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 09:02:14.957201 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 09:02:14.959190 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 09:02:14.960827 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 09:02:14.962377 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 09:02:14.963365 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 09:02:14.964992 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 09:02:14.965093 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 09:02:14.967602 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 09:02:14.967704 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 09:02:14.969398 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 09:02:14.970747 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 09:02:14.970857 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 09:02:14.972487 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 09:02:14.973761 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 09:02:14.975198 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 09:02:14.975287 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 09:02:14.977047 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 09:02:14.977123 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 09:02:14.978403 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 09:02:14.978503 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 09:02:14.979972 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 09:02:14.980062 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 09:02:14.991518 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 09:02:14.992248 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 09:02:14.992383 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 09:02:14.997536 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 09:02:14.998363 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 09:02:14.999773 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 09:02:15.001654 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 09:02:15.004193 ignition[1004]: INFO : Ignition 2.18.0
Jul 2 09:02:15.004193 ignition[1004]: INFO : Stage: umount
Jul 2 09:02:15.004193 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 09:02:15.004193 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:02:15.004193 ignition[1004]: INFO : umount: umount passed
Jul 2 09:02:15.004193 ignition[1004]: INFO : Ignition finished successfully
Jul 2 09:02:15.001781 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 09:02:15.003405 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 09:02:15.003493 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 09:02:15.007257 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 09:02:15.007492 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 09:02:15.012997 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 09:02:15.013182 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 09:02:15.013947 systemd-networkd[768]: eth0: DHCPv6 lease lost
Jul 2 09:02:15.015765 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 09:02:15.015810 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 09:02:15.016678 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 09:02:15.016720 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 09:02:15.017462 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 09:02:15.017502 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 09:02:15.019096 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 09:02:15.019613 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 09:02:15.019735 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 09:02:15.022673 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 09:02:15.023172 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 09:02:15.024474 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 09:02:15.024558 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 09:02:15.028087 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 09:02:15.028132 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 09:02:15.035426 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 09:02:15.036749 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 09:02:15.036805 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 09:02:15.038404 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 09:02:15.038443 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 09:02:15.040032 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 09:02:15.040072 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 09:02:15.041468 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 09:02:15.041502 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 09:02:15.043261 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 09:02:15.054007 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 09:02:15.054098 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 09:02:15.058872 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 09:02:15.059010 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 09:02:15.062311 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 09:02:15.062349 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 09:02:15.063149 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 09:02:15.063181 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 09:02:15.063980 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 09:02:15.064027 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 09:02:15.066339 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 09:02:15.066385 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 09:02:15.068433 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 09:02:15.068476 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 09:02:15.077421 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 09:02:15.078620 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 09:02:15.078671 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 09:02:15.080327 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 09:02:15.080367 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 09:02:15.082044 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jul 2 09:02:15.082127 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 09:02:15.083158 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 09:02:15.083220 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 09:02:15.085205 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 09:02:15.086687 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 09:02:15.086748 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 09:02:15.088694 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 09:02:15.098185 systemd[1]: Switching root. Jul 2 09:02:15.124247 systemd-journald[238]: Journal stopped Jul 2 09:02:15.788865 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Jul 2 09:02:15.788925 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 09:02:15.788938 kernel: SELinux: policy capability open_perms=1 Jul 2 09:02:15.788947 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 09:02:15.788957 kernel: SELinux: policy capability always_check_network=0 Jul 2 09:02:15.788967 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 09:02:15.788976 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 09:02:15.788986 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 09:02:15.788996 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 09:02:15.789007 kernel: audit: type=1403 audit(1719910935.270:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 09:02:15.789018 systemd[1]: Successfully loaded SELinux policy in 30.767ms. Jul 2 09:02:15.789034 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.667ms. 
Jul 2 09:02:15.789049 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 09:02:15.789063 systemd[1]: Detected virtualization kvm. Jul 2 09:02:15.789073 systemd[1]: Detected architecture arm64. Jul 2 09:02:15.789084 systemd[1]: Detected first boot. Jul 2 09:02:15.789099 systemd[1]: Initializing machine ID from VM UUID. Jul 2 09:02:15.789110 zram_generator::config[1048]: No configuration found. Jul 2 09:02:15.789123 systemd[1]: Populated /etc with preset unit settings. Jul 2 09:02:15.789133 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 09:02:15.789143 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 2 09:02:15.789154 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 09:02:15.789165 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 09:02:15.789176 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 09:02:15.789187 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 09:02:15.789198 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 09:02:15.789210 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 09:02:15.789221 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 09:02:15.789231 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 09:02:15.789242 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 09:02:15.789252 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 2 09:02:15.789396 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 09:02:15.789420 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 09:02:15.789432 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 09:02:15.789446 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 09:02:15.789461 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 09:02:15.789472 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 2 09:02:15.789483 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 09:02:15.789493 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 2 09:02:15.789503 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 2 09:02:15.789514 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 2 09:02:15.789524 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 09:02:15.789537 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 09:02:15.789548 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 09:02:15.789558 systemd[1]: Reached target slices.target - Slice Units. Jul 2 09:02:15.789570 systemd[1]: Reached target swap.target - Swaps. Jul 2 09:02:15.789580 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 09:02:15.789590 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 09:02:15.789601 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 09:02:15.789611 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jul 2 09:02:15.789622 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 09:02:15.789632 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 09:02:15.789644 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 09:02:15.789655 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 09:02:15.789665 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 09:02:15.789677 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 09:02:15.789687 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 09:02:15.789697 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 2 09:02:15.789709 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 09:02:15.789720 systemd[1]: Reached target machines.target - Containers. Jul 2 09:02:15.789732 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 09:02:15.789742 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 09:02:15.789753 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 09:02:15.789763 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 09:02:15.789773 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 09:02:15.789784 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 09:02:15.789794 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 09:02:15.789804 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 09:02:15.789815 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 2 09:02:15.789834 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 09:02:15.789845 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 09:02:15.789855 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 2 09:02:15.789865 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 09:02:15.789875 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 09:02:15.789885 kernel: fuse: init (API version 7.39) Jul 2 09:02:15.789895 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 09:02:15.789906 kernel: loop: module loaded Jul 2 09:02:15.789916 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 09:02:15.789928 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 09:02:15.789938 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 09:02:15.789949 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 09:02:15.789959 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 09:02:15.789969 systemd[1]: Stopped verity-setup.service. Jul 2 09:02:15.789979 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 09:02:15.789989 kernel: ACPI: bus type drm_connector registered Jul 2 09:02:15.789999 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 09:02:15.790029 systemd-journald[1107]: Collecting audit messages is disabled. Jul 2 09:02:15.790053 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 09:02:15.790064 systemd-journald[1107]: Journal started Jul 2 09:02:15.790085 systemd-journald[1107]: Runtime Journal (/run/log/journal/ca6db50074784d519c912d3bc9e72443) is 5.9M, max 47.3M, 41.4M free. 
Jul 2 09:02:15.618717 systemd[1]: Queued start job for default target multi-user.target. Jul 2 09:02:15.635641 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 2 09:02:15.635982 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 09:02:15.792499 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 09:02:15.793126 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 09:02:15.794198 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 09:02:15.795455 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 2 09:02:15.797600 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 09:02:15.799012 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 09:02:15.799317 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 09:02:15.800471 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 09:02:15.800689 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 09:02:15.801794 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 09:02:15.802007 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 09:02:15.803134 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 09:02:15.803299 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 09:02:15.804627 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 09:02:15.804740 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 2 09:02:15.805869 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 09:02:15.805991 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 09:02:15.807212 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jul 2 09:02:15.808549 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 09:02:15.810008 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 09:02:15.813765 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 09:02:15.822579 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 2 09:02:15.831399 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 2 09:02:15.833114 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 09:02:15.833956 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 09:02:15.833992 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 09:02:15.835635 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 2 09:02:15.837441 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 2 09:02:15.840481 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 2 09:02:15.841783 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 09:02:15.843140 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 09:02:15.853433 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 2 09:02:15.854614 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 09:02:15.858442 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 2 09:02:15.859614 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jul 2 09:02:15.860577 systemd-journald[1107]: Time spent on flushing to /var/log/journal/ca6db50074784d519c912d3bc9e72443 is 42.004ms for 854 entries. Jul 2 09:02:15.860577 systemd-journald[1107]: System Journal (/var/log/journal/ca6db50074784d519c912d3bc9e72443) is 8.0M, max 195.6M, 187.6M free. Jul 2 09:02:15.912121 systemd-journald[1107]: Received client request to flush runtime journal. Jul 2 09:02:15.912157 kernel: loop0: detected capacity change from 0 to 113672 Jul 2 09:02:15.912170 kernel: block loop0: the capability attribute has been deprecated. Jul 2 09:02:15.912335 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 09:02:15.865229 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 09:02:15.867087 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 09:02:15.870540 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 2 09:02:15.875380 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 09:02:15.876786 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 09:02:15.880234 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 09:02:15.881537 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 2 09:02:15.897868 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 09:02:15.899395 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 2 09:02:15.905710 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 09:02:15.908681 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 2 09:02:15.910023 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 09:02:15.928760 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Jul 2 09:02:15.932467 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 09:02:15.935438 kernel: loop1: detected capacity change from 0 to 59672 Jul 2 09:02:15.938883 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 2 09:02:15.945622 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 2 09:02:15.946904 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 09:02:15.949325 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 09:02:15.971498 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jul 2 09:02:15.971520 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jul 2 09:02:15.975354 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 09:02:15.984290 kernel: loop2: detected capacity change from 0 to 194512 Jul 2 09:02:16.025354 kernel: loop3: detected capacity change from 0 to 113672 Jul 2 09:02:16.030571 kernel: loop4: detected capacity change from 0 to 59672 Jul 2 09:02:16.034312 kernel: loop5: detected capacity change from 0 to 194512 Jul 2 09:02:16.038134 (sd-merge)[1186]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 2 09:02:16.038508 (sd-merge)[1186]: Merged extensions into '/usr'. Jul 2 09:02:16.042744 systemd[1]: Reloading requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)... Jul 2 09:02:16.042759 systemd[1]: Reloading... Jul 2 09:02:16.095316 zram_generator::config[1210]: No configuration found. Jul 2 09:02:16.115219 ldconfig[1153]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jul 2 09:02:16.182748 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 09:02:16.219440 systemd[1]: Reloading finished in 176 ms. Jul 2 09:02:16.246589 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 09:02:16.249725 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 09:02:16.267615 systemd[1]: Starting ensure-sysext.service... Jul 2 09:02:16.269257 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 09:02:16.277713 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)... Jul 2 09:02:16.277731 systemd[1]: Reloading... Jul 2 09:02:16.288357 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 09:02:16.288605 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 09:02:16.289344 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 09:02:16.289563 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Jul 2 09:02:16.289611 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Jul 2 09:02:16.291674 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 09:02:16.291687 systemd-tmpfiles[1247]: Skipping /boot Jul 2 09:02:16.297938 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 09:02:16.297952 systemd-tmpfiles[1247]: Skipping /boot Jul 2 09:02:16.330311 zram_generator::config[1272]: No configuration found. 
Jul 2 09:02:16.398850 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 09:02:16.435379 systemd[1]: Reloading finished in 157 ms. Jul 2 09:02:16.451144 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 09:02:16.467718 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 09:02:16.474778 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 09:02:16.477061 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 09:02:16.479037 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 09:02:16.481517 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 09:02:16.483532 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 09:02:16.486552 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 09:02:16.489387 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 09:02:16.490613 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 09:02:16.495436 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 09:02:16.500518 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 09:02:16.501607 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 09:02:16.503939 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jul 2 09:02:16.505979 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 09:02:16.506114 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 09:02:16.507829 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 09:02:16.508923 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 09:02:16.509955 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 09:02:16.510501 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 2 09:02:16.514866 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 09:02:16.514982 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 09:02:16.516630 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 09:02:16.516745 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 09:02:16.521763 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 09:02:16.521911 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 09:02:16.523593 systemd[1]: Finished ensure-sysext.service. Jul 2 09:02:16.528958 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 09:02:16.529116 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 09:02:16.535416 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 09:02:16.540186 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 2 09:02:16.540271 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 09:02:16.546790 systemd-udevd[1314]: Using default interface naming scheme 'v255'. Jul 2 09:02:16.549584 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 2 09:02:16.551411 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 09:02:16.554318 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 09:02:16.556759 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 09:02:16.560655 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 09:02:16.567058 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 09:02:16.569436 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 09:02:16.573825 augenrules[1351]: No rules Jul 2 09:02:16.582514 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 09:02:16.583508 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 09:02:16.598538 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 2 09:02:16.612311 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1374) Jul 2 09:02:16.615327 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 2 09:02:16.616446 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 09:02:16.640667 systemd-resolved[1312]: Positive Trust Anchors: Jul 2 09:02:16.644303 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1360) Jul 2 09:02:16.642888 systemd-resolved[1312]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 09:02:16.642951 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 09:02:16.653240 systemd-resolved[1312]: Defaulting to hostname 'linux'. Jul 2 09:02:16.661959 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 09:02:16.665915 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 09:02:16.671476 systemd-networkd[1363]: lo: Link UP Jul 2 09:02:16.671484 systemd-networkd[1363]: lo: Gained carrier Jul 2 09:02:16.672374 systemd-networkd[1363]: Enumeration completed Jul 2 09:02:16.672461 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 09:02:16.673404 systemd[1]: Reached target network.target - Network. Jul 2 09:02:16.675859 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 09:02:16.675868 systemd-networkd[1363]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 09:02:16.676529 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 2 09:02:16.676562 systemd-networkd[1363]: eth0: Link UP
Jul 2 09:02:16.676564 systemd-networkd[1363]: eth0: Gained carrier
Jul 2 09:02:16.676572 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 09:02:16.680459 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 09:02:16.685616 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 09:02:16.690351 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 09:02:16.693350 systemd-networkd[1363]: eth0: DHCPv4 address 10.0.0.47/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 09:02:16.693965 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection.
Jul 2 09:02:17.161886 systemd-resolved[1312]: Clock change detected. Flushing caches.
Jul 2 09:02:17.161950 systemd-timesyncd[1338]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 2 09:02:17.161995 systemd-timesyncd[1338]: Initial clock synchronization to Tue 2024-07-02 09:02:17.161848 UTC.
Jul 2 09:02:17.171743 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 09:02:17.190594 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 09:02:17.199419 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 09:02:17.201784 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 09:02:17.219392 lvm[1394]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 09:02:17.231424 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:02:17.241282 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 09:02:17.242824 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 09:02:17.243705 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 09:02:17.244571 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 09:02:17.245447 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 09:02:17.246480 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 09:02:17.247343 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 09:02:17.248244 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 09:02:17.249286 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 09:02:17.249324 systemd[1]: Reached target paths.target - Path Units.
Jul 2 09:02:17.250001 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 09:02:17.251334 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 09:02:17.253334 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 09:02:17.262306 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 09:02:17.264191 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 09:02:17.265486 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 09:02:17.266319 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 09:02:17.267054 systemd[1]: Reached target basic.target - Basic System.
Jul 2 09:02:17.267808 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 2 09:02:17.267840 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 2 09:02:17.268709 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 2 09:02:17.270416 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 2 09:02:17.271140 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 09:02:17.273441 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 2 09:02:17.276749 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 2 09:02:17.279660 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 2 09:02:17.281540 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 2 09:02:17.283466 jq[1405]: false
Jul 2 09:02:17.284724 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 2 09:02:17.286664 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 2 09:02:17.291161 extend-filesystems[1406]: Found loop3
Jul 2 09:02:17.292228 extend-filesystems[1406]: Found loop4
Jul 2 09:02:17.292228 extend-filesystems[1406]: Found loop5
Jul 2 09:02:17.292228 extend-filesystems[1406]: Found vda
Jul 2 09:02:17.292228 extend-filesystems[1406]: Found vda1
Jul 2 09:02:17.292228 extend-filesystems[1406]: Found vda2
Jul 2 09:02:17.292228 extend-filesystems[1406]: Found vda3
Jul 2 09:02:17.292228 extend-filesystems[1406]: Found usr
Jul 2 09:02:17.292228 extend-filesystems[1406]: Found vda4
Jul 2 09:02:17.292228 extend-filesystems[1406]: Found vda6
Jul 2 09:02:17.292228 extend-filesystems[1406]: Found vda7
Jul 2 09:02:17.292228 extend-filesystems[1406]: Found vda9
Jul 2 09:02:17.292228 extend-filesystems[1406]: Checking size of /dev/vda9
Jul 2 09:02:17.304769 extend-filesystems[1406]: Resized partition /dev/vda9
Jul 2 09:02:17.312555 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 2 09:02:17.312600 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1369)
Jul 2 09:02:17.292919 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 2 09:02:17.301093 dbus-daemon[1404]: [system] SELinux support is enabled
Jul 2 09:02:17.312895 extend-filesystems[1422]: resize2fs 1.47.0 (5-Feb-2023)
Jul 2 09:02:17.303563 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 2 09:02:17.311696 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 09:02:17.312143 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 09:02:17.321523 systemd[1]: Starting update-engine.service - Update Engine...
Jul 2 09:02:17.325027 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 2 09:02:17.327137 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 2 09:02:17.328296 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 2 09:02:17.330503 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 09:02:17.342468 extend-filesystems[1422]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 2 09:02:17.342468 extend-filesystems[1422]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 2 09:02:17.342468 extend-filesystems[1422]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 2 09:02:17.335796 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 09:02:17.345980 jq[1427]: true
Jul 2 09:02:17.346250 extend-filesystems[1406]: Resized filesystem in /dev/vda9
Jul 2 09:02:17.338441 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 2 09:02:17.338734 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 09:02:17.338873 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 2 09:02:17.341733 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 09:02:17.341873 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 2 09:02:17.343754 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 09:02:17.343889 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 2 09:02:17.354255 (ntainerd)[1433]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 2 09:02:17.357290 jq[1432]: true
Jul 2 09:02:17.377007 tar[1431]: linux-arm64/helm
Jul 2 09:02:17.377254 update_engine[1424]: I0702 09:02:17.376788 1424 main.cc:92] Flatcar Update Engine starting
Jul 2 09:02:17.377882 systemd-logind[1416]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 2 09:02:17.381554 systemd-logind[1416]: New seat seat0.
Jul 2 09:02:17.382209 update_engine[1424]: I0702 09:02:17.382165 1424 update_check_scheduler.cc:74] Next update check in 8m52s
Jul 2 09:02:17.382522 systemd[1]: Started update-engine.service - Update Engine.
Jul 2 09:02:17.385771 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 09:02:17.385805 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 2 09:02:17.387101 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 09:02:17.387124 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 2 09:02:17.397532 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 2 09:02:17.398738 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 2 09:02:17.421660 bash[1461]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 09:02:17.423018 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 2 09:02:17.424952 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 2 09:02:17.448977 locksmithd[1447]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 09:02:17.560756 containerd[1433]: time="2024-07-02T09:02:17.560659701Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Jul 2 09:02:17.584751 containerd[1433]: time="2024-07-02T09:02:17.584658661Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 2 09:02:17.584965 containerd[1433]: time="2024-07-02T09:02:17.584838101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 09:02:17.586223 containerd[1433]: time="2024-07-02T09:02:17.586187701Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 09:02:17.586515 containerd[1433]: time="2024-07-02T09:02:17.586408461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 09:02:17.586715 containerd[1433]: time="2024-07-02T09:02:17.586690261Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 09:02:17.587564 containerd[1433]: time="2024-07-02T09:02:17.586947621Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 09:02:17.587564 containerd[1433]: time="2024-07-02T09:02:17.587045141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 2 09:02:17.587564 containerd[1433]: time="2024-07-02T09:02:17.587100661Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 09:02:17.587564 containerd[1433]: time="2024-07-02T09:02:17.587113381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 09:02:17.587564 containerd[1433]: time="2024-07-02T09:02:17.587167741Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 09:02:17.587564 containerd[1433]: time="2024-07-02T09:02:17.587504221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 09:02:17.587564 containerd[1433]: time="2024-07-02T09:02:17.587524221Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 09:02:17.587564 containerd[1433]: time="2024-07-02T09:02:17.587533381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 09:02:17.587986 containerd[1433]: time="2024-07-02T09:02:17.587959501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 09:02:17.588138 containerd[1433]: time="2024-07-02T09:02:17.588119141Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 09:02:17.588312 containerd[1433]: time="2024-07-02T09:02:17.588291461Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 09:02:17.588388 containerd[1433]: time="2024-07-02T09:02:17.588359621Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 09:02:17.592803 containerd[1433]: time="2024-07-02T09:02:17.592698101Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 09:02:17.592803 containerd[1433]: time="2024-07-02T09:02:17.592734421Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 09:02:17.592803 containerd[1433]: time="2024-07-02T09:02:17.592747021Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 09:02:17.592803 containerd[1433]: time="2024-07-02T09:02:17.592777781Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 2 09:02:17.594506 containerd[1433]: time="2024-07-02T09:02:17.593027221Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 2 09:02:17.594506 containerd[1433]: time="2024-07-02T09:02:17.593050581Z" level=info msg="NRI interface is disabled by configuration."
Jul 2 09:02:17.594506 containerd[1433]: time="2024-07-02T09:02:17.593068021Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 09:02:17.594506 containerd[1433]: time="2024-07-02T09:02:17.593194381Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 2 09:02:17.594506 containerd[1433]: time="2024-07-02T09:02:17.593212781Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 2 09:02:17.594506 containerd[1433]: time="2024-07-02T09:02:17.593225461Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 2 09:02:17.594506 containerd[1433]: time="2024-07-02T09:02:17.593238221Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 2 09:02:17.594506 containerd[1433]: time="2024-07-02T09:02:17.593251021Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 09:02:17.594506 containerd[1433]: time="2024-07-02T09:02:17.593265901Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 09:02:17.594506 containerd[1433]: time="2024-07-02T09:02:17.593279021Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 09:02:17.594506 containerd[1433]: time="2024-07-02T09:02:17.593291101Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 09:02:17.594506 containerd[1433]: time="2024-07-02T09:02:17.593304701Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 09:02:17.594506 containerd[1433]: time="2024-07-02T09:02:17.593316821Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 09:02:17.594506 containerd[1433]: time="2024-07-02T09:02:17.593328661Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 09:02:17.594506 containerd[1433]: time="2024-07-02T09:02:17.593342181Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 09:02:17.594834 containerd[1433]: time="2024-07-02T09:02:17.593461381Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 09:02:17.594834 containerd[1433]: time="2024-07-02T09:02:17.593735941Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 09:02:17.594834 containerd[1433]: time="2024-07-02T09:02:17.593762461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 09:02:17.594834 containerd[1433]: time="2024-07-02T09:02:17.593775221Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 2 09:02:17.594834 containerd[1433]: time="2024-07-02T09:02:17.593796381Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 09:02:17.594834 containerd[1433]: time="2024-07-02T09:02:17.593918461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 09:02:17.594834 containerd[1433]: time="2024-07-02T09:02:17.593933581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 09:02:17.594834 containerd[1433]: time="2024-07-02T09:02:17.593946301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 09:02:17.594834 containerd[1433]: time="2024-07-02T09:02:17.593958501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 09:02:17.594834 containerd[1433]: time="2024-07-02T09:02:17.593976101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 09:02:17.594834 containerd[1433]: time="2024-07-02T09:02:17.593992221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 09:02:17.594834 containerd[1433]: time="2024-07-02T09:02:17.594003901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 09:02:17.594834 containerd[1433]: time="2024-07-02T09:02:17.594014341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 09:02:17.595195 containerd[1433]: time="2024-07-02T09:02:17.594026861Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 09:02:17.595195 containerd[1433]: time="2024-07-02T09:02:17.594146021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 2 09:02:17.595195 containerd[1433]: time="2024-07-02T09:02:17.594163061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 2 09:02:17.595195 containerd[1433]: time="2024-07-02T09:02:17.594175101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 09:02:17.595195 containerd[1433]: time="2024-07-02T09:02:17.594186981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 2 09:02:17.595195 containerd[1433]: time="2024-07-02T09:02:17.594199261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 09:02:17.595195 containerd[1433]: time="2024-07-02T09:02:17.594212141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 2 09:02:17.595195 containerd[1433]: time="2024-07-02T09:02:17.594223461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 09:02:17.595195 containerd[1433]: time="2024-07-02T09:02:17.594234301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 09:02:17.595409 containerd[1433]: time="2024-07-02T09:02:17.594656341Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 09:02:17.595409 containerd[1433]: time="2024-07-02T09:02:17.594711421Z" level=info msg="Connect containerd service"
Jul 2 09:02:17.595409 containerd[1433]: time="2024-07-02T09:02:17.594736141Z" level=info msg="using legacy CRI server"
Jul 2 09:02:17.595409 containerd[1433]: time="2024-07-02T09:02:17.594742461Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 2 09:02:17.595409 containerd[1433]: time="2024-07-02T09:02:17.594870061Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 09:02:17.595671 containerd[1433]: time="2024-07-02T09:02:17.595593181Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 09:02:17.595671 containerd[1433]: time="2024-07-02T09:02:17.595632461Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 09:02:17.595671 containerd[1433]: time="2024-07-02T09:02:17.595648661Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 2 09:02:17.595671 containerd[1433]: time="2024-07-02T09:02:17.595658301Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 09:02:17.595671 containerd[1433]: time="2024-07-02T09:02:17.595669661Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 2 09:02:17.596136 containerd[1433]: time="2024-07-02T09:02:17.596031501Z" level=info msg="Start subscribing containerd event"
Jul 2 09:02:17.596274 containerd[1433]: time="2024-07-02T09:02:17.596255141Z" level=info msg="Start recovering state"
Jul 2 09:02:17.596422 containerd[1433]: time="2024-07-02T09:02:17.596223421Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 09:02:17.596470 containerd[1433]: time="2024-07-02T09:02:17.596455181Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 2 09:02:17.596549 containerd[1433]: time="2024-07-02T09:02:17.596530821Z" level=info msg="Start event monitor"
Jul 2 09:02:17.596655 containerd[1433]: time="2024-07-02T09:02:17.596638501Z" level=info msg="Start snapshots syncer"
Jul 2 09:02:17.596729 containerd[1433]: time="2024-07-02T09:02:17.596715581Z" level=info msg="Start cni network conf syncer for default"
Jul 2 09:02:17.596964 containerd[1433]: time="2024-07-02T09:02:17.596831741Z" level=info msg="Start streaming server"
Jul 2 09:02:17.598516 systemd[1]: Started containerd.service - containerd container runtime.
Jul 2 09:02:17.599881 containerd[1433]: time="2024-07-02T09:02:17.599842781Z" level=info msg="containerd successfully booted in 0.039956s"
Jul 2 09:02:17.717870 tar[1431]: linux-arm64/LICENSE
Jul 2 09:02:17.717870 tar[1431]: linux-arm64/README.md
Jul 2 09:02:17.736418 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 2 09:02:18.390407 sshd_keygen[1428]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 09:02:18.408642 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 09:02:18.416706 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 09:02:18.421779 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 09:02:18.421956 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 09:02:18.424352 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 09:02:18.436978 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 09:02:18.451672 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 09:02:18.453716 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 2 09:02:18.455002 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 09:02:18.486481 systemd-networkd[1363]: eth0: Gained IPv6LL Jul 2 09:02:18.489475 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 09:02:18.490772 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 09:02:18.501574 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 2 09:02:18.505446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:02:18.507189 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 09:02:18.522118 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 2 09:02:18.523425 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 2 09:02:18.525049 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 09:02:18.528497 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 09:02:18.995155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 09:02:18.996603 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 09:02:18.998887 (kubelet)[1516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 09:02:18.999157 systemd[1]: Startup finished in 551ms (kernel) + 4.567s (initrd) + 3.296s (userspace) = 8.415s. Jul 2 09:02:19.464766 kubelet[1516]: E0702 09:02:19.464628 1516 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 09:02:19.467670 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 09:02:19.467819 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 09:02:24.271896 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 09:02:24.272944 systemd[1]: Started sshd@0-10.0.0.47:22-10.0.0.1:42470.service - OpenSSH per-connection server daemon (10.0.0.1:42470). Jul 2 09:02:24.326059 sshd[1530]: Accepted publickey for core from 10.0.0.1 port 42470 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:02:24.331247 sshd[1530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:02:24.341283 systemd-logind[1416]: New session 1 of user core. Jul 2 09:02:24.342273 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 09:02:24.353589 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 09:02:24.362721 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 09:02:24.365705 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 2 09:02:24.371230 (systemd)[1534]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:02:24.454273 systemd[1534]: Queued start job for default target default.target. Jul 2 09:02:24.462265 systemd[1534]: Created slice app.slice - User Application Slice. Jul 2 09:02:24.462295 systemd[1534]: Reached target paths.target - Paths. Jul 2 09:02:24.462307 systemd[1534]: Reached target timers.target - Timers. Jul 2 09:02:24.463522 systemd[1534]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 09:02:24.473340 systemd[1534]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 09:02:24.473420 systemd[1534]: Reached target sockets.target - Sockets. Jul 2 09:02:24.473432 systemd[1534]: Reached target basic.target - Basic System. Jul 2 09:02:24.473466 systemd[1534]: Reached target default.target - Main User Target. Jul 2 09:02:24.473491 systemd[1534]: Startup finished in 97ms. Jul 2 09:02:24.473757 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 09:02:24.475343 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 09:02:24.538831 systemd[1]: Started sshd@1-10.0.0.47:22-10.0.0.1:42484.service - OpenSSH per-connection server daemon (10.0.0.1:42484). Jul 2 09:02:24.585954 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 42484 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:02:24.587186 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:02:24.591036 systemd-logind[1416]: New session 2 of user core. Jul 2 09:02:24.598591 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 09:02:24.650348 sshd[1545]: pam_unix(sshd:session): session closed for user core Jul 2 09:02:24.665659 systemd[1]: sshd@1-10.0.0.47:22-10.0.0.1:42484.service: Deactivated successfully. Jul 2 09:02:24.667727 systemd[1]: session-2.scope: Deactivated successfully. 
Jul 2 09:02:24.669020 systemd-logind[1416]: Session 2 logged out. Waiting for processes to exit. Jul 2 09:02:24.679726 systemd[1]: Started sshd@2-10.0.0.47:22-10.0.0.1:42494.service - OpenSSH per-connection server daemon (10.0.0.1:42494). Jul 2 09:02:24.680564 systemd-logind[1416]: Removed session 2. Jul 2 09:02:24.712320 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 42494 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:02:24.713482 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:02:24.717427 systemd-logind[1416]: New session 3 of user core. Jul 2 09:02:24.728513 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 09:02:24.775703 sshd[1552]: pam_unix(sshd:session): session closed for user core Jul 2 09:02:24.784658 systemd[1]: sshd@2-10.0.0.47:22-10.0.0.1:42494.service: Deactivated successfully. Jul 2 09:02:24.786594 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 09:02:24.787833 systemd-logind[1416]: Session 3 logged out. Waiting for processes to exit. Jul 2 09:02:24.788926 systemd[1]: Started sshd@3-10.0.0.47:22-10.0.0.1:42500.service - OpenSSH per-connection server daemon (10.0.0.1:42500). Jul 2 09:02:24.790577 systemd-logind[1416]: Removed session 3. Jul 2 09:02:24.825560 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 42500 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:02:24.826778 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:02:24.830849 systemd-logind[1416]: New session 4 of user core. Jul 2 09:02:24.839592 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 09:02:24.890586 sshd[1559]: pam_unix(sshd:session): session closed for user core Jul 2 09:02:24.908524 systemd[1]: sshd@3-10.0.0.47:22-10.0.0.1:42500.service: Deactivated successfully. Jul 2 09:02:24.909778 systemd[1]: session-4.scope: Deactivated successfully. 
Jul 2 09:02:24.912394 systemd-logind[1416]: Session 4 logged out. Waiting for processes to exit. Jul 2 09:02:24.913477 systemd[1]: Started sshd@4-10.0.0.47:22-10.0.0.1:42508.service - OpenSSH per-connection server daemon (10.0.0.1:42508). Jul 2 09:02:24.914177 systemd-logind[1416]: Removed session 4. Jul 2 09:02:24.949915 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 42508 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:02:24.951024 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:02:24.954740 systemd-logind[1416]: New session 5 of user core. Jul 2 09:02:24.960503 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 09:02:25.020716 sudo[1569]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 09:02:25.020976 sudo[1569]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 09:02:25.036164 sudo[1569]: pam_unix(sudo:session): session closed for user root Jul 2 09:02:25.037941 sshd[1566]: pam_unix(sshd:session): session closed for user core Jul 2 09:02:25.047694 systemd[1]: sshd@4-10.0.0.47:22-10.0.0.1:42508.service: Deactivated successfully. Jul 2 09:02:25.049774 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 09:02:25.051104 systemd-logind[1416]: Session 5 logged out. Waiting for processes to exit. Jul 2 09:02:25.061651 systemd[1]: Started sshd@5-10.0.0.47:22-10.0.0.1:42524.service - OpenSSH per-connection server daemon (10.0.0.1:42524). Jul 2 09:02:25.062453 systemd-logind[1416]: Removed session 5. Jul 2 09:02:25.095662 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 42524 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:02:25.097229 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:02:25.100717 systemd-logind[1416]: New session 6 of user core. Jul 2 09:02:25.113535 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 2 09:02:25.163686 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 2 09:02:25.163945 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 09:02:25.166972 sudo[1578]: pam_unix(sudo:session): session closed for user root
Jul 2 09:02:25.171274 sudo[1577]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 2 09:02:25.171528 sudo[1577]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 09:02:25.185688 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 2 09:02:25.186830 auditctl[1581]: No rules
Jul 2 09:02:25.187657 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 2 09:02:25.187900 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 2 09:02:25.189451 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 09:02:25.211856 augenrules[1599]: No rules
Jul 2 09:02:25.213011 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 09:02:25.214160 sudo[1577]: pam_unix(sudo:session): session closed for user root
Jul 2 09:02:25.216345 sshd[1574]: pam_unix(sshd:session): session closed for user core
Jul 2 09:02:25.229682 systemd[1]: sshd@5-10.0.0.47:22-10.0.0.1:42524.service: Deactivated successfully.
Jul 2 09:02:25.231034 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 09:02:25.233080 systemd-logind[1416]: Session 6 logged out. Waiting for processes to exit.
Jul 2 09:02:25.242651 systemd[1]: Started sshd@6-10.0.0.47:22-10.0.0.1:42540.service - OpenSSH per-connection server daemon (10.0.0.1:42540).
Jul 2 09:02:25.243737 systemd-logind[1416]: Removed session 6.
Jul 2 09:02:25.276150 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 42540 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:02:25.277462 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:02:25.281321 systemd-logind[1416]: New session 7 of user core.
Jul 2 09:02:25.289579 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 2 09:02:25.339410 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 09:02:25.340521 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 09:02:25.445623 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 2 09:02:25.445693 (dockerd)[1620]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 2 09:02:25.680317 dockerd[1620]: time="2024-07-02T09:02:25.680197861Z" level=info msg="Starting up"
Jul 2 09:02:25.771026 dockerd[1620]: time="2024-07-02T09:02:25.770985021Z" level=info msg="Loading containers: start."
Jul 2 09:02:25.850431 kernel: Initializing XFRM netlink socket
Jul 2 09:02:25.912571 systemd-networkd[1363]: docker0: Link UP
Jul 2 09:02:25.929679 dockerd[1620]: time="2024-07-02T09:02:25.929636781Z" level=info msg="Loading containers: done."
Jul 2 09:02:25.988185 dockerd[1620]: time="2024-07-02T09:02:25.988065461Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 09:02:25.988309 dockerd[1620]: time="2024-07-02T09:02:25.988271301Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jul 2 09:02:25.988544 dockerd[1620]: time="2024-07-02T09:02:25.988407101Z" level=info msg="Daemon has completed initialization"
Jul 2 09:02:26.015819 dockerd[1620]: time="2024-07-02T09:02:26.015742621Z" level=info msg="API listen on /run/docker.sock"
Jul 2 09:02:26.015988 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 2 09:02:26.599118 containerd[1433]: time="2024-07-02T09:02:26.598844701Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\""
Jul 2 09:02:27.218037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2785770991.mount: Deactivated successfully.
Jul 2 09:02:28.553827 containerd[1433]: time="2024-07-02T09:02:28.553780901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:02:28.559233 containerd[1433]: time="2024-07-02T09:02:28.559199381Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=32256349"
Jul 2 09:02:28.561506 containerd[1433]: time="2024-07-02T09:02:28.560418421Z" level=info msg="ImageCreate event name:\"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:02:28.564834 containerd[1433]: time="2024-07-02T09:02:28.564803141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:02:28.566090 containerd[1433]: time="2024-07-02T09:02:28.566034941Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"32253147\" in 1.9671378s"
Jul 2 09:02:28.566090 containerd[1433]: time="2024-07-02T09:02:28.566072861Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\""
Jul 2 09:02:28.584461 containerd[1433]: time="2024-07-02T09:02:28.584431541Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\""
Jul 2 09:02:29.645460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 09:02:29.654553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 09:02:29.746582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 09:02:29.749294 (kubelet)[1835]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 09:02:29.789761 kubelet[1835]: E0702 09:02:29.789709 1835 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 09:02:29.794924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 09:02:29.795073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 09:02:30.208460 containerd[1433]: time="2024-07-02T09:02:30.208408461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:02:30.209022 containerd[1433]: time="2024-07-02T09:02:30.208988861Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=29228086"
Jul 2 09:02:30.209879 containerd[1433]: time="2024-07-02T09:02:30.209832381Z" level=info msg="ImageCreate event name:\"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:02:30.212597 containerd[1433]: time="2024-07-02T09:02:30.212571741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:02:30.213710 containerd[1433]: time="2024-07-02T09:02:30.213672141Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"30685210\" in 1.629205s"
Jul 2 09:02:30.213710 containerd[1433]: time="2024-07-02T09:02:30.213705501Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\""
Jul 2 09:02:30.233676 containerd[1433]: time="2024-07-02T09:02:30.233483861Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\""
Jul 2 09:02:31.254252 containerd[1433]: time="2024-07-02T09:02:31.254192181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:02:31.254810 containerd[1433]: time="2024-07-02T09:02:31.254778781Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=15578350"
Jul 2 09:02:31.255446 containerd[1433]: time="2024-07-02T09:02:31.255422141Z" level=info msg="ImageCreate event name:\"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:02:31.258924 containerd[1433]: time="2024-07-02T09:02:31.258351581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:02:31.259611 containerd[1433]: time="2024-07-02T09:02:31.259587981Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"17035492\" in 1.02606988s"
Jul 2 09:02:31.259803 containerd[1433]: time="2024-07-02T09:02:31.259699901Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\""
Jul 2 09:02:31.277162 containerd[1433]: time="2024-07-02T09:02:31.277133861Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\""
Jul 2 09:02:32.234545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3821672960.mount: Deactivated successfully.
Jul 2 09:02:32.574004 containerd[1433]: time="2024-07-02T09:02:32.573881861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:02:32.575347 containerd[1433]: time="2024-07-02T09:02:32.575231301Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=25052712"
Jul 2 09:02:32.576070 containerd[1433]: time="2024-07-02T09:02:32.576027101Z" level=info msg="ImageCreate event name:\"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:02:32.578322 containerd[1433]: time="2024-07-02T09:02:32.578273901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:02:32.578973 containerd[1433]: time="2024-07-02T09:02:32.578936581Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"25051729\" in 1.30176544s"
Jul 2 09:02:32.579020 containerd[1433]: time="2024-07-02T09:02:32.578973301Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\""
Jul 2 09:02:32.597115 containerd[1433]: time="2024-07-02T09:02:32.597082061Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jul 2 09:02:33.230361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1501688017.mount: Deactivated successfully.
Jul 2 09:02:33.899297 containerd[1433]: time="2024-07-02T09:02:33.899242141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:02:33.900089 containerd[1433]: time="2024-07-02T09:02:33.900060181Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Jul 2 09:02:33.901080 containerd[1433]: time="2024-07-02T09:02:33.901038461Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:02:33.903936 containerd[1433]: time="2024-07-02T09:02:33.903903901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:02:33.905160 containerd[1433]: time="2024-07-02T09:02:33.905097301Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.30798184s"
Jul 2 09:02:33.905160 containerd[1433]: time="2024-07-02T09:02:33.905134381Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jul 2 09:02:33.924024 containerd[1433]: time="2024-07-02T09:02:33.923996941Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 09:02:34.333844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2507691149.mount: Deactivated successfully.
Jul 2 09:02:34.337723 containerd[1433]: time="2024-07-02T09:02:34.337687261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:02:34.338841 containerd[1433]: time="2024-07-02T09:02:34.338800741Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Jul 2 09:02:34.339655 containerd[1433]: time="2024-07-02T09:02:34.339613221Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:02:34.341935 containerd[1433]: time="2024-07-02T09:02:34.341873181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:02:34.343318 containerd[1433]: time="2024-07-02T09:02:34.342924501Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 418.7874ms"
Jul 2 09:02:34.343318 containerd[1433]: time="2024-07-02T09:02:34.342956741Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jul 2 09:02:34.362675 containerd[1433]: time="2024-07-02T09:02:34.362577941Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jul 2 09:02:34.841301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2818468416.mount: Deactivated successfully.
Jul 2 09:02:36.516334 containerd[1433]: time="2024-07-02T09:02:36.516259821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:02:36.517111 containerd[1433]: time="2024-07-02T09:02:36.517022541Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788"
Jul 2 09:02:36.517689 containerd[1433]: time="2024-07-02T09:02:36.517655981Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:02:36.521199 containerd[1433]: time="2024-07-02T09:02:36.521157861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:02:36.523656 containerd[1433]: time="2024-07-02T09:02:36.523597461Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.1609836s"
Jul 2 09:02:36.523656 containerd[1433]: time="2024-07-02T09:02:36.523638941Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Jul 2 09:02:39.895478 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 09:02:39.904850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 09:02:39.992202 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 09:02:39.995646 (kubelet)[2065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 09:02:40.031549 kubelet[2065]: E0702 09:02:40.031492 2065 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 09:02:40.034415 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 09:02:40.034553 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 09:02:40.425013 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 09:02:40.435697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 09:02:40.455461 systemd[1]: Reloading requested from client PID 2080 ('systemctl') (unit session-7.scope)...
Jul 2 09:02:40.455480 systemd[1]: Reloading...
Jul 2 09:02:40.521431 zram_generator::config[2117]: No configuration found.
Jul 2 09:02:40.615749 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 09:02:40.668911 systemd[1]: Reloading finished in 213 ms.
Jul 2 09:02:40.710294 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 09:02:40.710356 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 09:02:40.710604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 09:02:40.712789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 09:02:40.806184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 09:02:40.809979 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 09:02:40.849359 kubelet[2163]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 09:02:40.849359 kubelet[2163]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 09:02:40.849359 kubelet[2163]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 09:02:40.849663 kubelet[2163]: I0702 09:02:40.849420 2163 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 09:02:41.493601 kubelet[2163]: I0702 09:02:41.493551 2163 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jul 2 09:02:41.493601 kubelet[2163]: I0702 09:02:41.493581 2163 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 09:02:41.493812 kubelet[2163]: I0702 09:02:41.493766 2163 server.go:919] "Client rotation is on, will bootstrap in background"
Jul 2 09:02:41.537695 kubelet[2163]: E0702 09:02:41.537669 2163 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.47:6443: connect: connection refused
Jul 2 09:02:41.537958 kubelet[2163]: I0702 09:02:41.537876 2163 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 09:02:41.773544 kubelet[2163]: I0702 09:02:41.773423 2163 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 09:02:41.774268 kubelet[2163]: I0702 09:02:41.774228 2163 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 09:02:41.774483 kubelet[2163]: I0702 09:02:41.774453 2163 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 09:02:41.774483 kubelet[2163]: I0702 09:02:41.774480 2163 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 09:02:41.774586 kubelet[2163]: I0702 09:02:41.774489 2163 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 09:02:41.775584 kubelet[2163]: I0702 09:02:41.775544 2163 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 09:02:41.777725 kubelet[2163]: I0702 09:02:41.777667 2163 kubelet.go:396] "Attempting to sync node with API server"
Jul 2 09:02:41.777725 kubelet[2163]: I0702 09:02:41.777709 2163 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 09:02:41.777725 kubelet[2163]: I0702 09:02:41.777729 2163 kubelet.go:312] "Adding apiserver pod source"
Jul 2 09:02:41.777725 kubelet[2163]: I0702 09:02:41.777739 2163 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 09:02:41.779272 kubelet[2163]: W0702 09:02:41.778472 2163 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused
Jul 2 09:02:41.779272 kubelet[2163]: E0702 09:02:41.778533 2163 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused
Jul 2 09:02:41.779562 kubelet[2163]: W0702 09:02:41.779494 2163 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused
Jul 2 09:02:41.779562 kubelet[2163]: E0702 09:02:41.779533 2163 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused
Jul 2 09:02:41.779742 kubelet[2163]: I0702 09:02:41.779718 2163 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 09:02:41.780389 kubelet[2163]: I0702 09:02:41.780303 2163 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 09:02:41.780514 kubelet[2163]: W0702 09:02:41.780500 2163 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 09:02:41.781657 kubelet[2163]: I0702 09:02:41.781619 2163 server.go:1256] "Started kubelet"
Jul 2 09:02:41.783044 kubelet[2163]: I0702 09:02:41.781813 2163 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 09:02:41.783044 kubelet[2163]: I0702 09:02:41.781964 2163 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 09:02:41.783044 kubelet[2163]: I0702 09:02:41.782154 2163 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 09:02:41.783996 kubelet[2163]: I0702 09:02:41.783664 2163 server.go:461] "Adding debug handlers to kubelet server"
Jul 2 09:02:41.785310 kubelet[2163]: I0702 09:02:41.784281 2163 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 09:02:41.791055 kubelet[2163]: I0702 09:02:41.789853 2163 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 09:02:41.791055 kubelet[2163]: I0702 09:02:41.789928 2163 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 09:02:41.791055 kubelet[2163]: I0702 09:02:41.789974 2163 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 09:02:41.791055 kubelet[2163]: W0702 09:02:41.790212 2163 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused
Jul 2 09:02:41.791055 kubelet[2163]: E0702 09:02:41.790249 2163 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused
Jul 2 09:02:41.791055 kubelet[2163]: E0702 09:02:41.790639 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="200ms"
Jul 2 09:02:41.791055 kubelet[2163]: E0702 09:02:41.790975 2163 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.47:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.47:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17de59e8424aee9d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-07-02 09:02:41.781599901 +0000 UTC m=+0.968597761,LastTimestamp:2024-07-02 09:02:41.781599901 +0000 UTC m=+0.968597761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 2 09:02:41.792135 kubelet[2163]: I0702 09:02:41.791487 2163 factory.go:221] Registration of the systemd container factory successfully
Jul 2 09:02:41.792135 kubelet[2163]: I0702 09:02:41.791557 2163 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 09:02:41.793294 kubelet[2163]: E0702 09:02:41.793080 2163 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 09:02:41.793426 kubelet[2163]: I0702 09:02:41.793387 2163 factory.go:221] Registration of the containerd container factory successfully
Jul 2 09:02:41.801733 kubelet[2163]: I0702 09:02:41.801707 2163 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 09:02:41.803378 kubelet[2163]: I0702 09:02:41.802784 2163 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 09:02:41.803378 kubelet[2163]: I0702 09:02:41.802821 2163 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 09:02:41.803378 kubelet[2163]: I0702 09:02:41.802837 2163 kubelet.go:2329] "Starting kubelet main sync loop"
Jul 2 09:02:41.803378 kubelet[2163]: E0702 09:02:41.802906 2163 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 09:02:41.803703 kubelet[2163]: W0702 09:02:41.803669 2163 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused
Jul 2 09:02:41.803755 kubelet[2163]: E0702 09:02:41.803713 2163 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused
Jul 2 09:02:41.810755 kubelet[2163]: I0702 09:02:41.810732 2163 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 09:02:41.810755 kubelet[2163]: I0702 09:02:41.810754 2163 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 09:02:41.810889 kubelet[2163]: I0702 09:02:41.810771 2163 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 09:02:41.850011 kubelet[2163]: I0702 09:02:41.849959 2163 policy_none.go:49] "None policy: Start"
Jul 2 09:02:41.850654 kubelet[2163]: I0702 09:02:41.850621 2163 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 09:02:41.850696 kubelet[2163]: I0702 09:02:41.850667 2163 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 09:02:41.870442 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 2 09:02:41.890129 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 2 09:02:41.891401 kubelet[2163]: I0702 09:02:41.891363 2163 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 09:02:41.892006 kubelet[2163]: E0702 09:02:41.891756 2163 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost"
Jul 2 09:02:41.894261 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 2 09:02:41.903155 kubelet[2163]: E0702 09:02:41.903120 2163 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 09:02:41.908256 kubelet[2163]: I0702 09:02:41.908153 2163 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 09:02:41.908452 kubelet[2163]: I0702 09:02:41.908432 2163 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 09:02:41.910254 kubelet[2163]: E0702 09:02:41.910234 2163 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 09:02:41.991115 kubelet[2163]: E0702 09:02:41.991063 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="400ms" Jul 2 09:02:42.093581 kubelet[2163]: I0702 09:02:42.093485 2163 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 09:02:42.093828 kubelet[2163]: E0702 09:02:42.093784 2163 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" Jul 2 09:02:42.103937 kubelet[2163]: I0702 09:02:42.103888 2163 topology_manager.go:215] "Topology Admit Handler" podUID="7a55cd9926b68b714669c2037fc9f0de" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 09:02:42.104741 kubelet[2163]: I0702 09:02:42.104714 2163 topology_manager.go:215] "Topology Admit Handler" podUID="42b008e702ec2a5b396aebedf13804b4" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 09:02:42.105767 kubelet[2163]: I0702 09:02:42.105737 2163 topology_manager.go:215] "Topology Admit Handler" podUID="593d08bacb1d5de22dcb8f5224a99e3c" 
podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 09:02:42.111074 systemd[1]: Created slice kubepods-burstable-pod7a55cd9926b68b714669c2037fc9f0de.slice - libcontainer container kubepods-burstable-pod7a55cd9926b68b714669c2037fc9f0de.slice. Jul 2 09:02:42.124707 systemd[1]: Created slice kubepods-burstable-pod42b008e702ec2a5b396aebedf13804b4.slice - libcontainer container kubepods-burstable-pod42b008e702ec2a5b396aebedf13804b4.slice. Jul 2 09:02:42.128001 systemd[1]: Created slice kubepods-burstable-pod593d08bacb1d5de22dcb8f5224a99e3c.slice - libcontainer container kubepods-burstable-pod593d08bacb1d5de22dcb8f5224a99e3c.slice. Jul 2 09:02:42.192836 kubelet[2163]: I0702 09:02:42.192523 2163 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/593d08bacb1d5de22dcb8f5224a99e3c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"593d08bacb1d5de22dcb8f5224a99e3c\") " pod="kube-system/kube-scheduler-localhost" Jul 2 09:02:42.192836 kubelet[2163]: I0702 09:02:42.192557 2163 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a55cd9926b68b714669c2037fc9f0de-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a55cd9926b68b714669c2037fc9f0de\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:02:42.192836 kubelet[2163]: I0702 09:02:42.192580 2163 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a55cd9926b68b714669c2037fc9f0de-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a55cd9926b68b714669c2037fc9f0de\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:02:42.192836 kubelet[2163]: I0702 09:02:42.192601 2163 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/7a55cd9926b68b714669c2037fc9f0de-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7a55cd9926b68b714669c2037fc9f0de\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:02:42.192836 kubelet[2163]: I0702 09:02:42.192623 2163 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:02:42.192992 kubelet[2163]: I0702 09:02:42.192644 2163 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:02:42.192992 kubelet[2163]: I0702 09:02:42.192662 2163 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:02:42.192992 kubelet[2163]: I0702 09:02:42.192682 2163 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:02:42.192992 kubelet[2163]: I0702 09:02:42.192701 2163 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:02:42.392499 kubelet[2163]: E0702 09:02:42.392406 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="800ms" Jul 2 09:02:42.425638 kubelet[2163]: E0702 09:02:42.425484 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:02:42.426411 containerd[1433]: time="2024-07-02T09:02:42.426171901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7a55cd9926b68b714669c2037fc9f0de,Namespace:kube-system,Attempt:0,}" Jul 2 09:02:42.426691 kubelet[2163]: E0702 09:02:42.426498 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:02:42.427080 containerd[1433]: time="2024-07-02T09:02:42.426889141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:42b008e702ec2a5b396aebedf13804b4,Namespace:kube-system,Attempt:0,}" Jul 2 09:02:42.430343 kubelet[2163]: E0702 09:02:42.430316 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:02:42.430674 containerd[1433]: time="2024-07-02T09:02:42.430641341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:593d08bacb1d5de22dcb8f5224a99e3c,Namespace:kube-system,Attempt:0,}" Jul 2 
09:02:42.495723 kubelet[2163]: I0702 09:02:42.495359 2163 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 09:02:42.495723 kubelet[2163]: E0702 09:02:42.495688 2163 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" Jul 2 09:02:42.887120 kubelet[2163]: W0702 09:02:42.887058 2163 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jul 2 09:02:42.887120 kubelet[2163]: E0702 09:02:42.887098 2163 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jul 2 09:02:42.905580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913107204.mount: Deactivated successfully. 
Jul 2 09:02:42.910219 containerd[1433]: time="2024-07-02T09:02:42.910125541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:02:42.910997 containerd[1433]: time="2024-07-02T09:02:42.910968821Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:02:42.911711 containerd[1433]: time="2024-07-02T09:02:42.911666461Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 09:02:42.912456 containerd[1433]: time="2024-07-02T09:02:42.912426581Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:02:42.913428 containerd[1433]: time="2024-07-02T09:02:42.913396461Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 09:02:42.913915 containerd[1433]: time="2024-07-02T09:02:42.913875341Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:02:42.914611 containerd[1433]: time="2024-07-02T09:02:42.914587381Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 2 09:02:42.915471 containerd[1433]: time="2024-07-02T09:02:42.915440701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:02:42.918230 
containerd[1433]: time="2024-07-02T09:02:42.918196981Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 487.48268ms" Jul 2 09:02:42.919849 containerd[1433]: time="2024-07-02T09:02:42.919751621Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 493.48216ms" Jul 2 09:02:42.921895 containerd[1433]: time="2024-07-02T09:02:42.921870461Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 494.91176ms" Jul 2 09:02:43.020405 kubelet[2163]: W0702 09:02:43.020109 2163 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jul 2 09:02:43.020405 kubelet[2163]: E0702 09:02:43.020177 2163 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jul 2 09:02:43.071706 containerd[1433]: time="2024-07-02T09:02:43.071471261Z" level=info msg="loading plugin 
\"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:02:43.071706 containerd[1433]: time="2024-07-02T09:02:43.071520341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:02:43.071706 containerd[1433]: time="2024-07-02T09:02:43.071538781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:02:43.071706 containerd[1433]: time="2024-07-02T09:02:43.071552341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:02:43.072497 containerd[1433]: time="2024-07-02T09:02:43.072324901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:02:43.072497 containerd[1433]: time="2024-07-02T09:02:43.072387541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:02:43.072497 containerd[1433]: time="2024-07-02T09:02:43.072406141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:02:43.072497 containerd[1433]: time="2024-07-02T09:02:43.072419741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:02:43.073335 containerd[1433]: time="2024-07-02T09:02:43.073246661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:02:43.073396 containerd[1433]: time="2024-07-02T09:02:43.073292141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:02:43.073396 containerd[1433]: time="2024-07-02T09:02:43.073406421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:02:43.073556 containerd[1433]: time="2024-07-02T09:02:43.073441101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:02:43.100562 systemd[1]: Started cri-containerd-8120b668ef0bf1577bcec4f76ec746d2b60f25c5e7d25a4b5c4bedb3eb8f722f.scope - libcontainer container 8120b668ef0bf1577bcec4f76ec746d2b60f25c5e7d25a4b5c4bedb3eb8f722f. Jul 2 09:02:43.101865 systemd[1]: Started cri-containerd-96b44a0f3cbf328b4753d277754f2ad240d1af1cb0e46f115e064964c7f3307e.scope - libcontainer container 96b44a0f3cbf328b4753d277754f2ad240d1af1cb0e46f115e064964c7f3307e. Jul 2 09:02:43.103059 systemd[1]: Started cri-containerd-ecd944fde0acdd06756b9bf4b691a9caa0696d6c15d6df47ab9b5491b3c1dd94.scope - libcontainer container ecd944fde0acdd06756b9bf4b691a9caa0696d6c15d6df47ab9b5491b3c1dd94. 
Jul 2 09:02:43.134709 kubelet[2163]: W0702 09:02:43.134659 2163 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jul 2 09:02:43.134709 kubelet[2163]: E0702 09:02:43.134715 2163 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jul 2 09:02:43.135737 containerd[1433]: time="2024-07-02T09:02:43.135513541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:42b008e702ec2a5b396aebedf13804b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8120b668ef0bf1577bcec4f76ec746d2b60f25c5e7d25a4b5c4bedb3eb8f722f\"" Jul 2 09:02:43.138284 kubelet[2163]: E0702 09:02:43.137546 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:02:43.138350 containerd[1433]: time="2024-07-02T09:02:43.137309141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7a55cd9926b68b714669c2037fc9f0de,Namespace:kube-system,Attempt:0,} returns sandbox id \"96b44a0f3cbf328b4753d277754f2ad240d1af1cb0e46f115e064964c7f3307e\"" Jul 2 09:02:43.138804 kubelet[2163]: E0702 09:02:43.138757 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:02:43.141128 containerd[1433]: time="2024-07-02T09:02:43.141049581Z" level=info msg="CreateContainer within sandbox \"96b44a0f3cbf328b4753d277754f2ad240d1af1cb0e46f115e064964c7f3307e\" for 
container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 09:02:43.141197 containerd[1433]: time="2024-07-02T09:02:43.141147981Z" level=info msg="CreateContainer within sandbox \"8120b668ef0bf1577bcec4f76ec746d2b60f25c5e7d25a4b5c4bedb3eb8f722f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 09:02:43.145196 containerd[1433]: time="2024-07-02T09:02:43.145165901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:593d08bacb1d5de22dcb8f5224a99e3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecd944fde0acdd06756b9bf4b691a9caa0696d6c15d6df47ab9b5491b3c1dd94\"" Jul 2 09:02:43.145711 kubelet[2163]: E0702 09:02:43.145694 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:02:43.147412 containerd[1433]: time="2024-07-02T09:02:43.147349181Z" level=info msg="CreateContainer within sandbox \"ecd944fde0acdd06756b9bf4b691a9caa0696d6c15d6df47ab9b5491b3c1dd94\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 09:02:43.157055 containerd[1433]: time="2024-07-02T09:02:43.157010181Z" level=info msg="CreateContainer within sandbox \"96b44a0f3cbf328b4753d277754f2ad240d1af1cb0e46f115e064964c7f3307e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"96800ea6cdc1ed455fbaad8ac726771038882ac4d4171c86fb65b2b15472fc6f\"" Jul 2 09:02:43.157717 containerd[1433]: time="2024-07-02T09:02:43.157688981Z" level=info msg="StartContainer for \"96800ea6cdc1ed455fbaad8ac726771038882ac4d4171c86fb65b2b15472fc6f\"" Jul 2 09:02:43.161889 containerd[1433]: time="2024-07-02T09:02:43.161847301Z" level=info msg="CreateContainer within sandbox \"8120b668ef0bf1577bcec4f76ec746d2b60f25c5e7d25a4b5c4bedb3eb8f722f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"4b0287d14b0d2cb0e1c7438cf6656aa23faa1760210c19e3ddba44d42f601c5f\"" Jul 2 09:02:43.162338 containerd[1433]: time="2024-07-02T09:02:43.162313421Z" level=info msg="StartContainer for \"4b0287d14b0d2cb0e1c7438cf6656aa23faa1760210c19e3ddba44d42f601c5f\"" Jul 2 09:02:43.164041 containerd[1433]: time="2024-07-02T09:02:43.163918861Z" level=info msg="CreateContainer within sandbox \"ecd944fde0acdd06756b9bf4b691a9caa0696d6c15d6df47ab9b5491b3c1dd94\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c04655b1b3ca0e83ff7551dcf8f72b71a09b357e735c4574201013237f15a3f6\"" Jul 2 09:02:43.164486 containerd[1433]: time="2024-07-02T09:02:43.164445741Z" level=info msg="StartContainer for \"c04655b1b3ca0e83ff7551dcf8f72b71a09b357e735c4574201013237f15a3f6\"" Jul 2 09:02:43.186537 systemd[1]: Started cri-containerd-4b0287d14b0d2cb0e1c7438cf6656aa23faa1760210c19e3ddba44d42f601c5f.scope - libcontainer container 4b0287d14b0d2cb0e1c7438cf6656aa23faa1760210c19e3ddba44d42f601c5f. Jul 2 09:02:43.187674 systemd[1]: Started cri-containerd-96800ea6cdc1ed455fbaad8ac726771038882ac4d4171c86fb65b2b15472fc6f.scope - libcontainer container 96800ea6cdc1ed455fbaad8ac726771038882ac4d4171c86fb65b2b15472fc6f. Jul 2 09:02:43.190981 systemd[1]: Started cri-containerd-c04655b1b3ca0e83ff7551dcf8f72b71a09b357e735c4574201013237f15a3f6.scope - libcontainer container c04655b1b3ca0e83ff7551dcf8f72b71a09b357e735c4574201013237f15a3f6. 
Jul 2 09:02:43.193097 kubelet[2163]: E0702 09:02:43.193068 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="1.6s" Jul 2 09:02:43.243500 containerd[1433]: time="2024-07-02T09:02:43.243282501Z" level=info msg="StartContainer for \"4b0287d14b0d2cb0e1c7438cf6656aa23faa1760210c19e3ddba44d42f601c5f\" returns successfully" Jul 2 09:02:43.243500 containerd[1433]: time="2024-07-02T09:02:43.243417981Z" level=info msg="StartContainer for \"c04655b1b3ca0e83ff7551dcf8f72b71a09b357e735c4574201013237f15a3f6\" returns successfully" Jul 2 09:02:43.243500 containerd[1433]: time="2024-07-02T09:02:43.243440581Z" level=info msg="StartContainer for \"96800ea6cdc1ed455fbaad8ac726771038882ac4d4171c86fb65b2b15472fc6f\" returns successfully" Jul 2 09:02:43.297243 kubelet[2163]: I0702 09:02:43.296925 2163 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 09:02:43.297243 kubelet[2163]: E0702 09:02:43.297213 2163 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" Jul 2 09:02:43.345250 kubelet[2163]: W0702 09:02:43.345156 2163 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jul 2 09:02:43.345250 kubelet[2163]: E0702 09:02:43.345212 2163 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jul 2 09:02:43.814406 kubelet[2163]: E0702 09:02:43.813322 2163 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:02:43.815322 kubelet[2163]: E0702 09:02:43.815062 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:02:43.816674 kubelet[2163]: E0702 09:02:43.816654 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:02:44.821179 kubelet[2163]: E0702 09:02:44.821115 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:02:44.821179 kubelet[2163]: E0702 09:02:44.821155 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:02:44.898986 kubelet[2163]: I0702 09:02:44.898924 2163 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 09:02:45.483967 kubelet[2163]: E0702 09:02:45.483937 2163 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 2 09:02:45.563294 kubelet[2163]: I0702 09:02:45.563244 2163 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 09:02:45.615100 kubelet[2163]: E0702 09:02:45.615069 2163 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 2 09:02:45.615632 kubelet[2163]: E0702 09:02:45.615570 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:02:45.781916 kubelet[2163]: I0702 09:02:45.781549 2163 apiserver.go:52] "Watching apiserver" Jul 2 09:02:45.790746 kubelet[2163]: I0702 09:02:45.790705 2163 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 09:02:45.820191 kubelet[2163]: E0702 09:02:45.820167 2163 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 2 09:02:45.820643 kubelet[2163]: E0702 09:02:45.820619 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:02:47.937313 systemd[1]: Reloading requested from client PID 2442 ('systemctl') (unit session-7.scope)... Jul 2 09:02:47.937329 systemd[1]: Reloading... Jul 2 09:02:48.000424 zram_generator::config[2482]: No configuration found. Jul 2 09:02:48.125356 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 09:02:48.190545 systemd[1]: Reloading finished in 252 ms. Jul 2 09:02:48.231465 kubelet[2163]: I0702 09:02:48.231428 2163 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 09:02:48.231684 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:02:48.235354 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 09:02:48.235637 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:02:48.235753 systemd[1]: kubelet.service: Consumed 1.144s CPU time, 116.0M memory peak, 0B memory swap peak. 
Jul 2 09:02:48.240647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:02:48.327798 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:02:48.331487 (kubelet)[2521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 09:02:48.372963 kubelet[2521]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 09:02:48.372963 kubelet[2521]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 09:02:48.372963 kubelet[2521]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 09:02:48.373287 kubelet[2521]: I0702 09:02:48.373014 2521 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 09:02:48.378540 kubelet[2521]: I0702 09:02:48.378504 2521 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 09:02:48.378540 kubelet[2521]: I0702 09:02:48.378534 2521 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 09:02:48.378705 kubelet[2521]: I0702 09:02:48.378688 2521 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 09:02:48.380365 kubelet[2521]: I0702 09:02:48.380336 2521 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 2 09:02:48.383235 kubelet[2521]: I0702 09:02:48.382936 2521 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 09:02:48.388204 kubelet[2521]: I0702 09:02:48.388175 2521 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 09:02:48.388405 kubelet[2521]: I0702 09:02:48.388394 2521 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 09:02:48.388586 kubelet[2521]: I0702 09:02:48.388553 2521 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null
} Jul 2 09:02:48.388586 kubelet[2521]: I0702 09:02:48.388578 2521 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 09:02:48.388586 kubelet[2521]: I0702 09:02:48.388587 2521 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 09:02:48.388772 kubelet[2521]: I0702 09:02:48.388617 2521 state_mem.go:36] "Initialized new in-memory state store" Jul 2 09:02:48.388772 kubelet[2521]: I0702 09:02:48.388710 2521 kubelet.go:396] "Attempting to sync node with API server" Jul 2 09:02:48.388772 kubelet[2521]: I0702 09:02:48.388723 2521 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 09:02:48.388772 kubelet[2521]: I0702 09:02:48.388742 2521 kubelet.go:312] "Adding apiserver pod source" Jul 2 09:02:48.388772 kubelet[2521]: I0702 09:02:48.388766 2521 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 09:02:48.389666 kubelet[2521]: I0702 09:02:48.389640 2521 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 09:02:48.390524 kubelet[2521]: I0702 09:02:48.389810 2521 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 09:02:48.391099 kubelet[2521]: I0702 09:02:48.391085 2521 server.go:1256] "Started kubelet" Jul 2 09:02:48.391939 kubelet[2521]: I0702 09:02:48.391641 2521 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 09:02:48.391939 kubelet[2521]: I0702 09:02:48.391842 2521 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 09:02:48.391939 kubelet[2521]: I0702 09:02:48.391891 2521 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 09:02:48.392626 kubelet[2521]: I0702 09:02:48.392597 2521 server.go:461] "Adding debug handlers to kubelet server" Jul 2 09:02:48.393830 kubelet[2521]: I0702 09:02:48.393806 2521 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 09:02:48.394045 kubelet[2521]: I0702 09:02:48.394023 2521 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 09:02:48.394109 kubelet[2521]: I0702 09:02:48.394096 2521 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 09:02:48.394238 kubelet[2521]: I0702 09:02:48.394221 2521 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 09:02:48.394287 kubelet[2521]: E0702 09:02:48.394263 2521 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 09:02:48.399180 kubelet[2521]: I0702 09:02:48.399155 2521 factory.go:221] Registration of the systemd container factory successfully Jul 2 09:02:48.401500 kubelet[2521]: I0702 09:02:48.401476 2521 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 09:02:48.402802 kubelet[2521]: I0702 09:02:48.402780 2521 factory.go:221] Registration of the containerd container factory successfully Jul 2 09:02:48.411935 kubelet[2521]: E0702 09:02:48.411905 2521 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 09:02:48.421489 kubelet[2521]: I0702 09:02:48.421464 2521 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 09:02:48.423480 kubelet[2521]: I0702 09:02:48.423463 2521 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 09:02:48.423530 kubelet[2521]: I0702 09:02:48.423484 2521 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 09:02:48.423530 kubelet[2521]: I0702 09:02:48.423499 2521 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 09:02:48.423569 kubelet[2521]: E0702 09:02:48.423555 2521 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 09:02:48.447254 sudo[2552]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 09:02:48.447499 sudo[2552]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 09:02:48.451365 kubelet[2521]: I0702 09:02:48.451334 2521 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 09:02:48.451365 kubelet[2521]: I0702 09:02:48.451360 2521 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 09:02:48.451486 kubelet[2521]: I0702 09:02:48.451388 2521 state_mem.go:36] "Initialized new in-memory state store" Jul 2 09:02:48.451555 kubelet[2521]: I0702 09:02:48.451541 2521 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 09:02:48.451580 kubelet[2521]: I0702 09:02:48.451565 2521 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 09:02:48.451580 kubelet[2521]: I0702 09:02:48.451572 2521 policy_none.go:49] "None policy: Start" Jul 2 09:02:48.452104 kubelet[2521]: I0702 09:02:48.452086 2521 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 09:02:48.452147 kubelet[2521]: I0702 09:02:48.452115 2521 state_mem.go:35] "Initializing new in-memory state store" Jul 2 09:02:48.452264 kubelet[2521]: I0702 09:02:48.452251 2521 state_mem.go:75] "Updated machine memory state" Jul 2 09:02:48.457194 kubelet[2521]: I0702 09:02:48.457172 2521 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 
09:02:48.457852 kubelet[2521]: I0702 09:02:48.457362 2521 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 09:02:48.498191 kubelet[2521]: I0702 09:02:48.498162 2521 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 09:02:48.504575 kubelet[2521]: I0702 09:02:48.504307 2521 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jul 2 09:02:48.504575 kubelet[2521]: I0702 09:02:48.504426 2521 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 09:02:48.524658 kubelet[2521]: I0702 09:02:48.524624 2521 topology_manager.go:215] "Topology Admit Handler" podUID="7a55cd9926b68b714669c2037fc9f0de" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 09:02:48.524757 kubelet[2521]: I0702 09:02:48.524710 2521 topology_manager.go:215] "Topology Admit Handler" podUID="42b008e702ec2a5b396aebedf13804b4" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 09:02:48.524811 kubelet[2521]: I0702 09:02:48.524773 2521 topology_manager.go:215] "Topology Admit Handler" podUID="593d08bacb1d5de22dcb8f5224a99e3c" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 09:02:48.594665 kubelet[2521]: I0702 09:02:48.594630 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:02:48.594665 kubelet[2521]: I0702 09:02:48.594673 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 2 09:02:48.594847 kubelet[2521]: I0702 09:02:48.594695 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:02:48.594847 kubelet[2521]: I0702 09:02:48.594714 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a55cd9926b68b714669c2037fc9f0de-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a55cd9926b68b714669c2037fc9f0de\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:02:48.594847 kubelet[2521]: I0702 09:02:48.594733 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a55cd9926b68b714669c2037fc9f0de-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a55cd9926b68b714669c2037fc9f0de\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:02:48.594847 kubelet[2521]: I0702 09:02:48.594752 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a55cd9926b68b714669c2037fc9f0de-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7a55cd9926b68b714669c2037fc9f0de\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:02:48.594942 kubelet[2521]: I0702 09:02:48.594891 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 2 09:02:48.594942 kubelet[2521]: I0702 09:02:48.594916 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:02:48.594942 kubelet[2521]: I0702 09:02:48.594936 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/593d08bacb1d5de22dcb8f5224a99e3c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"593d08bacb1d5de22dcb8f5224a99e3c\") " pod="kube-system/kube-scheduler-localhost" Jul 2 09:02:48.830463 kubelet[2521]: E0702 09:02:48.830355 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:02:48.830802 kubelet[2521]: E0702 09:02:48.830782 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:02:48.832341 kubelet[2521]: E0702 09:02:48.832322 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:02:48.873888 sudo[2552]: pam_unix(sudo:session): session closed for user root Jul 2 09:02:49.389329 kubelet[2521]: I0702 09:02:49.389287 2521 apiserver.go:52] "Watching apiserver" Jul 2 09:02:49.394507 kubelet[2521]: I0702 09:02:49.394471 2521 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 09:02:49.437818 kubelet[2521]: E0702 09:02:49.436913 2521 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:02:49.437818 kubelet[2521]: E0702 09:02:49.437192 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:02:49.437818 kubelet[2521]: E0702 09:02:49.437668 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:02:49.454964 kubelet[2521]: I0702 09:02:49.454650 2521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.454504021 podStartE2EDuration="1.454504021s" podCreationTimestamp="2024-07-02 09:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:02:49.454382981 +0000 UTC m=+1.119760961" watchObservedRunningTime="2024-07-02 09:02:49.454504021 +0000 UTC m=+1.119882001" Jul 2 09:02:49.462495 kubelet[2521]: I0702 09:02:49.462453 2521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.4624236210000001 podStartE2EDuration="1.462423621s" podCreationTimestamp="2024-07-02 09:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:02:49.461192981 +0000 UTC m=+1.126570961" watchObservedRunningTime="2024-07-02 09:02:49.462423621 +0000 UTC m=+1.127801561" Jul 2 09:02:49.468629 kubelet[2521]: I0702 09:02:49.468001 2521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.467974421 podStartE2EDuration="1.467974421s" 
podCreationTimestamp="2024-07-02 09:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:02:49.467840101 +0000 UTC m=+1.133218081" watchObservedRunningTime="2024-07-02 09:02:49.467974421 +0000 UTC m=+1.133352401" Jul 2 09:02:50.439511 kubelet[2521]: E0702 09:02:50.439480 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:02:51.180472 sudo[1610]: pam_unix(sudo:session): session closed for user root Jul 2 09:02:51.182157 sshd[1607]: pam_unix(sshd:session): session closed for user core Jul 2 09:02:51.185146 kubelet[2521]: E0702 09:02:51.185115 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:02:51.185740 systemd[1]: sshd@6-10.0.0.47:22-10.0.0.1:42540.service: Deactivated successfully. Jul 2 09:02:51.187265 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 09:02:51.187439 systemd[1]: session-7.scope: Consumed 6.962s CPU time, 137.9M memory peak, 0B memory swap peak. Jul 2 09:02:51.188629 systemd-logind[1416]: Session 7 logged out. Waiting for processes to exit. Jul 2 09:02:51.189749 systemd-logind[1416]: Removed session 7. 
Jul 2 09:02:52.291842 kubelet[2521]: E0702 09:02:52.291812 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:02:52.474275 kubelet[2521]: E0702 09:02:52.474204 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:03:01.192654 kubelet[2521]: E0702 09:03:01.192315 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:03:02.299605 kubelet[2521]: E0702 09:03:02.299536 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:03:02.428116 kubelet[2521]: I0702 09:03:02.427699 2521 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 09:03:02.428239 containerd[1433]: time="2024-07-02T09:03:02.428040728Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 09:03:02.428921 kubelet[2521]: I0702 09:03:02.428897 2521 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 09:03:02.433575 kubelet[2521]: I0702 09:03:02.433221 2521 topology_manager.go:215] "Topology Admit Handler" podUID="631f99d1-5d09-49d8-84c0-2b7b12eedffb" podNamespace="kube-system" podName="kube-proxy-vjpb6" Jul 2 09:03:02.441655 kubelet[2521]: I0702 09:03:02.441624 2521 topology_manager.go:215] "Topology Admit Handler" podUID="b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c" podNamespace="kube-system" podName="cilium-fnhrg" Jul 2 09:03:02.443058 systemd[1]: Created slice kubepods-besteffort-pod631f99d1_5d09_49d8_84c0_2b7b12eedffb.slice - libcontainer container kubepods-besteffort-pod631f99d1_5d09_49d8_84c0_2b7b12eedffb.slice. Jul 2 09:03:02.450521 kubelet[2521]: I0702 09:03:02.450258 2521 topology_manager.go:215] "Topology Admit Handler" podUID="15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f" podNamespace="kube-system" podName="cilium-operator-5cc964979-56vk6" Jul 2 09:03:02.450521 kubelet[2521]: W0702 09:03:02.450395 2521 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 09:03:02.450521 kubelet[2521]: E0702 09:03:02.450427 2521 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 09:03:02.450791 kubelet[2521]: W0702 09:03:02.450752 2521 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" 
cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 09:03:02.450791 kubelet[2521]: E0702 09:03:02.450787 2521 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 09:03:02.450871 kubelet[2521]: W0702 09:03:02.450824 2521 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 09:03:02.450871 kubelet[2521]: E0702 09:03:02.450835 2521 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 09:03:02.457594 kubelet[2521]: E0702 09:03:02.456328 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:03:02.459352 systemd[1]: Created slice kubepods-burstable-podb1b8bacf_8329_4f5d_9db2_1fcdb0439c2c.slice - libcontainer container kubepods-burstable-podb1b8bacf_8329_4f5d_9db2_1fcdb0439c2c.slice. Jul 2 09:03:02.465292 systemd[1]: Created slice kubepods-besteffort-pod15bb0c28_7ea9_46ab_b36a_c93fcba3bc5f.slice - libcontainer container kubepods-besteffort-pod15bb0c28_7ea9_46ab_b36a_c93fcba3bc5f.slice. 
Jul 2 09:03:02.484747 kubelet[2521]: I0702 09:03:02.484708 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-hubble-tls\") pod \"cilium-fnhrg\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") " pod="kube-system/cilium-fnhrg" Jul 2 09:03:02.484747 kubelet[2521]: I0702 09:03:02.484748 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f-cilium-config-path\") pod \"cilium-operator-5cc964979-56vk6\" (UID: \"15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f\") " pod="kube-system/cilium-operator-5cc964979-56vk6" Jul 2 09:03:02.484896 kubelet[2521]: I0702 09:03:02.484782 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/631f99d1-5d09-49d8-84c0-2b7b12eedffb-xtables-lock\") pod \"kube-proxy-vjpb6\" (UID: \"631f99d1-5d09-49d8-84c0-2b7b12eedffb\") " pod="kube-system/kube-proxy-vjpb6" Jul 2 09:03:02.484896 kubelet[2521]: I0702 09:03:02.484806 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-cilium-run\") pod \"cilium-fnhrg\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") " pod="kube-system/cilium-fnhrg" Jul 2 09:03:02.484896 kubelet[2521]: I0702 09:03:02.484825 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-cni-path\") pod \"cilium-fnhrg\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") " pod="kube-system/cilium-fnhrg" Jul 2 09:03:02.484896 kubelet[2521]: I0702 09:03:02.484842 2521 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-hostproc\") pod \"cilium-fnhrg\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") " pod="kube-system/cilium-fnhrg" Jul 2 09:03:02.484896 kubelet[2521]: I0702 09:03:02.484862 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-cilium-config-path\") pod \"cilium-fnhrg\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") " pod="kube-system/cilium-fnhrg" Jul 2 09:03:02.484896 kubelet[2521]: I0702 09:03:02.484882 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/631f99d1-5d09-49d8-84c0-2b7b12eedffb-lib-modules\") pod \"kube-proxy-vjpb6\" (UID: \"631f99d1-5d09-49d8-84c0-2b7b12eedffb\") " pod="kube-system/kube-proxy-vjpb6" Jul 2 09:03:02.485072 kubelet[2521]: I0702 09:03:02.484910 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-bpf-maps\") pod \"cilium-fnhrg\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") " pod="kube-system/cilium-fnhrg" Jul 2 09:03:02.485072 kubelet[2521]: I0702 09:03:02.484934 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k9cz\" (UniqueName: \"kubernetes.io/projected/15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f-kube-api-access-4k9cz\") pod \"cilium-operator-5cc964979-56vk6\" (UID: \"15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f\") " pod="kube-system/cilium-operator-5cc964979-56vk6" Jul 2 09:03:02.485072 kubelet[2521]: I0702 09:03:02.484953 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" 
(UniqueName: \"kubernetes.io/configmap/631f99d1-5d09-49d8-84c0-2b7b12eedffb-kube-proxy\") pod \"kube-proxy-vjpb6\" (UID: \"631f99d1-5d09-49d8-84c0-2b7b12eedffb\") " pod="kube-system/kube-proxy-vjpb6" Jul 2 09:03:02.485713 kubelet[2521]: I0702 09:03:02.485637 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27lg5\" (UniqueName: \"kubernetes.io/projected/631f99d1-5d09-49d8-84c0-2b7b12eedffb-kube-api-access-27lg5\") pod \"kube-proxy-vjpb6\" (UID: \"631f99d1-5d09-49d8-84c0-2b7b12eedffb\") " pod="kube-system/kube-proxy-vjpb6" Jul 2 09:03:02.485713 kubelet[2521]: I0702 09:03:02.485712 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-cilium-cgroup\") pod \"cilium-fnhrg\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") " pod="kube-system/cilium-fnhrg" Jul 2 09:03:02.485816 kubelet[2521]: I0702 09:03:02.485756 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-xtables-lock\") pod \"cilium-fnhrg\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") " pod="kube-system/cilium-fnhrg" Jul 2 09:03:02.485816 kubelet[2521]: I0702 09:03:02.485788 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-clustermesh-secrets\") pod \"cilium-fnhrg\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") " pod="kube-system/cilium-fnhrg" Jul 2 09:03:02.485859 kubelet[2521]: I0702 09:03:02.485835 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-host-proc-sys-net\") 
pod \"cilium-fnhrg\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") " pod="kube-system/cilium-fnhrg" Jul 2 09:03:02.485859 kubelet[2521]: I0702 09:03:02.485855 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-host-proc-sys-kernel\") pod \"cilium-fnhrg\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") " pod="kube-system/cilium-fnhrg" Jul 2 09:03:02.485951 kubelet[2521]: I0702 09:03:02.485912 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbhb6\" (UniqueName: \"kubernetes.io/projected/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-kube-api-access-rbhb6\") pod \"cilium-fnhrg\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") " pod="kube-system/cilium-fnhrg" Jul 2 09:03:02.485951 kubelet[2521]: I0702 09:03:02.485946 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-etc-cni-netd\") pod \"cilium-fnhrg\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") " pod="kube-system/cilium-fnhrg" Jul 2 09:03:02.486022 kubelet[2521]: I0702 09:03:02.485977 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-lib-modules\") pod \"cilium-fnhrg\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") " pod="kube-system/cilium-fnhrg" Jul 2 09:03:02.487077 kubelet[2521]: E0702 09:03:02.486949 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:03:02.595524 update_engine[1424]: I0702 09:03:02.595419 1424 update_attempter.cc:509] Updating boot flags... 
Jul 2 09:03:02.635393 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2609) Jul 2 09:03:02.661405 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2608) Jul 2 09:03:02.691409 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2608) Jul 2 09:03:02.754640 kubelet[2521]: E0702 09:03:02.754610 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:03:02.755609 containerd[1433]: time="2024-07-02T09:03:02.755567770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vjpb6,Uid:631f99d1-5d09-49d8-84c0-2b7b12eedffb,Namespace:kube-system,Attempt:0,}" Jul 2 09:03:02.774084 containerd[1433]: time="2024-07-02T09:03:02.773796750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:03:02.774084 containerd[1433]: time="2024-07-02T09:03:02.774056271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:03:02.774235 containerd[1433]: time="2024-07-02T09:03:02.774205031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:03:02.774314 containerd[1433]: time="2024-07-02T09:03:02.774222991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:03:02.799568 systemd[1]: Started cri-containerd-437dd52fb40255c34a4c214c63d88cbdf03caed3b6ca61f8897f6f9811507378.scope - libcontainer container 437dd52fb40255c34a4c214c63d88cbdf03caed3b6ca61f8897f6f9811507378. 
Jul 2 09:03:02.816452 containerd[1433]: time="2024-07-02T09:03:02.816353690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vjpb6,Uid:631f99d1-5d09-49d8-84c0-2b7b12eedffb,Namespace:kube-system,Attempt:0,} returns sandbox id \"437dd52fb40255c34a4c214c63d88cbdf03caed3b6ca61f8897f6f9811507378\"" Jul 2 09:03:02.817035 kubelet[2521]: E0702 09:03:02.817018 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:03:02.819197 containerd[1433]: time="2024-07-02T09:03:02.819146420Z" level=info msg="CreateContainer within sandbox \"437dd52fb40255c34a4c214c63d88cbdf03caed3b6ca61f8897f6f9811507378\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 09:03:02.833296 containerd[1433]: time="2024-07-02T09:03:02.833160946Z" level=info msg="CreateContainer within sandbox \"437dd52fb40255c34a4c214c63d88cbdf03caed3b6ca61f8897f6f9811507378\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"696f5a0860a1621534ef66c6a023fb9d5a9f437e3aaf8a2eef007edff49975e5\"" Jul 2 09:03:02.833685 containerd[1433]: time="2024-07-02T09:03:02.833649468Z" level=info msg="StartContainer for \"696f5a0860a1621534ef66c6a023fb9d5a9f437e3aaf8a2eef007edff49975e5\"" Jul 2 09:03:02.860580 systemd[1]: Started cri-containerd-696f5a0860a1621534ef66c6a023fb9d5a9f437e3aaf8a2eef007edff49975e5.scope - libcontainer container 696f5a0860a1621534ef66c6a023fb9d5a9f437e3aaf8a2eef007edff49975e5. 
Jul 2 09:03:02.884911 containerd[1433]: time="2024-07-02T09:03:02.883782673Z" level=info msg="StartContainer for \"696f5a0860a1621534ef66c6a023fb9d5a9f437e3aaf8a2eef007edff49975e5\" returns successfully"
Jul 2 09:03:03.459573 kubelet[2521]: E0702 09:03:03.459524 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:03.466939 kubelet[2521]: I0702 09:03:03.466659 2521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-vjpb6" podStartSLOduration=1.466612783 podStartE2EDuration="1.466612783s" podCreationTimestamp="2024-07-02 09:03:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:03:03.466437502 +0000 UTC m=+15.131815482" watchObservedRunningTime="2024-07-02 09:03:03.466612783 +0000 UTC m=+15.131990763"
Jul 2 09:03:03.587009 kubelet[2521]: E0702 09:03:03.586968 2521 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Jul 2 09:03:03.587009 kubelet[2521]: E0702 09:03:03.586987 2521 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-fnhrg: failed to sync secret cache: timed out waiting for the condition
Jul 2 09:03:03.587110 kubelet[2521]: E0702 09:03:03.587034 2521 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-hubble-tls podName:b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c nodeName:}" failed. No retries permitted until 2024-07-02 09:03:04.087018516 +0000 UTC m=+15.752396456 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-hubble-tls") pod "cilium-fnhrg" (UID: "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c") : failed to sync secret cache: timed out waiting for the condition
Jul 2 09:03:03.588298 kubelet[2521]: E0702 09:03:03.588268 2521 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Jul 2 09:03:03.588342 kubelet[2521]: E0702 09:03:03.588332 2521 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-clustermesh-secrets podName:b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c nodeName:}" failed. No retries permitted until 2024-07-02 09:03:04.08831784 +0000 UTC m=+15.753695820 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-clustermesh-secrets") pod "cilium-fnhrg" (UID: "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c") : failed to sync secret cache: timed out waiting for the condition
Jul 2 09:03:03.670947 kubelet[2521]: E0702 09:03:03.670906 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:03.671284 containerd[1433]: time="2024-07-02T09:03:03.671249337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-56vk6,Uid:15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f,Namespace:kube-system,Attempt:0,}"
Jul 2 09:03:03.688997 containerd[1433]: time="2024-07-02T09:03:03.688908871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 09:03:03.688997 containerd[1433]: time="2024-07-02T09:03:03.688973231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:03:03.689138 containerd[1433]: time="2024-07-02T09:03:03.688999552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 09:03:03.689138 containerd[1433]: time="2024-07-02T09:03:03.689017232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:03:03.710526 systemd[1]: Started cri-containerd-2d57459c9dbf64077f92fad9a606cf65d0dc3eea36fd05180c644839591b5a7d.scope - libcontainer container 2d57459c9dbf64077f92fad9a606cf65d0dc3eea36fd05180c644839591b5a7d.
Jul 2 09:03:03.736214 containerd[1433]: time="2024-07-02T09:03:03.736109137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-56vk6,Uid:15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d57459c9dbf64077f92fad9a606cf65d0dc3eea36fd05180c644839591b5a7d\""
Jul 2 09:03:03.736905 kubelet[2521]: E0702 09:03:03.736881 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:03.740930 containerd[1433]: time="2024-07-02T09:03:03.740686552Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 2 09:03:04.262382 kubelet[2521]: E0702 09:03:04.262346 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:04.263114 containerd[1433]: time="2024-07-02T09:03:04.262719598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fnhrg,Uid:b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c,Namespace:kube-system,Attempt:0,}"
Jul 2 09:03:04.280560 containerd[1433]: time="2024-07-02T09:03:04.280481970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 09:03:04.280560 containerd[1433]: time="2024-07-02T09:03:04.280551010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:03:04.280750 containerd[1433]: time="2024-07-02T09:03:04.280572250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 09:03:04.280750 containerd[1433]: time="2024-07-02T09:03:04.280586850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:03:04.301631 systemd[1]: Started cri-containerd-263b309e40e73ded1c07fa1de498d64687619b1a5419e9a8e9e9b4047aa6800e.scope - libcontainer container 263b309e40e73ded1c07fa1de498d64687619b1a5419e9a8e9e9b4047aa6800e.
Jul 2 09:03:04.320645 containerd[1433]: time="2024-07-02T09:03:04.320606766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fnhrg,Uid:b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"263b309e40e73ded1c07fa1de498d64687619b1a5419e9a8e9e9b4047aa6800e\""
Jul 2 09:03:04.321502 kubelet[2521]: E0702 09:03:04.321473 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:05.990257 containerd[1433]: time="2024-07-02T09:03:05.990213435Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:03:05.991101 containerd[1433]: time="2024-07-02T09:03:05.991062397Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138346"
Jul 2 09:03:05.991988 containerd[1433]: time="2024-07-02T09:03:05.991797879Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:03:05.993775 containerd[1433]: time="2024-07-02T09:03:05.993737004Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.253011532s"
Jul 2 09:03:05.993865 containerd[1433]: time="2024-07-02T09:03:05.993848445Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jul 2 09:03:05.994984 containerd[1433]: time="2024-07-02T09:03:05.994918168Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 2 09:03:05.996387 containerd[1433]: time="2024-07-02T09:03:05.996320171Z" level=info msg="CreateContainer within sandbox \"2d57459c9dbf64077f92fad9a606cf65d0dc3eea36fd05180c644839591b5a7d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 2 09:03:06.008559 containerd[1433]: time="2024-07-02T09:03:06.008518163Z" level=info msg="CreateContainer within sandbox \"2d57459c9dbf64077f92fad9a606cf65d0dc3eea36fd05180c644839591b5a7d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2\""
Jul 2 09:03:06.008951 containerd[1433]: time="2024-07-02T09:03:06.008891884Z" level=info msg="StartContainer for \"3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2\""
Jul 2 09:03:06.038517 systemd[1]: Started cri-containerd-3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2.scope - libcontainer container 3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2.
Jul 2 09:03:06.059715 containerd[1433]: time="2024-07-02T09:03:06.059674614Z" level=info msg="StartContainer for \"3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2\" returns successfully"
Jul 2 09:03:06.466232 kubelet[2521]: E0702 09:03:06.466205 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:07.467095 kubelet[2521]: E0702 09:03:07.467057 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:15.423787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4024942371.mount: Deactivated successfully.
Jul 2 09:03:16.621823 containerd[1433]: time="2024-07-02T09:03:16.621772234Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:03:16.622751 containerd[1433]: time="2024-07-02T09:03:16.622705315Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651506"
Jul 2 09:03:16.624873 containerd[1433]: time="2024-07-02T09:03:16.623235556Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:03:16.624873 containerd[1433]: time="2024-07-02T09:03:16.624715398Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.62975955s"
Jul 2 09:03:16.624873 containerd[1433]: time="2024-07-02T09:03:16.624755238Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jul 2 09:03:16.627606 containerd[1433]: time="2024-07-02T09:03:16.627480882Z" level=info msg="CreateContainer within sandbox \"263b309e40e73ded1c07fa1de498d64687619b1a5419e9a8e9e9b4047aa6800e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 09:03:16.636665 containerd[1433]: time="2024-07-02T09:03:16.635930053Z" level=info msg="CreateContainer within sandbox \"263b309e40e73ded1c07fa1de498d64687619b1a5419e9a8e9e9b4047aa6800e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d9df5f24f58375506a2ac0a9d6ec424a345e7c742e3a6576c81fc5a922736b73\""
Jul 2 09:03:16.636665 containerd[1433]: time="2024-07-02T09:03:16.636397574Z" level=info msg="StartContainer for \"d9df5f24f58375506a2ac0a9d6ec424a345e7c742e3a6576c81fc5a922736b73\""
Jul 2 09:03:16.636707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3667042646.mount: Deactivated successfully.
Jul 2 09:03:16.665519 systemd[1]: Started cri-containerd-d9df5f24f58375506a2ac0a9d6ec424a345e7c742e3a6576c81fc5a922736b73.scope - libcontainer container d9df5f24f58375506a2ac0a9d6ec424a345e7c742e3a6576c81fc5a922736b73.
Jul 2 09:03:16.684637 containerd[1433]: time="2024-07-02T09:03:16.683495717Z" level=info msg="StartContainer for \"d9df5f24f58375506a2ac0a9d6ec424a345e7c742e3a6576c81fc5a922736b73\" returns successfully"
Jul 2 09:03:16.770022 systemd[1]: cri-containerd-d9df5f24f58375506a2ac0a9d6ec424a345e7c742e3a6576c81fc5a922736b73.scope: Deactivated successfully.
Jul 2 09:03:16.961014 containerd[1433]: time="2024-07-02T09:03:16.960903808Z" level=info msg="shim disconnected" id=d9df5f24f58375506a2ac0a9d6ec424a345e7c742e3a6576c81fc5a922736b73 namespace=k8s.io
Jul 2 09:03:16.961014 containerd[1433]: time="2024-07-02T09:03:16.960962408Z" level=warning msg="cleaning up after shim disconnected" id=d9df5f24f58375506a2ac0a9d6ec424a345e7c742e3a6576c81fc5a922736b73 namespace=k8s.io
Jul 2 09:03:16.961014 containerd[1433]: time="2024-07-02T09:03:16.960971288Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:03:17.487522 kubelet[2521]: E0702 09:03:17.487475 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:17.490350 containerd[1433]: time="2024-07-02T09:03:17.490313716Z" level=info msg="CreateContainer within sandbox \"263b309e40e73ded1c07fa1de498d64687619b1a5419e9a8e9e9b4047aa6800e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 09:03:17.503390 containerd[1433]: time="2024-07-02T09:03:17.501259129Z" level=info msg="CreateContainer within sandbox \"263b309e40e73ded1c07fa1de498d64687619b1a5419e9a8e9e9b4047aa6800e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b84459ecdd5d4198f7d4cd2ca624f46e1e3c12655004ff02e5ea09192f53157d\""
Jul 2 09:03:17.503390 containerd[1433]: time="2024-07-02T09:03:17.502390451Z" level=info msg="StartContainer for \"b84459ecdd5d4198f7d4cd2ca624f46e1e3c12655004ff02e5ea09192f53157d\""
Jul 2 09:03:17.510190 kubelet[2521]: I0702 09:03:17.510149 2521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-56vk6" podStartSLOduration=13.254367559 podStartE2EDuration="15.51011382s" podCreationTimestamp="2024-07-02 09:03:02 +0000 UTC" firstStartedPulling="2024-07-02 09:03:03.738628905 +0000 UTC m=+15.404006885" lastFinishedPulling="2024-07-02 09:03:05.994375166 +0000 UTC m=+17.659753146" observedRunningTime="2024-07-02 09:03:06.480499528 +0000 UTC m=+18.145877508" watchObservedRunningTime="2024-07-02 09:03:17.51011382 +0000 UTC m=+29.175491800"
Jul 2 09:03:17.529573 systemd[1]: Started cri-containerd-b84459ecdd5d4198f7d4cd2ca624f46e1e3c12655004ff02e5ea09192f53157d.scope - libcontainer container b84459ecdd5d4198f7d4cd2ca624f46e1e3c12655004ff02e5ea09192f53157d.
Jul 2 09:03:17.549316 containerd[1433]: time="2024-07-02T09:03:17.548613749Z" level=info msg="StartContainer for \"b84459ecdd5d4198f7d4cd2ca624f46e1e3c12655004ff02e5ea09192f53157d\" returns successfully"
Jul 2 09:03:17.575961 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 09:03:17.576184 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 09:03:17.576244 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 2 09:03:17.586638 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 09:03:17.586820 systemd[1]: cri-containerd-b84459ecdd5d4198f7d4cd2ca624f46e1e3c12655004ff02e5ea09192f53157d.scope: Deactivated successfully.
Jul 2 09:03:17.607814 containerd[1433]: time="2024-07-02T09:03:17.607760863Z" level=info msg="shim disconnected" id=b84459ecdd5d4198f7d4cd2ca624f46e1e3c12655004ff02e5ea09192f53157d namespace=k8s.io
Jul 2 09:03:17.607814 containerd[1433]: time="2024-07-02T09:03:17.607814023Z" level=warning msg="cleaning up after shim disconnected" id=b84459ecdd5d4198f7d4cd2ca624f46e1e3c12655004ff02e5ea09192f53157d namespace=k8s.io
Jul 2 09:03:17.607973 containerd[1433]: time="2024-07-02T09:03:17.607823303Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:03:17.619857 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 09:03:17.634167 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9df5f24f58375506a2ac0a9d6ec424a345e7c742e3a6576c81fc5a922736b73-rootfs.mount: Deactivated successfully.
Jul 2 09:03:18.490112 kubelet[2521]: E0702 09:03:18.489942 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:18.492146 containerd[1433]: time="2024-07-02T09:03:18.492107454Z" level=info msg="CreateContainer within sandbox \"263b309e40e73ded1c07fa1de498d64687619b1a5419e9a8e9e9b4047aa6800e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 09:03:18.521793 containerd[1433]: time="2024-07-02T09:03:18.521652249Z" level=info msg="CreateContainer within sandbox \"263b309e40e73ded1c07fa1de498d64687619b1a5419e9a8e9e9b4047aa6800e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c400458c856f399569185ae7fcc1d9b8eb08dcc8aa02b5b7fa67c42f38e314bd\""
Jul 2 09:03:18.523626 containerd[1433]: time="2024-07-02T09:03:18.522989891Z" level=info msg="StartContainer for \"c400458c856f399569185ae7fcc1d9b8eb08dcc8aa02b5b7fa67c42f38e314bd\""
Jul 2 09:03:18.560604 systemd[1]: Started cri-containerd-c400458c856f399569185ae7fcc1d9b8eb08dcc8aa02b5b7fa67c42f38e314bd.scope - libcontainer container c400458c856f399569185ae7fcc1d9b8eb08dcc8aa02b5b7fa67c42f38e314bd.
Jul 2 09:03:18.586832 containerd[1433]: time="2024-07-02T09:03:18.586793646Z" level=info msg="StartContainer for \"c400458c856f399569185ae7fcc1d9b8eb08dcc8aa02b5b7fa67c42f38e314bd\" returns successfully"
Jul 2 09:03:18.600985 systemd[1]: cri-containerd-c400458c856f399569185ae7fcc1d9b8eb08dcc8aa02b5b7fa67c42f38e314bd.scope: Deactivated successfully.
Jul 2 09:03:18.623127 containerd[1433]: time="2024-07-02T09:03:18.623068568Z" level=info msg="shim disconnected" id=c400458c856f399569185ae7fcc1d9b8eb08dcc8aa02b5b7fa67c42f38e314bd namespace=k8s.io
Jul 2 09:03:18.623127 containerd[1433]: time="2024-07-02T09:03:18.623122568Z" level=warning msg="cleaning up after shim disconnected" id=c400458c856f399569185ae7fcc1d9b8eb08dcc8aa02b5b7fa67c42f38e314bd namespace=k8s.io
Jul 2 09:03:18.623127 containerd[1433]: time="2024-07-02T09:03:18.623131488Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:03:18.634174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c400458c856f399569185ae7fcc1d9b8eb08dcc8aa02b5b7fa67c42f38e314bd-rootfs.mount: Deactivated successfully.
Jul 2 09:03:19.491239 kubelet[2521]: E0702 09:03:19.491203 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:19.494262 containerd[1433]: time="2024-07-02T09:03:19.494226117Z" level=info msg="CreateContainer within sandbox \"263b309e40e73ded1c07fa1de498d64687619b1a5419e9a8e9e9b4047aa6800e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 09:03:19.506862 containerd[1433]: time="2024-07-02T09:03:19.506735491Z" level=info msg="CreateContainer within sandbox \"263b309e40e73ded1c07fa1de498d64687619b1a5419e9a8e9e9b4047aa6800e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c4f83d70ea7a589c3dd5cf7d48e07bdc5d28da434b8ffc76e9a15742f463a221\""
Jul 2 09:03:19.507485 containerd[1433]: time="2024-07-02T09:03:19.507448011Z" level=info msg="StartContainer for \"c4f83d70ea7a589c3dd5cf7d48e07bdc5d28da434b8ffc76e9a15742f463a221\""
Jul 2 09:03:19.534544 systemd[1]: Started cri-containerd-c4f83d70ea7a589c3dd5cf7d48e07bdc5d28da434b8ffc76e9a15742f463a221.scope - libcontainer container c4f83d70ea7a589c3dd5cf7d48e07bdc5d28da434b8ffc76e9a15742f463a221.
Jul 2 09:03:19.552996 systemd[1]: cri-containerd-c4f83d70ea7a589c3dd5cf7d48e07bdc5d28da434b8ffc76e9a15742f463a221.scope: Deactivated successfully.
Jul 2 09:03:19.554653 containerd[1433]: time="2024-07-02T09:03:19.554614464Z" level=info msg="StartContainer for \"c4f83d70ea7a589c3dd5cf7d48e07bdc5d28da434b8ffc76e9a15742f463a221\" returns successfully"
Jul 2 09:03:19.584951 containerd[1433]: time="2024-07-02T09:03:19.584888497Z" level=info msg="shim disconnected" id=c4f83d70ea7a589c3dd5cf7d48e07bdc5d28da434b8ffc76e9a15742f463a221 namespace=k8s.io
Jul 2 09:03:19.584951 containerd[1433]: time="2024-07-02T09:03:19.584946177Z" level=warning msg="cleaning up after shim disconnected" id=c4f83d70ea7a589c3dd5cf7d48e07bdc5d28da434b8ffc76e9a15742f463a221 namespace=k8s.io
Jul 2 09:03:19.584951 containerd[1433]: time="2024-07-02T09:03:19.584955217Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:03:19.634356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4f83d70ea7a589c3dd5cf7d48e07bdc5d28da434b8ffc76e9a15742f463a221-rootfs.mount: Deactivated successfully.
Jul 2 09:03:19.965005 systemd[1]: Started sshd@7-10.0.0.47:22-10.0.0.1:33102.service - OpenSSH per-connection server daemon (10.0.0.1:33102).
Jul 2 09:03:20.003099 sshd[3214]: Accepted publickey for core from 10.0.0.1 port 33102 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:03:20.004454 sshd[3214]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:03:20.007886 systemd-logind[1416]: New session 8 of user core.
Jul 2 09:03:20.021579 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 2 09:03:20.138191 sshd[3214]: pam_unix(sshd:session): session closed for user core
Jul 2 09:03:20.141697 systemd-logind[1416]: Session 8 logged out. Waiting for processes to exit.
Jul 2 09:03:20.141960 systemd[1]: sshd@7-10.0.0.47:22-10.0.0.1:33102.service: Deactivated successfully.
Jul 2 09:03:20.145667 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 09:03:20.146730 systemd-logind[1416]: Removed session 8.
Jul 2 09:03:20.495726 kubelet[2521]: E0702 09:03:20.495688 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:20.497844 containerd[1433]: time="2024-07-02T09:03:20.497789230Z" level=info msg="CreateContainer within sandbox \"263b309e40e73ded1c07fa1de498d64687619b1a5419e9a8e9e9b4047aa6800e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 09:03:20.514247 containerd[1433]: time="2024-07-02T09:03:20.514179646Z" level=info msg="CreateContainer within sandbox \"263b309e40e73ded1c07fa1de498d64687619b1a5419e9a8e9e9b4047aa6800e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661\""
Jul 2 09:03:20.514733 containerd[1433]: time="2024-07-02T09:03:20.514611607Z" level=info msg="StartContainer for \"9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661\""
Jul 2 09:03:20.539516 systemd[1]: Started cri-containerd-9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661.scope - libcontainer container 9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661.
Jul 2 09:03:20.575007 containerd[1433]: time="2024-07-02T09:03:20.574959109Z" level=info msg="StartContainer for \"9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661\" returns successfully"
Jul 2 09:03:20.713211 kubelet[2521]: I0702 09:03:20.712803 2521 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jul 2 09:03:20.744174 kubelet[2521]: I0702 09:03:20.744124 2521 topology_manager.go:215] "Topology Admit Handler" podUID="55677920-d17c-43b8-a72b-515bf405cd86" podNamespace="kube-system" podName="coredns-76f75df574-t47sm"
Jul 2 09:03:20.749298 kubelet[2521]: I0702 09:03:20.749173 2521 topology_manager.go:215] "Topology Admit Handler" podUID="b135fa54-5401-43cd-aa5d-dd8d43986b7f" podNamespace="kube-system" podName="coredns-76f75df574-9j7g8"
Jul 2 09:03:20.770769 systemd[1]: Created slice kubepods-burstable-pod55677920_d17c_43b8_a72b_515bf405cd86.slice - libcontainer container kubepods-burstable-pod55677920_d17c_43b8_a72b_515bf405cd86.slice.
Jul 2 09:03:20.779617 systemd[1]: Created slice kubepods-burstable-podb135fa54_5401_43cd_aa5d_dd8d43986b7f.slice - libcontainer container kubepods-burstable-podb135fa54_5401_43cd_aa5d_dd8d43986b7f.slice.
Jul 2 09:03:20.812946 kubelet[2521]: I0702 09:03:20.812903 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b135fa54-5401-43cd-aa5d-dd8d43986b7f-config-volume\") pod \"coredns-76f75df574-9j7g8\" (UID: \"b135fa54-5401-43cd-aa5d-dd8d43986b7f\") " pod="kube-system/coredns-76f75df574-9j7g8"
Jul 2 09:03:20.813123 kubelet[2521]: I0702 09:03:20.813001 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjl77\" (UniqueName: \"kubernetes.io/projected/55677920-d17c-43b8-a72b-515bf405cd86-kube-api-access-kjl77\") pod \"coredns-76f75df574-t47sm\" (UID: \"55677920-d17c-43b8-a72b-515bf405cd86\") " pod="kube-system/coredns-76f75df574-t47sm"
Jul 2 09:03:20.813123 kubelet[2521]: I0702 09:03:20.813043 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tprm2\" (UniqueName: \"kubernetes.io/projected/b135fa54-5401-43cd-aa5d-dd8d43986b7f-kube-api-access-tprm2\") pod \"coredns-76f75df574-9j7g8\" (UID: \"b135fa54-5401-43cd-aa5d-dd8d43986b7f\") " pod="kube-system/coredns-76f75df574-9j7g8"
Jul 2 09:03:20.813123 kubelet[2521]: I0702 09:03:20.813069 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55677920-d17c-43b8-a72b-515bf405cd86-config-volume\") pod \"coredns-76f75df574-t47sm\" (UID: \"55677920-d17c-43b8-a72b-515bf405cd86\") " pod="kube-system/coredns-76f75df574-t47sm"
Jul 2 09:03:21.074929 kubelet[2521]: E0702 09:03:21.074815 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:21.083868 kubelet[2521]: E0702 09:03:21.083576 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:21.088536 containerd[1433]: time="2024-07-02T09:03:21.087762714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9j7g8,Uid:b135fa54-5401-43cd-aa5d-dd8d43986b7f,Namespace:kube-system,Attempt:0,}"
Jul 2 09:03:21.088536 containerd[1433]: time="2024-07-02T09:03:21.087785394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-t47sm,Uid:55677920-d17c-43b8-a72b-515bf405cd86,Namespace:kube-system,Attempt:0,}"
Jul 2 09:03:21.504327 kubelet[2521]: E0702 09:03:21.504282 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:21.525646 kubelet[2521]: I0702 09:03:21.525607 2521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-fnhrg" podStartSLOduration=7.22262675 podStartE2EDuration="19.525571898s" podCreationTimestamp="2024-07-02 09:03:02 +0000 UTC" firstStartedPulling="2024-07-02 09:03:04.32195129 +0000 UTC m=+15.987329270" lastFinishedPulling="2024-07-02 09:03:16.624896438 +0000 UTC m=+28.290274418" observedRunningTime="2024-07-02 09:03:21.523059856 +0000 UTC m=+33.188437836" watchObservedRunningTime="2024-07-02 09:03:21.525571898 +0000 UTC m=+33.190949918"
Jul 2 09:03:22.505200 kubelet[2521]: E0702 09:03:22.505166 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:22.815565 systemd-networkd[1363]: cilium_host: Link UP
Jul 2 09:03:22.815714 systemd-networkd[1363]: cilium_net: Link UP
Jul 2 09:03:22.815717 systemd-networkd[1363]: cilium_net: Gained carrier
Jul 2 09:03:22.815867 systemd-networkd[1363]: cilium_host: Gained carrier
Jul 2 09:03:22.900863 systemd-networkd[1363]: cilium_vxlan: Link UP
Jul 2 09:03:22.900872 systemd-networkd[1363]: cilium_vxlan: Gained carrier
Jul 2 09:03:23.187399 kernel: NET: Registered PF_ALG protocol family
Jul 2 09:03:23.222587 systemd-networkd[1363]: cilium_net: Gained IPv6LL
Jul 2 09:03:23.507141 kubelet[2521]: E0702 09:03:23.506917 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:23.702555 systemd-networkd[1363]: cilium_host: Gained IPv6LL
Jul 2 09:03:23.733380 systemd-networkd[1363]: lxc_health: Link UP
Jul 2 09:03:23.741721 systemd-networkd[1363]: lxc_health: Gained carrier
Jul 2 09:03:24.186693 systemd-networkd[1363]: lxcc5c9785dd6fb: Link UP
Jul 2 09:03:24.196004 systemd-networkd[1363]: lxc243315aa0b97: Link UP
Jul 2 09:03:24.208455 kernel: eth0: renamed from tmp61b65
Jul 2 09:03:24.216451 kernel: eth0: renamed from tmpa40a9
Jul 2 09:03:24.230927 systemd-networkd[1363]: lxc243315aa0b97: Gained carrier
Jul 2 09:03:24.234683 systemd-networkd[1363]: lxcc5c9785dd6fb: Gained carrier
Jul 2 09:03:24.511414 kubelet[2521]: E0702 09:03:24.511288 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:24.919579 systemd-networkd[1363]: cilium_vxlan: Gained IPv6LL
Jul 2 09:03:25.110580 systemd-networkd[1363]: lxc_health: Gained IPv6LL
Jul 2 09:03:25.153006 systemd[1]: Started sshd@8-10.0.0.47:22-10.0.0.1:58550.service - OpenSSH per-connection server daemon (10.0.0.1:58550).
Jul 2 09:03:25.194018 sshd[3754]: Accepted publickey for core from 10.0.0.1 port 58550 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:03:25.194563 sshd[3754]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:03:25.198219 systemd-logind[1416]: New session 9 of user core.
Jul 2 09:03:25.205590 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 2 09:03:25.323748 sshd[3754]: pam_unix(sshd:session): session closed for user core
Jul 2 09:03:25.326984 systemd[1]: sshd@8-10.0.0.47:22-10.0.0.1:58550.service: Deactivated successfully.
Jul 2 09:03:25.329861 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 09:03:25.330571 systemd-logind[1416]: Session 9 logged out. Waiting for processes to exit.
Jul 2 09:03:25.331421 systemd-logind[1416]: Removed session 9.
Jul 2 09:03:25.430541 systemd-networkd[1363]: lxc243315aa0b97: Gained IPv6LL
Jul 2 09:03:26.006600 systemd-networkd[1363]: lxcc5c9785dd6fb: Gained IPv6LL
Jul 2 09:03:27.696498 containerd[1433]: time="2024-07-02T09:03:27.696408627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 09:03:27.696498 containerd[1433]: time="2024-07-02T09:03:27.696466347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:03:27.696498 containerd[1433]: time="2024-07-02T09:03:27.696362707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 09:03:27.696934 containerd[1433]: time="2024-07-02T09:03:27.696485907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 09:03:27.696934 containerd[1433]: time="2024-07-02T09:03:27.696505587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:03:27.696934 containerd[1433]: time="2024-07-02T09:03:27.696551907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:03:27.696934 containerd[1433]: time="2024-07-02T09:03:27.696649227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 09:03:27.696934 containerd[1433]: time="2024-07-02T09:03:27.696751107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:03:27.715534 systemd[1]: Started cri-containerd-a40a94563a2a26fa9625da849ba57e4d81eb9bc632e201e04c99fa06453b907c.scope - libcontainer container a40a94563a2a26fa9625da849ba57e4d81eb9bc632e201e04c99fa06453b907c.
Jul 2 09:03:27.723065 systemd[1]: Started cri-containerd-61b654b725898102722300f3e204cf77e0186f00a36b966d49b8681b124ec770.scope - libcontainer container 61b654b725898102722300f3e204cf77e0186f00a36b966d49b8681b124ec770.
Jul 2 09:03:27.732748 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 09:03:27.734114 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 09:03:27.757060 containerd[1433]: time="2024-07-02T09:03:27.757003867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-t47sm,Uid:55677920-d17c-43b8-a72b-515bf405cd86,Namespace:kube-system,Attempt:0,} returns sandbox id \"a40a94563a2a26fa9625da849ba57e4d81eb9bc632e201e04c99fa06453b907c\""
Jul 2 09:03:27.757589 containerd[1433]: time="2024-07-02T09:03:27.757534267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9j7g8,Uid:b135fa54-5401-43cd-aa5d-dd8d43986b7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"61b654b725898102722300f3e204cf77e0186f00a36b966d49b8681b124ec770\""
Jul 2 09:03:27.757800 kubelet[2521]: E0702 09:03:27.757777 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:27.759356 kubelet[2521]: E0702 09:03:27.759174 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:27.760805 containerd[1433]: time="2024-07-02T09:03:27.760769750Z" level=info msg="CreateContainer within sandbox \"a40a94563a2a26fa9625da849ba57e4d81eb9bc632e201e04c99fa06453b907c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 09:03:27.762325 containerd[1433]: time="2024-07-02T09:03:27.762269671Z" level=info msg="CreateContainer within sandbox \"61b654b725898102722300f3e204cf77e0186f00a36b966d49b8681b124ec770\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 09:03:27.782012 containerd[1433]: time="2024-07-02T09:03:27.781964883Z" level=info msg="CreateContainer within sandbox \"a40a94563a2a26fa9625da849ba57e4d81eb9bc632e201e04c99fa06453b907c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b8f28c938e796054a037f360dc4ca7646d9f3e133d161db9e5ca5c5ff1d220cd\""
Jul 2 09:03:27.785487 containerd[1433]: time="2024-07-02T09:03:27.785458446Z" level=info msg="CreateContainer within sandbox \"61b654b725898102722300f3e204cf77e0186f00a36b966d49b8681b124ec770\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"05142b7058d86303c0fa85c2d09030fd916f42ad5f14421134590a00daae8218\""
Jul 2 09:03:27.785860 containerd[1433]: time="2024-07-02T09:03:27.785836766Z" level=info msg="StartContainer for \"05142b7058d86303c0fa85c2d09030fd916f42ad5f14421134590a00daae8218\""
Jul 2 09:03:27.787430 containerd[1433]: time="2024-07-02T09:03:27.787398527Z" level=info msg="StartContainer for \"b8f28c938e796054a037f360dc4ca7646d9f3e133d161db9e5ca5c5ff1d220cd\""
Jul 2 09:03:27.808552 systemd[1]: Started cri-containerd-05142b7058d86303c0fa85c2d09030fd916f42ad5f14421134590a00daae8218.scope - libcontainer container 05142b7058d86303c0fa85c2d09030fd916f42ad5f14421134590a00daae8218.
Jul 2 09:03:27.817519 systemd[1]: Started cri-containerd-b8f28c938e796054a037f360dc4ca7646d9f3e133d161db9e5ca5c5ff1d220cd.scope - libcontainer container b8f28c938e796054a037f360dc4ca7646d9f3e133d161db9e5ca5c5ff1d220cd.
Jul 2 09:03:27.839467 containerd[1433]: time="2024-07-02T09:03:27.837881200Z" level=info msg="StartContainer for \"05142b7058d86303c0fa85c2d09030fd916f42ad5f14421134590a00daae8218\" returns successfully"
Jul 2 09:03:27.854266 containerd[1433]: time="2024-07-02T09:03:27.854226731Z" level=info msg="StartContainer for \"b8f28c938e796054a037f360dc4ca7646d9f3e133d161db9e5ca5c5ff1d220cd\" returns successfully"
Jul 2 09:03:28.517945 kubelet[2521]: E0702 09:03:28.517842 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:28.520404 kubelet[2521]: E0702 09:03:28.519918 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:28.528784 kubelet[2521]: I0702 09:03:28.528590 2521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-t47sm" podStartSLOduration=26.528556073 podStartE2EDuration="26.528556073s" podCreationTimestamp="2024-07-02 09:03:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:03:28.527823753 +0000 UTC m=+40.193201733" watchObservedRunningTime="2024-07-02 09:03:28.528556073 +0000 UTC m=+40.193934053"
Jul 2 09:03:28.548293 kubelet[2521]: I0702 09:03:28.548217 2521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-9j7g8" podStartSLOduration=26.548179365 podStartE2EDuration="26.548179365s" podCreationTimestamp="2024-07-02 09:03:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:03:28.547649125 +0000 UTC m=+40.213027105" watchObservedRunningTime="2024-07-02 09:03:28.548179365 +0000 UTC m=+40.213557345"
Jul 2 09:03:28.701500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount103691090.mount: Deactivated successfully.
Jul 2 09:03:29.521627 kubelet[2521]: E0702 09:03:29.521587 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:29.521627 kubelet[2521]: E0702 09:03:29.521628 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:30.340587 systemd[1]: Started sshd@9-10.0.0.47:22-10.0.0.1:44656.service - OpenSSH per-connection server daemon (10.0.0.1:44656).
Jul 2 09:03:30.381743 sshd[3947]: Accepted publickey for core from 10.0.0.1 port 44656 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:03:30.383065 sshd[3947]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:03:30.386764 systemd-logind[1416]: New session 10 of user core.
Jul 2 09:03:30.400514 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 2 09:03:30.510325 sshd[3947]: pam_unix(sshd:session): session closed for user core
Jul 2 09:03:30.519786 systemd[1]: sshd@9-10.0.0.47:22-10.0.0.1:44656.service: Deactivated successfully.
Jul 2 09:03:30.521221 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 09:03:30.523136 kubelet[2521]: E0702 09:03:30.523112 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:30.523401 kubelet[2521]: E0702 09:03:30.523167 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:30.524011 systemd-logind[1416]: Session 10 logged out. Waiting for processes to exit.
Jul 2 09:03:30.528298 systemd[1]: Started sshd@10-10.0.0.47:22-10.0.0.1:44670.service - OpenSSH per-connection server daemon (10.0.0.1:44670).
Jul 2 09:03:30.530082 systemd-logind[1416]: Removed session 10.
Jul 2 09:03:30.561708 sshd[3962]: Accepted publickey for core from 10.0.0.1 port 44670 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:03:30.562786 sshd[3962]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:03:30.566663 systemd-logind[1416]: New session 11 of user core.
Jul 2 09:03:30.573567 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 2 09:03:30.723586 sshd[3962]: pam_unix(sshd:session): session closed for user core
Jul 2 09:03:30.734789 systemd[1]: sshd@10-10.0.0.47:22-10.0.0.1:44670.service: Deactivated successfully.
Jul 2 09:03:30.738937 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 09:03:30.740778 systemd-logind[1416]: Session 11 logged out. Waiting for processes to exit.
Jul 2 09:03:30.754738 systemd[1]: Started sshd@11-10.0.0.47:22-10.0.0.1:44686.service - OpenSSH per-connection server daemon (10.0.0.1:44686).
Jul 2 09:03:30.756324 systemd-logind[1416]: Removed session 11.
Jul 2 09:03:30.792823 sshd[3974]: Accepted publickey for core from 10.0.0.1 port 44686 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:03:30.794077 sshd[3974]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:03:30.798121 systemd-logind[1416]: New session 12 of user core.
Jul 2 09:03:30.805571 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 2 09:03:30.921470 sshd[3974]: pam_unix(sshd:session): session closed for user core
Jul 2 09:03:30.924702 systemd[1]: sshd@11-10.0.0.47:22-10.0.0.1:44686.service: Deactivated successfully.
Jul 2 09:03:30.926313 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 09:03:30.926905 systemd-logind[1416]: Session 12 logged out. Waiting for processes to exit.
Jul 2 09:03:30.927658 systemd-logind[1416]: Removed session 12.
Jul 2 09:03:31.117864 kubelet[2521]: I0702 09:03:31.117681 2521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 2 09:03:31.118476 kubelet[2521]: E0702 09:03:31.118453 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:31.525257 kubelet[2521]: E0702 09:03:31.525168 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:03:35.936416 systemd[1]: Started sshd@12-10.0.0.47:22-10.0.0.1:44698.service - OpenSSH per-connection server daemon (10.0.0.1:44698).
Jul 2 09:03:35.985637 sshd[3992]: Accepted publickey for core from 10.0.0.1 port 44698 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:03:35.986985 sshd[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:03:35.990516 systemd-logind[1416]: New session 13 of user core.
Jul 2 09:03:36.001544 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 2 09:03:36.110312 sshd[3992]: pam_unix(sshd:session): session closed for user core
Jul 2 09:03:36.113315 systemd[1]: sshd@12-10.0.0.47:22-10.0.0.1:44698.service: Deactivated successfully.
Jul 2 09:03:36.115119 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 09:03:36.116715 systemd-logind[1416]: Session 13 logged out. Waiting for processes to exit.
Jul 2 09:03:36.117855 systemd-logind[1416]: Removed session 13.
Jul 2 09:03:41.124304 systemd[1]: Started sshd@13-10.0.0.47:22-10.0.0.1:55750.service - OpenSSH per-connection server daemon (10.0.0.1:55750).
Jul 2 09:03:41.163608 sshd[4007]: Accepted publickey for core from 10.0.0.1 port 55750 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:03:41.164840 sshd[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:03:41.168671 systemd-logind[1416]: New session 14 of user core.
Jul 2 09:03:41.178541 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 2 09:03:41.284545 sshd[4007]: pam_unix(sshd:session): session closed for user core
Jul 2 09:03:41.294794 systemd[1]: sshd@13-10.0.0.47:22-10.0.0.1:55750.service: Deactivated successfully.
Jul 2 09:03:41.297512 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 09:03:41.299416 systemd-logind[1416]: Session 14 logged out. Waiting for processes to exit.
Jul 2 09:03:41.303658 systemd[1]: Started sshd@14-10.0.0.47:22-10.0.0.1:55754.service - OpenSSH per-connection server daemon (10.0.0.1:55754).
Jul 2 09:03:41.305015 systemd-logind[1416]: Removed session 14.
Jul 2 09:03:41.337336 sshd[4021]: Accepted publickey for core from 10.0.0.1 port 55754 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:03:41.338486 sshd[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:03:41.342406 systemd-logind[1416]: New session 15 of user core.
Jul 2 09:03:41.351523 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 2 09:03:41.552325 sshd[4021]: pam_unix(sshd:session): session closed for user core
Jul 2 09:03:41.562396 systemd[1]: sshd@14-10.0.0.47:22-10.0.0.1:55754.service: Deactivated successfully.
Jul 2 09:03:41.564098 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 09:03:41.565501 systemd-logind[1416]: Session 15 logged out. Waiting for processes to exit.
Jul 2 09:03:41.567473 systemd[1]: Started sshd@15-10.0.0.47:22-10.0.0.1:55762.service - OpenSSH per-connection server daemon (10.0.0.1:55762).
Jul 2 09:03:41.568289 systemd-logind[1416]: Removed session 15.
Jul 2 09:03:41.610258 sshd[4033]: Accepted publickey for core from 10.0.0.1 port 55762 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:03:41.611429 sshd[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:03:41.615900 systemd-logind[1416]: New session 16 of user core.
Jul 2 09:03:41.624521 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 2 09:03:42.803352 sshd[4033]: pam_unix(sshd:session): session closed for user core
Jul 2 09:03:42.811669 systemd[1]: sshd@15-10.0.0.47:22-10.0.0.1:55762.service: Deactivated successfully.
Jul 2 09:03:42.813576 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 09:03:42.818134 systemd-logind[1416]: Session 16 logged out. Waiting for processes to exit.
Jul 2 09:03:42.825760 systemd[1]: Started sshd@16-10.0.0.47:22-10.0.0.1:55770.service - OpenSSH per-connection server daemon (10.0.0.1:55770).
Jul 2 09:03:42.826874 systemd-logind[1416]: Removed session 16.
Jul 2 09:03:42.858834 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 55770 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:03:42.860217 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:03:42.863989 systemd-logind[1416]: New session 17 of user core.
Jul 2 09:03:42.879504 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 2 09:03:43.098728 sshd[4054]: pam_unix(sshd:session): session closed for user core
Jul 2 09:03:43.106839 systemd[1]: sshd@16-10.0.0.47:22-10.0.0.1:55770.service: Deactivated successfully.
Jul 2 09:03:43.110112 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 09:03:43.111645 systemd-logind[1416]: Session 17 logged out. Waiting for processes to exit.
Jul 2 09:03:43.118682 systemd[1]: Started sshd@17-10.0.0.47:22-10.0.0.1:55774.service - OpenSSH per-connection server daemon (10.0.0.1:55774).
Jul 2 09:03:43.120294 systemd-logind[1416]: Removed session 17.
Jul 2 09:03:43.153013 sshd[4067]: Accepted publickey for core from 10.0.0.1 port 55774 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:03:43.154412 sshd[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:03:43.158454 systemd-logind[1416]: New session 18 of user core.
Jul 2 09:03:43.165543 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 2 09:03:43.269817 sshd[4067]: pam_unix(sshd:session): session closed for user core
Jul 2 09:03:43.273238 systemd[1]: sshd@17-10.0.0.47:22-10.0.0.1:55774.service: Deactivated successfully.
Jul 2 09:03:43.274875 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 09:03:43.276528 systemd-logind[1416]: Session 18 logged out. Waiting for processes to exit.
Jul 2 09:03:43.277466 systemd-logind[1416]: Removed session 18.
Jul 2 09:03:48.280012 systemd[1]: Started sshd@18-10.0.0.47:22-10.0.0.1:55790.service - OpenSSH per-connection server daemon (10.0.0.1:55790).
Jul 2 09:03:48.317739 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 55790 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:03:48.318852 sshd[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:03:48.322431 systemd-logind[1416]: New session 19 of user core.
Jul 2 09:03:48.332514 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 2 09:03:48.441426 sshd[4084]: pam_unix(sshd:session): session closed for user core
Jul 2 09:03:48.444719 systemd[1]: sshd@18-10.0.0.47:22-10.0.0.1:55790.service: Deactivated successfully.
Jul 2 09:03:48.446726 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 09:03:48.447441 systemd-logind[1416]: Session 19 logged out. Waiting for processes to exit.
Jul 2 09:03:48.448451 systemd-logind[1416]: Removed session 19.
Jul 2 09:03:53.452054 systemd[1]: Started sshd@19-10.0.0.47:22-10.0.0.1:59476.service - OpenSSH per-connection server daemon (10.0.0.1:59476).
Jul 2 09:03:53.490443 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 59476 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:03:53.491720 sshd[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:03:53.495254 systemd-logind[1416]: New session 20 of user core.
Jul 2 09:03:53.500513 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 09:03:53.605455 sshd[4100]: pam_unix(sshd:session): session closed for user core
Jul 2 09:03:53.608903 systemd[1]: sshd@19-10.0.0.47:22-10.0.0.1:59476.service: Deactivated successfully.
Jul 2 09:03:53.611810 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 09:03:53.613569 systemd-logind[1416]: Session 20 logged out. Waiting for processes to exit.
Jul 2 09:03:53.614485 systemd-logind[1416]: Removed session 20.
Jul 2 09:03:58.619896 systemd[1]: Started sshd@20-10.0.0.47:22-10.0.0.1:59482.service - OpenSSH per-connection server daemon (10.0.0.1:59482).
Jul 2 09:03:58.657269 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 59482 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:03:58.658494 sshd[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:03:58.662470 systemd-logind[1416]: New session 21 of user core.
Jul 2 09:03:58.674582 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 2 09:03:58.783360 sshd[4114]: pam_unix(sshd:session): session closed for user core
Jul 2 09:03:58.786802 systemd[1]: sshd@20-10.0.0.47:22-10.0.0.1:59482.service: Deactivated successfully.
Jul 2 09:03:58.789393 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 09:03:58.789982 systemd-logind[1416]: Session 21 logged out. Waiting for processes to exit.
Jul 2 09:03:58.790809 systemd-logind[1416]: Removed session 21.
Jul 2 09:04:03.424795 kubelet[2521]: E0702 09:04:03.424755 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:04:03.799879 systemd[1]: Started sshd@21-10.0.0.47:22-10.0.0.1:48270.service - OpenSSH per-connection server daemon (10.0.0.1:48270).
Jul 2 09:04:03.837860 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 48270 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:04:03.839071 sshd[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:04:03.842452 systemd-logind[1416]: New session 22 of user core.
Jul 2 09:04:03.853517 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 2 09:04:03.958351 sshd[4130]: pam_unix(sshd:session): session closed for user core
Jul 2 09:04:03.968647 systemd[1]: sshd@21-10.0.0.47:22-10.0.0.1:48270.service: Deactivated successfully.
Jul 2 09:04:03.970238 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 09:04:03.971817 systemd-logind[1416]: Session 22 logged out. Waiting for processes to exit.
Jul 2 09:04:03.985643 systemd[1]: Started sshd@22-10.0.0.47:22-10.0.0.1:48274.service - OpenSSH per-connection server daemon (10.0.0.1:48274).
Jul 2 09:04:03.986719 systemd-logind[1416]: Removed session 22.
Jul 2 09:04:04.018804 sshd[4144]: Accepted publickey for core from 10.0.0.1 port 48274 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:04:04.019964 sshd[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:04:04.023853 systemd-logind[1416]: New session 23 of user core.
Jul 2 09:04:04.039505 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 2 09:04:05.750127 containerd[1433]: time="2024-07-02T09:04:05.750074525Z" level=info msg="StopContainer for \"3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2\" with timeout 30 (s)"
Jul 2 09:04:05.751075 containerd[1433]: time="2024-07-02T09:04:05.750995772Z" level=info msg="Stop container \"3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2\" with signal terminated"
Jul 2 09:04:05.762924 systemd[1]: cri-containerd-3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2.scope: Deactivated successfully.
Jul 2 09:04:05.776133 containerd[1433]: time="2024-07-02T09:04:05.776046128Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 09:04:05.780605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2-rootfs.mount: Deactivated successfully.
Jul 2 09:04:05.784015 containerd[1433]: time="2024-07-02T09:04:05.783984391Z" level=info msg="StopContainer for \"9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661\" with timeout 2 (s)"
Jul 2 09:04:05.784271 containerd[1433]: time="2024-07-02T09:04:05.784245633Z" level=info msg="Stop container \"9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661\" with signal terminated"
Jul 2 09:04:05.789866 systemd-networkd[1363]: lxc_health: Link DOWN
Jul 2 09:04:05.789871 systemd-networkd[1363]: lxc_health: Lost carrier
Jul 2 09:04:05.792525 containerd[1433]: time="2024-07-02T09:04:05.790065278Z" level=info msg="shim disconnected" id=3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2 namespace=k8s.io
Jul 2 09:04:05.792525 containerd[1433]: time="2024-07-02T09:04:05.790112079Z" level=warning msg="cleaning up after shim disconnected" id=3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2 namespace=k8s.io
Jul 2 09:04:05.792525 containerd[1433]: time="2024-07-02T09:04:05.790120079Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:04:05.800268 containerd[1433]: time="2024-07-02T09:04:05.800203758Z" level=warning msg="cleanup warnings time=\"2024-07-02T09:04:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 2 09:04:05.803390 containerd[1433]: time="2024-07-02T09:04:05.803291782Z" level=info msg="StopContainer for \"3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2\" returns successfully"
Jul 2 09:04:05.806583 containerd[1433]: time="2024-07-02T09:04:05.806532887Z" level=info msg="StopPodSandbox for \"2d57459c9dbf64077f92fad9a606cf65d0dc3eea36fd05180c644839591b5a7d\""
Jul 2 09:04:05.806674 containerd[1433]: time="2024-07-02T09:04:05.806591448Z" level=info msg="Container to stop \"3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 09:04:05.808467 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d57459c9dbf64077f92fad9a606cf65d0dc3eea36fd05180c644839591b5a7d-shm.mount: Deactivated successfully.
Jul 2 09:04:05.813599 systemd[1]: cri-containerd-2d57459c9dbf64077f92fad9a606cf65d0dc3eea36fd05180c644839591b5a7d.scope: Deactivated successfully.
Jul 2 09:04:05.818352 systemd[1]: cri-containerd-9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661.scope: Deactivated successfully.
Jul 2 09:04:05.818770 systemd[1]: cri-containerd-9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661.scope: Consumed 6.414s CPU time.
Jul 2 09:04:05.835897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d57459c9dbf64077f92fad9a606cf65d0dc3eea36fd05180c644839591b5a7d-rootfs.mount: Deactivated successfully.
Jul 2 09:04:05.839849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661-rootfs.mount: Deactivated successfully.
Jul 2 09:04:05.846211 containerd[1433]: time="2024-07-02T09:04:05.846047837Z" level=info msg="shim disconnected" id=2d57459c9dbf64077f92fad9a606cf65d0dc3eea36fd05180c644839591b5a7d namespace=k8s.io
Jul 2 09:04:05.846211 containerd[1433]: time="2024-07-02T09:04:05.846102597Z" level=warning msg="cleaning up after shim disconnected" id=2d57459c9dbf64077f92fad9a606cf65d0dc3eea36fd05180c644839591b5a7d namespace=k8s.io
Jul 2 09:04:05.846211 containerd[1433]: time="2024-07-02T09:04:05.846110797Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:04:05.846211 containerd[1433]: time="2024-07-02T09:04:05.846160718Z" level=info msg="shim disconnected" id=9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661 namespace=k8s.io
Jul 2 09:04:05.846211 containerd[1433]: time="2024-07-02T09:04:05.846200798Z" level=warning msg="cleaning up after shim disconnected" id=9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661 namespace=k8s.io
Jul 2 09:04:05.846211 containerd[1433]: time="2024-07-02T09:04:05.846209478Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:04:05.858219 containerd[1433]: time="2024-07-02T09:04:05.858076011Z" level=info msg="TearDown network for sandbox \"2d57459c9dbf64077f92fad9a606cf65d0dc3eea36fd05180c644839591b5a7d\" successfully"
Jul 2 09:04:05.858219 containerd[1433]: time="2024-07-02T09:04:05.858111932Z" level=info msg="StopPodSandbox for \"2d57459c9dbf64077f92fad9a606cf65d0dc3eea36fd05180c644839591b5a7d\" returns successfully"
Jul 2 09:04:05.858593 containerd[1433]: time="2024-07-02T09:04:05.858562975Z" level=warning msg="cleanup warnings time=\"2024-07-02T09:04:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 2 09:04:05.861346 containerd[1433]: time="2024-07-02T09:04:05.861267316Z" level=info msg="StopContainer for \"9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661\" returns successfully"
Jul 2 09:04:05.862107 containerd[1433]: time="2024-07-02T09:04:05.862085123Z" level=info msg="StopPodSandbox for \"263b309e40e73ded1c07fa1de498d64687619b1a5419e9a8e9e9b4047aa6800e\""
Jul 2 09:04:05.862383 containerd[1433]: time="2024-07-02T09:04:05.862199484Z" level=info msg="Container to stop \"c400458c856f399569185ae7fcc1d9b8eb08dcc8aa02b5b7fa67c42f38e314bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 09:04:05.862383 containerd[1433]: time="2024-07-02T09:04:05.862253084Z" level=info msg="Container to stop \"9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 09:04:05.862383 containerd[1433]: time="2024-07-02T09:04:05.862264524Z" level=info msg="Container to stop \"d9df5f24f58375506a2ac0a9d6ec424a345e7c742e3a6576c81fc5a922736b73\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 09:04:05.862383 containerd[1433]: time="2024-07-02T09:04:05.862274004Z" level=info msg="Container to stop \"b84459ecdd5d4198f7d4cd2ca624f46e1e3c12655004ff02e5ea09192f53157d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 09:04:05.862383 containerd[1433]: time="2024-07-02T09:04:05.862284804Z" level=info msg="Container to stop \"c4f83d70ea7a589c3dd5cf7d48e07bdc5d28da434b8ffc76e9a15742f463a221\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 09:04:05.868956 systemd[1]: cri-containerd-263b309e40e73ded1c07fa1de498d64687619b1a5419e9a8e9e9b4047aa6800e.scope: Deactivated successfully.
Jul 2 09:04:05.889879 containerd[1433]: time="2024-07-02T09:04:05.889805300Z" level=info msg="shim disconnected" id=263b309e40e73ded1c07fa1de498d64687619b1a5419e9a8e9e9b4047aa6800e namespace=k8s.io
Jul 2 09:04:05.889879 containerd[1433]: time="2024-07-02T09:04:05.889873140Z" level=warning msg="cleaning up after shim disconnected" id=263b309e40e73ded1c07fa1de498d64687619b1a5419e9a8e9e9b4047aa6800e namespace=k8s.io
Jul 2 09:04:05.889879 containerd[1433]: time="2024-07-02T09:04:05.889882301Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:04:05.902947 containerd[1433]: time="2024-07-02T09:04:05.902901683Z" level=info msg="TearDown network for sandbox \"263b309e40e73ded1c07fa1de498d64687619b1a5419e9a8e9e9b4047aa6800e\" successfully"
Jul 2 09:04:05.902947 containerd[1433]: time="2024-07-02T09:04:05.902938243Z" level=info msg="StopPodSandbox for \"263b309e40e73ded1c07fa1de498d64687619b1a5419e9a8e9e9b4047aa6800e\" returns successfully"
Jul 2 09:04:06.051174 kubelet[2521]: I0702 09:04:06.051060 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-host-proc-sys-kernel\") pod \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") "
Jul 2 09:04:06.051174 kubelet[2521]: I0702 09:04:06.051101 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-etc-cni-netd\") pod \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") "
Jul 2 09:04:06.051174 kubelet[2521]: I0702 09:04:06.051120 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-cilium-run\") pod \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") "
Jul 2 09:04:06.051174 kubelet[2521]: I0702 09:04:06.051155 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-lib-modules\") pod \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") "
Jul 2 09:04:06.051174 kubelet[2521]: I0702 09:04:06.051181 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-hubble-tls\") pod \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") "
Jul 2 09:04:06.051646 kubelet[2521]: I0702 09:04:06.051174 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c" (UID: "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 09:04:06.051646 kubelet[2521]: I0702 09:04:06.051216 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c" (UID: "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 09:04:06.051646 kubelet[2521]: I0702 09:04:06.051225 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c" (UID: "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 09:04:06.051646 kubelet[2521]: I0702 09:04:06.051247 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-cilium-cgroup\") pod \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") "
Jul 2 09:04:06.051646 kubelet[2521]: I0702 09:04:06.051244 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c" (UID: "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 09:04:06.051764 kubelet[2521]: I0702 09:04:06.051269 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-clustermesh-secrets\") pod \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") "
Jul 2 09:04:06.051764 kubelet[2521]: I0702 09:04:06.051286 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c" (UID: "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 09:04:06.051764 kubelet[2521]: I0702 09:04:06.051325 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f-cilium-config-path\") pod \"15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f\" (UID: \"15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f\") "
Jul 2 09:04:06.051764 kubelet[2521]: I0702 09:04:06.051592 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-cilium-config-path\") pod \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") "
Jul 2 09:04:06.051764 kubelet[2521]: I0702 09:04:06.051624 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-cni-path\") pod \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") "
Jul 2 09:04:06.051764 kubelet[2521]: I0702 09:04:06.051655 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-host-proc-sys-net\") pod \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") "
Jul 2 09:04:06.051892 kubelet[2521]: I0702 09:04:06.051673 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-hostproc\") pod \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") "
Jul 2 09:04:06.051892 kubelet[2521]: I0702 09:04:06.051696 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k9cz\" (UniqueName:
\"kubernetes.io/projected/15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f-kube-api-access-4k9cz\") pod \"15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f\" (UID: \"15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f\") " Jul 2 09:04:06.051892 kubelet[2521]: I0702 09:04:06.051716 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-xtables-lock\") pod \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") " Jul 2 09:04:06.051892 kubelet[2521]: I0702 09:04:06.051753 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-bpf-maps\") pod \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") " Jul 2 09:04:06.051892 kubelet[2521]: I0702 09:04:06.051776 2521 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbhb6\" (UniqueName: \"kubernetes.io/projected/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-kube-api-access-rbhb6\") pod \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\" (UID: \"b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c\") " Jul 2 09:04:06.051892 kubelet[2521]: I0702 09:04:06.051812 2521 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 2 09:04:06.051892 kubelet[2521]: I0702 09:04:06.051823 2521 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 09:04:06.052033 kubelet[2521]: I0702 09:04:06.051832 2521 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-etc-cni-netd\") on node 
\"localhost\" DevicePath \"\"" Jul 2 09:04:06.052033 kubelet[2521]: I0702 09:04:06.051841 2521 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 2 09:04:06.052033 kubelet[2521]: I0702 09:04:06.051851 2521 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 2 09:04:06.053812 kubelet[2521]: I0702 09:04:06.053776 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f" (UID: "15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 09:04:06.053812 kubelet[2521]: I0702 09:04:06.053797 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-cni-path" (OuterVolumeSpecName: "cni-path") pod "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c" (UID: "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:04:06.053893 kubelet[2521]: I0702 09:04:06.053832 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c" (UID: "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:04:06.053893 kubelet[2521]: I0702 09:04:06.053853 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c" (UID: "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:04:06.053893 kubelet[2521]: I0702 09:04:06.053876 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c" (UID: "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:04:06.053962 kubelet[2521]: I0702 09:04:06.053894 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-hostproc" (OuterVolumeSpecName: "hostproc") pod "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c" (UID: "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:04:06.055808 kubelet[2521]: I0702 09:04:06.055690 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c" (UID: "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 09:04:06.055888 kubelet[2521]: I0702 09:04:06.055861 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c" (UID: "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 09:04:06.056309 kubelet[2521]: I0702 09:04:06.056258 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f-kube-api-access-4k9cz" (OuterVolumeSpecName: "kube-api-access-4k9cz") pod "15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f" (UID: "15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f"). InnerVolumeSpecName "kube-api-access-4k9cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 09:04:06.056510 kubelet[2521]: I0702 09:04:06.056454 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c" (UID: "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 09:04:06.057273 kubelet[2521]: I0702 09:04:06.057235 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-kube-api-access-rbhb6" (OuterVolumeSpecName: "kube-api-access-rbhb6") pod "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c" (UID: "b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c"). InnerVolumeSpecName "kube-api-access-rbhb6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 09:04:06.152501 kubelet[2521]: I0702 09:04:06.152470 2521 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 09:04:06.152797 kubelet[2521]: I0702 09:04:06.152658 2521 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 09:04:06.152797 kubelet[2521]: I0702 09:04:06.152678 2521 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 09:04:06.152797 kubelet[2521]: I0702 09:04:06.152690 2521 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 2 09:04:06.152797 kubelet[2521]: I0702 09:04:06.152700 2521 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 2 09:04:06.152797 kubelet[2521]: I0702 09:04:06.152710 2521 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 2 09:04:06.152797 kubelet[2521]: I0702 09:04:06.152719 2521 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4k9cz\" (UniqueName: \"kubernetes.io/projected/15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f-kube-api-access-4k9cz\") on node \"localhost\" DevicePath \"\"" Jul 2 09:04:06.152797 
kubelet[2521]: I0702 09:04:06.152729 2521 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 09:04:06.152797 kubelet[2521]: I0702 09:04:06.152758 2521 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 2 09:04:06.152979 kubelet[2521]: I0702 09:04:06.152768 2521 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rbhb6\" (UniqueName: \"kubernetes.io/projected/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-kube-api-access-rbhb6\") on node \"localhost\" DevicePath \"\"" Jul 2 09:04:06.152979 kubelet[2521]: I0702 09:04:06.152779 2521 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 2 09:04:06.431041 systemd[1]: Removed slice kubepods-besteffort-pod15bb0c28_7ea9_46ab_b36a_c93fcba3bc5f.slice - libcontainer container kubepods-besteffort-pod15bb0c28_7ea9_46ab_b36a_c93fcba3bc5f.slice. Jul 2 09:04:06.434289 systemd[1]: Removed slice kubepods-burstable-podb1b8bacf_8329_4f5d_9db2_1fcdb0439c2c.slice - libcontainer container kubepods-burstable-podb1b8bacf_8329_4f5d_9db2_1fcdb0439c2c.slice. Jul 2 09:04:06.434887 systemd[1]: kubepods-burstable-podb1b8bacf_8329_4f5d_9db2_1fcdb0439c2c.slice: Consumed 6.593s CPU time. 
Jul 2 09:04:06.614700 kubelet[2521]: I0702 09:04:06.614670 2521 scope.go:117] "RemoveContainer" containerID="9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661" Jul 2 09:04:06.615970 containerd[1433]: time="2024-07-02T09:04:06.615936503Z" level=info msg="RemoveContainer for \"9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661\"" Jul 2 09:04:06.621812 containerd[1433]: time="2024-07-02T09:04:06.621777667Z" level=info msg="RemoveContainer for \"9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661\" returns successfully" Jul 2 09:04:06.621980 kubelet[2521]: I0702 09:04:06.621957 2521 scope.go:117] "RemoveContainer" containerID="c4f83d70ea7a589c3dd5cf7d48e07bdc5d28da434b8ffc76e9a15742f463a221" Jul 2 09:04:06.623043 containerd[1433]: time="2024-07-02T09:04:06.622992877Z" level=info msg="RemoveContainer for \"c4f83d70ea7a589c3dd5cf7d48e07bdc5d28da434b8ffc76e9a15742f463a221\"" Jul 2 09:04:06.625555 containerd[1433]: time="2024-07-02T09:04:06.625463415Z" level=info msg="RemoveContainer for \"c4f83d70ea7a589c3dd5cf7d48e07bdc5d28da434b8ffc76e9a15742f463a221\" returns successfully" Jul 2 09:04:06.625672 kubelet[2521]: I0702 09:04:06.625647 2521 scope.go:117] "RemoveContainer" containerID="c400458c856f399569185ae7fcc1d9b8eb08dcc8aa02b5b7fa67c42f38e314bd" Jul 2 09:04:06.627010 containerd[1433]: time="2024-07-02T09:04:06.626753225Z" level=info msg="RemoveContainer for \"c400458c856f399569185ae7fcc1d9b8eb08dcc8aa02b5b7fa67c42f38e314bd\"" Jul 2 09:04:06.630061 containerd[1433]: time="2024-07-02T09:04:06.629736528Z" level=info msg="RemoveContainer for \"c400458c856f399569185ae7fcc1d9b8eb08dcc8aa02b5b7fa67c42f38e314bd\" returns successfully" Jul 2 09:04:06.630305 kubelet[2521]: I0702 09:04:06.630283 2521 scope.go:117] "RemoveContainer" containerID="b84459ecdd5d4198f7d4cd2ca624f46e1e3c12655004ff02e5ea09192f53157d" Jul 2 09:04:06.631171 containerd[1433]: time="2024-07-02T09:04:06.631151259Z" level=info msg="RemoveContainer for 
\"b84459ecdd5d4198f7d4cd2ca624f46e1e3c12655004ff02e5ea09192f53157d\"" Jul 2 09:04:06.633550 containerd[1433]: time="2024-07-02T09:04:06.633510437Z" level=info msg="RemoveContainer for \"b84459ecdd5d4198f7d4cd2ca624f46e1e3c12655004ff02e5ea09192f53157d\" returns successfully" Jul 2 09:04:06.633826 kubelet[2521]: I0702 09:04:06.633800 2521 scope.go:117] "RemoveContainer" containerID="d9df5f24f58375506a2ac0a9d6ec424a345e7c742e3a6576c81fc5a922736b73" Jul 2 09:04:06.635486 containerd[1433]: time="2024-07-02T09:04:06.635451852Z" level=info msg="RemoveContainer for \"d9df5f24f58375506a2ac0a9d6ec424a345e7c742e3a6576c81fc5a922736b73\"" Jul 2 09:04:06.638240 containerd[1433]: time="2024-07-02T09:04:06.638211393Z" level=info msg="RemoveContainer for \"d9df5f24f58375506a2ac0a9d6ec424a345e7c742e3a6576c81fc5a922736b73\" returns successfully" Jul 2 09:04:06.638418 kubelet[2521]: I0702 09:04:06.638398 2521 scope.go:117] "RemoveContainer" containerID="9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661" Jul 2 09:04:06.644016 containerd[1433]: time="2024-07-02T09:04:06.638572995Z" level=error msg="ContainerStatus for \"9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661\": not found" Jul 2 09:04:06.644146 kubelet[2521]: E0702 09:04:06.644121 2521 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661\": not found" containerID="9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661" Jul 2 09:04:06.644233 kubelet[2521]: I0702 09:04:06.644215 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661"} err="failed to 
get container status \"9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e1888684fbd455164dbef10c0ecea79713d5db169fd3ac330feb5b40deb4661\": not found" Jul 2 09:04:06.644268 kubelet[2521]: I0702 09:04:06.644234 2521 scope.go:117] "RemoveContainer" containerID="c4f83d70ea7a589c3dd5cf7d48e07bdc5d28da434b8ffc76e9a15742f463a221" Jul 2 09:04:06.644447 containerd[1433]: time="2024-07-02T09:04:06.644406880Z" level=error msg="ContainerStatus for \"c4f83d70ea7a589c3dd5cf7d48e07bdc5d28da434b8ffc76e9a15742f463a221\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4f83d70ea7a589c3dd5cf7d48e07bdc5d28da434b8ffc76e9a15742f463a221\": not found" Jul 2 09:04:06.644546 kubelet[2521]: E0702 09:04:06.644528 2521 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c4f83d70ea7a589c3dd5cf7d48e07bdc5d28da434b8ffc76e9a15742f463a221\": not found" containerID="c4f83d70ea7a589c3dd5cf7d48e07bdc5d28da434b8ffc76e9a15742f463a221" Jul 2 09:04:06.644578 kubelet[2521]: I0702 09:04:06.644560 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c4f83d70ea7a589c3dd5cf7d48e07bdc5d28da434b8ffc76e9a15742f463a221"} err="failed to get container status \"c4f83d70ea7a589c3dd5cf7d48e07bdc5d28da434b8ffc76e9a15742f463a221\": rpc error: code = NotFound desc = an error occurred when try to find container \"c4f83d70ea7a589c3dd5cf7d48e07bdc5d28da434b8ffc76e9a15742f463a221\": not found" Jul 2 09:04:06.644578 kubelet[2521]: I0702 09:04:06.644574 2521 scope.go:117] "RemoveContainer" containerID="c400458c856f399569185ae7fcc1d9b8eb08dcc8aa02b5b7fa67c42f38e314bd" Jul 2 09:04:06.644784 containerd[1433]: time="2024-07-02T09:04:06.644749322Z" level=error msg="ContainerStatus for 
\"c400458c856f399569185ae7fcc1d9b8eb08dcc8aa02b5b7fa67c42f38e314bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c400458c856f399569185ae7fcc1d9b8eb08dcc8aa02b5b7fa67c42f38e314bd\": not found" Jul 2 09:04:06.644917 kubelet[2521]: E0702 09:04:06.644881 2521 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c400458c856f399569185ae7fcc1d9b8eb08dcc8aa02b5b7fa67c42f38e314bd\": not found" containerID="c400458c856f399569185ae7fcc1d9b8eb08dcc8aa02b5b7fa67c42f38e314bd" Jul 2 09:04:06.644917 kubelet[2521]: I0702 09:04:06.644915 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c400458c856f399569185ae7fcc1d9b8eb08dcc8aa02b5b7fa67c42f38e314bd"} err="failed to get container status \"c400458c856f399569185ae7fcc1d9b8eb08dcc8aa02b5b7fa67c42f38e314bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"c400458c856f399569185ae7fcc1d9b8eb08dcc8aa02b5b7fa67c42f38e314bd\": not found" Jul 2 09:04:06.644977 kubelet[2521]: I0702 09:04:06.644926 2521 scope.go:117] "RemoveContainer" containerID="b84459ecdd5d4198f7d4cd2ca624f46e1e3c12655004ff02e5ea09192f53157d" Jul 2 09:04:06.645196 containerd[1433]: time="2024-07-02T09:04:06.645137045Z" level=error msg="ContainerStatus for \"b84459ecdd5d4198f7d4cd2ca624f46e1e3c12655004ff02e5ea09192f53157d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b84459ecdd5d4198f7d4cd2ca624f46e1e3c12655004ff02e5ea09192f53157d\": not found" Jul 2 09:04:06.645275 kubelet[2521]: E0702 09:04:06.645255 2521 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b84459ecdd5d4198f7d4cd2ca624f46e1e3c12655004ff02e5ea09192f53157d\": not found" 
containerID="b84459ecdd5d4198f7d4cd2ca624f46e1e3c12655004ff02e5ea09192f53157d" Jul 2 09:04:06.645312 kubelet[2521]: I0702 09:04:06.645304 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b84459ecdd5d4198f7d4cd2ca624f46e1e3c12655004ff02e5ea09192f53157d"} err="failed to get container status \"b84459ecdd5d4198f7d4cd2ca624f46e1e3c12655004ff02e5ea09192f53157d\": rpc error: code = NotFound desc = an error occurred when try to find container \"b84459ecdd5d4198f7d4cd2ca624f46e1e3c12655004ff02e5ea09192f53157d\": not found" Jul 2 09:04:06.645353 kubelet[2521]: I0702 09:04:06.645315 2521 scope.go:117] "RemoveContainer" containerID="d9df5f24f58375506a2ac0a9d6ec424a345e7c742e3a6576c81fc5a922736b73" Jul 2 09:04:06.645509 containerd[1433]: time="2024-07-02T09:04:06.645479048Z" level=error msg="ContainerStatus for \"d9df5f24f58375506a2ac0a9d6ec424a345e7c742e3a6576c81fc5a922736b73\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d9df5f24f58375506a2ac0a9d6ec424a345e7c742e3a6576c81fc5a922736b73\": not found" Jul 2 09:04:06.645621 kubelet[2521]: E0702 09:04:06.645605 2521 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d9df5f24f58375506a2ac0a9d6ec424a345e7c742e3a6576c81fc5a922736b73\": not found" containerID="d9df5f24f58375506a2ac0a9d6ec424a345e7c742e3a6576c81fc5a922736b73" Jul 2 09:04:06.645677 kubelet[2521]: I0702 09:04:06.645647 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d9df5f24f58375506a2ac0a9d6ec424a345e7c742e3a6576c81fc5a922736b73"} err="failed to get container status \"d9df5f24f58375506a2ac0a9d6ec424a345e7c742e3a6576c81fc5a922736b73\": rpc error: code = NotFound desc = an error occurred when try to find container \"d9df5f24f58375506a2ac0a9d6ec424a345e7c742e3a6576c81fc5a922736b73\": not found" Jul 2 
09:04:06.645677 kubelet[2521]: I0702 09:04:06.645658 2521 scope.go:117] "RemoveContainer" containerID="3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2" Jul 2 09:04:06.646789 containerd[1433]: time="2024-07-02T09:04:06.646545736Z" level=info msg="RemoveContainer for \"3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2\"" Jul 2 09:04:06.648676 containerd[1433]: time="2024-07-02T09:04:06.648573992Z" level=info msg="RemoveContainer for \"3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2\" returns successfully" Jul 2 09:04:06.648762 kubelet[2521]: I0702 09:04:06.648713 2521 scope.go:117] "RemoveContainer" containerID="3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2" Jul 2 09:04:06.648898 containerd[1433]: time="2024-07-02T09:04:06.648868594Z" level=error msg="ContainerStatus for \"3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2\": not found" Jul 2 09:04:06.649027 kubelet[2521]: E0702 09:04:06.649004 2521 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2\": not found" containerID="3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2" Jul 2 09:04:06.649090 kubelet[2521]: I0702 09:04:06.649060 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2"} err="failed to get container status \"3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"3959a8119c9c83c2b728f743fd65b4860010e14bf49df0ea880012349a0742d2\": not found" Jul 2 09:04:06.762104 
systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-263b309e40e73ded1c07fa1de498d64687619b1a5419e9a8e9e9b4047aa6800e-rootfs.mount: Deactivated successfully. Jul 2 09:04:06.762205 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-263b309e40e73ded1c07fa1de498d64687619b1a5419e9a8e9e9b4047aa6800e-shm.mount: Deactivated successfully. Jul 2 09:04:06.762255 systemd[1]: var-lib-kubelet-pods-b1b8bacf\x2d8329\x2d4f5d\x2d9db2\x2d1fcdb0439c2c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 09:04:06.762308 systemd[1]: var-lib-kubelet-pods-b1b8bacf\x2d8329\x2d4f5d\x2d9db2\x2d1fcdb0439c2c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 09:04:06.762383 systemd[1]: var-lib-kubelet-pods-15bb0c28\x2d7ea9\x2d46ab\x2db36a\x2dc93fcba3bc5f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4k9cz.mount: Deactivated successfully. Jul 2 09:04:06.762444 systemd[1]: var-lib-kubelet-pods-b1b8bacf\x2d8329\x2d4f5d\x2d9db2\x2d1fcdb0439c2c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drbhb6.mount: Deactivated successfully. Jul 2 09:04:07.718598 sshd[4144]: pam_unix(sshd:session): session closed for user core Jul 2 09:04:07.734901 systemd[1]: sshd@22-10.0.0.47:22-10.0.0.1:48274.service: Deactivated successfully. Jul 2 09:04:07.737417 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 09:04:07.737602 systemd[1]: session-23.scope: Consumed 1.056s CPU time. Jul 2 09:04:07.738699 systemd-logind[1416]: Session 23 logged out. Waiting for processes to exit. Jul 2 09:04:07.751631 systemd[1]: Started sshd@23-10.0.0.47:22-10.0.0.1:48284.service - OpenSSH per-connection server daemon (10.0.0.1:48284). Jul 2 09:04:07.752567 systemd-logind[1416]: Removed session 23. 
Jul 2 09:04:07.785913 sshd[4308]: Accepted publickey for core from 10.0.0.1 port 48284 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:04:07.787075 sshd[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:04:07.790591 systemd-logind[1416]: New session 24 of user core. Jul 2 09:04:07.802511 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 09:04:08.426766 kubelet[2521]: I0702 09:04:08.426726 2521 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f" path="/var/lib/kubelet/pods/15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f/volumes" Jul 2 09:04:08.427129 kubelet[2521]: I0702 09:04:08.427110 2521 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c" path="/var/lib/kubelet/pods/b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c/volumes" Jul 2 09:04:08.477504 kubelet[2521]: E0702 09:04:08.477468 2521 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 09:04:08.878698 sshd[4308]: pam_unix(sshd:session): session closed for user core Jul 2 09:04:08.887070 systemd[1]: sshd@23-10.0.0.47:22-10.0.0.1:48284.service: Deactivated successfully. Jul 2 09:04:08.891891 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 09:04:08.894122 systemd-logind[1416]: Session 24 logged out. Waiting for processes to exit. 
Jul 2 09:04:08.898147 kubelet[2521]: I0702 09:04:08.898091 2521 topology_manager.go:215] "Topology Admit Handler" podUID="22689775-ccb9-4c7e-9192-26f66b5b44ae" podNamespace="kube-system" podName="cilium-qj7nj" Jul 2 09:04:08.898229 kubelet[2521]: E0702 09:04:08.898174 2521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c" containerName="clean-cilium-state" Jul 2 09:04:08.898229 kubelet[2521]: E0702 09:04:08.898185 2521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c" containerName="cilium-agent" Jul 2 09:04:08.898229 kubelet[2521]: E0702 09:04:08.898193 2521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c" containerName="apply-sysctl-overwrites" Jul 2 09:04:08.898229 kubelet[2521]: E0702 09:04:08.898200 2521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c" containerName="mount-bpf-fs" Jul 2 09:04:08.898229 kubelet[2521]: E0702 09:04:08.898207 2521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f" containerName="cilium-operator" Jul 2 09:04:08.898229 kubelet[2521]: E0702 09:04:08.898214 2521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c" containerName="mount-cgroup" Jul 2 09:04:08.898229 kubelet[2521]: I0702 09:04:08.898234 2521 memory_manager.go:354] "RemoveStaleState removing state" podUID="15bb0c28-7ea9-46ab-b36a-c93fcba3bc5f" containerName="cilium-operator" Jul 2 09:04:08.898427 kubelet[2521]: I0702 09:04:08.898240 2521 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b8bacf-8329-4f5d-9db2-1fcdb0439c2c" containerName="cilium-agent" Jul 2 09:04:08.901919 systemd[1]: Started sshd@24-10.0.0.47:22-10.0.0.1:48288.service - OpenSSH per-connection server daemon (10.0.0.1:48288). 
Jul 2 09:04:08.904837 systemd-logind[1416]: Removed session 24. Jul 2 09:04:08.916146 systemd[1]: Created slice kubepods-burstable-pod22689775_ccb9_4c7e_9192_26f66b5b44ae.slice - libcontainer container kubepods-burstable-pod22689775_ccb9_4c7e_9192_26f66b5b44ae.slice. Jul 2 09:04:08.949654 sshd[4321]: Accepted publickey for core from 10.0.0.1 port 48288 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:04:08.950912 sshd[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:04:08.954328 systemd-logind[1416]: New session 25 of user core. Jul 2 09:04:08.962576 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 2 09:04:09.013191 sshd[4321]: pam_unix(sshd:session): session closed for user core Jul 2 09:04:09.027142 systemd[1]: sshd@24-10.0.0.47:22-10.0.0.1:48288.service: Deactivated successfully. Jul 2 09:04:09.029460 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 09:04:09.031222 systemd-logind[1416]: Session 25 logged out. Waiting for processes to exit. Jul 2 09:04:09.046720 systemd[1]: Started sshd@25-10.0.0.47:22-10.0.0.1:48294.service - OpenSSH per-connection server daemon (10.0.0.1:48294). Jul 2 09:04:09.047920 systemd-logind[1416]: Removed session 25. 
Jul 2 09:04:09.069801 kubelet[2521]: I0702 09:04:09.069750 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/22689775-ccb9-4c7e-9192-26f66b5b44ae-cilium-run\") pod \"cilium-qj7nj\" (UID: \"22689775-ccb9-4c7e-9192-26f66b5b44ae\") " pod="kube-system/cilium-qj7nj"
Jul 2 09:04:09.069801 kubelet[2521]: I0702 09:04:09.069795 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbqxw\" (UniqueName: \"kubernetes.io/projected/22689775-ccb9-4c7e-9192-26f66b5b44ae-kube-api-access-nbqxw\") pod \"cilium-qj7nj\" (UID: \"22689775-ccb9-4c7e-9192-26f66b5b44ae\") " pod="kube-system/cilium-qj7nj"
Jul 2 09:04:09.069935 kubelet[2521]: I0702 09:04:09.069865 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/22689775-ccb9-4c7e-9192-26f66b5b44ae-etc-cni-netd\") pod \"cilium-qj7nj\" (UID: \"22689775-ccb9-4c7e-9192-26f66b5b44ae\") " pod="kube-system/cilium-qj7nj"
Jul 2 09:04:09.069935 kubelet[2521]: I0702 09:04:09.069913 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22689775-ccb9-4c7e-9192-26f66b5b44ae-cilium-config-path\") pod \"cilium-qj7nj\" (UID: \"22689775-ccb9-4c7e-9192-26f66b5b44ae\") " pod="kube-system/cilium-qj7nj"
Jul 2 09:04:09.069992 kubelet[2521]: I0702 09:04:09.069951 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/22689775-ccb9-4c7e-9192-26f66b5b44ae-bpf-maps\") pod \"cilium-qj7nj\" (UID: \"22689775-ccb9-4c7e-9192-26f66b5b44ae\") " pod="kube-system/cilium-qj7nj"
Jul 2 09:04:09.069992 kubelet[2521]: I0702 09:04:09.069977 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/22689775-ccb9-4c7e-9192-26f66b5b44ae-host-proc-sys-kernel\") pod \"cilium-qj7nj\" (UID: \"22689775-ccb9-4c7e-9192-26f66b5b44ae\") " pod="kube-system/cilium-qj7nj"
Jul 2 09:04:09.070038 kubelet[2521]: I0702 09:04:09.070009 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/22689775-ccb9-4c7e-9192-26f66b5b44ae-hostproc\") pod \"cilium-qj7nj\" (UID: \"22689775-ccb9-4c7e-9192-26f66b5b44ae\") " pod="kube-system/cilium-qj7nj"
Jul 2 09:04:09.070038 kubelet[2521]: I0702 09:04:09.070031 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22689775-ccb9-4c7e-9192-26f66b5b44ae-lib-modules\") pod \"cilium-qj7nj\" (UID: \"22689775-ccb9-4c7e-9192-26f66b5b44ae\") " pod="kube-system/cilium-qj7nj"
Jul 2 09:04:09.070079 kubelet[2521]: I0702 09:04:09.070050 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22689775-ccb9-4c7e-9192-26f66b5b44ae-xtables-lock\") pod \"cilium-qj7nj\" (UID: \"22689775-ccb9-4c7e-9192-26f66b5b44ae\") " pod="kube-system/cilium-qj7nj"
Jul 2 09:04:09.070079 kubelet[2521]: I0702 09:04:09.070069 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/22689775-ccb9-4c7e-9192-26f66b5b44ae-clustermesh-secrets\") pod \"cilium-qj7nj\" (UID: \"22689775-ccb9-4c7e-9192-26f66b5b44ae\") " pod="kube-system/cilium-qj7nj"
Jul 2 09:04:09.070123 kubelet[2521]: I0702 09:04:09.070088 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/22689775-ccb9-4c7e-9192-26f66b5b44ae-cilium-ipsec-secrets\") pod \"cilium-qj7nj\" (UID: \"22689775-ccb9-4c7e-9192-26f66b5b44ae\") " pod="kube-system/cilium-qj7nj"
Jul 2 09:04:09.070123 kubelet[2521]: I0702 09:04:09.070106 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/22689775-ccb9-4c7e-9192-26f66b5b44ae-host-proc-sys-net\") pod \"cilium-qj7nj\" (UID: \"22689775-ccb9-4c7e-9192-26f66b5b44ae\") " pod="kube-system/cilium-qj7nj"
Jul 2 09:04:09.070165 kubelet[2521]: I0702 09:04:09.070125 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/22689775-ccb9-4c7e-9192-26f66b5b44ae-hubble-tls\") pod \"cilium-qj7nj\" (UID: \"22689775-ccb9-4c7e-9192-26f66b5b44ae\") " pod="kube-system/cilium-qj7nj"
Jul 2 09:04:09.070165 kubelet[2521]: I0702 09:04:09.070145 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/22689775-ccb9-4c7e-9192-26f66b5b44ae-cilium-cgroup\") pod \"cilium-qj7nj\" (UID: \"22689775-ccb9-4c7e-9192-26f66b5b44ae\") " pod="kube-system/cilium-qj7nj"
Jul 2 09:04:09.070209 kubelet[2521]: I0702 09:04:09.070204 2521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/22689775-ccb9-4c7e-9192-26f66b5b44ae-cni-path\") pod \"cilium-qj7nj\" (UID: \"22689775-ccb9-4c7e-9192-26f66b5b44ae\") " pod="kube-system/cilium-qj7nj"
Jul 2 09:04:09.081360 sshd[4329]: Accepted publickey for core from 10.0.0.1 port 48294 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:04:09.082672 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:04:09.086427 systemd-logind[1416]: New session 26 of user core.
Jul 2 09:04:09.092503 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 2 09:04:09.222664 kubelet[2521]: E0702 09:04:09.222610 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:04:09.223300 containerd[1433]: time="2024-07-02T09:04:09.223264985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qj7nj,Uid:22689775-ccb9-4c7e-9192-26f66b5b44ae,Namespace:kube-system,Attempt:0,}"
Jul 2 09:04:09.239818 containerd[1433]: time="2024-07-02T09:04:09.239688381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 09:04:09.239818 containerd[1433]: time="2024-07-02T09:04:09.239753942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:04:09.239818 containerd[1433]: time="2024-07-02T09:04:09.239786302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 09:04:09.240482 containerd[1433]: time="2024-07-02T09:04:09.240006143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:04:09.255558 systemd[1]: Started cri-containerd-b4e8bde3d32c8350ec0a86965752868cc125ee25cf8e96db8771d08ed41122c9.scope - libcontainer container b4e8bde3d32c8350ec0a86965752868cc125ee25cf8e96db8771d08ed41122c9.
Jul 2 09:04:09.273938 containerd[1433]: time="2024-07-02T09:04:09.273898822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qj7nj,Uid:22689775-ccb9-4c7e-9192-26f66b5b44ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4e8bde3d32c8350ec0a86965752868cc125ee25cf8e96db8771d08ed41122c9\""
Jul 2 09:04:09.274651 kubelet[2521]: E0702 09:04:09.274623 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:04:09.277133 containerd[1433]: time="2024-07-02T09:04:09.277004604Z" level=info msg="CreateContainer within sandbox \"b4e8bde3d32c8350ec0a86965752868cc125ee25cf8e96db8771d08ed41122c9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 09:04:09.286362 containerd[1433]: time="2024-07-02T09:04:09.286320910Z" level=info msg="CreateContainer within sandbox \"b4e8bde3d32c8350ec0a86965752868cc125ee25cf8e96db8771d08ed41122c9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"92bed0c80bb07d73c328634988918445810dae2709f980b77a387ba8ac168e26\""
Jul 2 09:04:09.298916 containerd[1433]: time="2024-07-02T09:04:09.295705256Z" level=info msg="StartContainer for \"92bed0c80bb07d73c328634988918445810dae2709f980b77a387ba8ac168e26\""
Jul 2 09:04:09.323522 systemd[1]: Started cri-containerd-92bed0c80bb07d73c328634988918445810dae2709f980b77a387ba8ac168e26.scope - libcontainer container 92bed0c80bb07d73c328634988918445810dae2709f980b77a387ba8ac168e26.
Jul 2 09:04:09.343333 containerd[1433]: time="2024-07-02T09:04:09.343296111Z" level=info msg="StartContainer for \"92bed0c80bb07d73c328634988918445810dae2709f980b77a387ba8ac168e26\" returns successfully"
Jul 2 09:04:09.357467 systemd[1]: cri-containerd-92bed0c80bb07d73c328634988918445810dae2709f980b77a387ba8ac168e26.scope: Deactivated successfully.
Jul 2 09:04:09.381740 containerd[1433]: time="2024-07-02T09:04:09.381655061Z" level=info msg="shim disconnected" id=92bed0c80bb07d73c328634988918445810dae2709f980b77a387ba8ac168e26 namespace=k8s.io
Jul 2 09:04:09.381740 containerd[1433]: time="2024-07-02T09:04:09.381715461Z" level=warning msg="cleaning up after shim disconnected" id=92bed0c80bb07d73c328634988918445810dae2709f980b77a387ba8ac168e26 namespace=k8s.io
Jul 2 09:04:09.381740 containerd[1433]: time="2024-07-02T09:04:09.381726902Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:04:09.625535 kubelet[2521]: E0702 09:04:09.625418 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:04:09.628243 containerd[1433]: time="2024-07-02T09:04:09.628138037Z" level=info msg="CreateContainer within sandbox \"b4e8bde3d32c8350ec0a86965752868cc125ee25cf8e96db8771d08ed41122c9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 09:04:09.638848 containerd[1433]: time="2024-07-02T09:04:09.638736032Z" level=info msg="CreateContainer within sandbox \"b4e8bde3d32c8350ec0a86965752868cc125ee25cf8e96db8771d08ed41122c9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"09dfb00a1917ea3c1332cd362ea7230c0129fb6c53cbf5689239e6ad24c2d2e1\""
Jul 2 09:04:09.639857 containerd[1433]: time="2024-07-02T09:04:09.639817559Z" level=info msg="StartContainer for \"09dfb00a1917ea3c1332cd362ea7230c0129fb6c53cbf5689239e6ad24c2d2e1\""
Jul 2 09:04:09.662548 systemd[1]: Started cri-containerd-09dfb00a1917ea3c1332cd362ea7230c0129fb6c53cbf5689239e6ad24c2d2e1.scope - libcontainer container 09dfb00a1917ea3c1332cd362ea7230c0129fb6c53cbf5689239e6ad24c2d2e1.
Jul 2 09:04:09.682915 containerd[1433]: time="2024-07-02T09:04:09.682812622Z" level=info msg="StartContainer for \"09dfb00a1917ea3c1332cd362ea7230c0129fb6c53cbf5689239e6ad24c2d2e1\" returns successfully"
Jul 2 09:04:09.690024 systemd[1]: cri-containerd-09dfb00a1917ea3c1332cd362ea7230c0129fb6c53cbf5689239e6ad24c2d2e1.scope: Deactivated successfully.
Jul 2 09:04:09.709732 containerd[1433]: time="2024-07-02T09:04:09.709670251Z" level=info msg="shim disconnected" id=09dfb00a1917ea3c1332cd362ea7230c0129fb6c53cbf5689239e6ad24c2d2e1 namespace=k8s.io
Jul 2 09:04:09.709732 containerd[1433]: time="2024-07-02T09:04:09.709726132Z" level=warning msg="cleaning up after shim disconnected" id=09dfb00a1917ea3c1332cd362ea7230c0129fb6c53cbf5689239e6ad24c2d2e1 namespace=k8s.io
Jul 2 09:04:09.709732 containerd[1433]: time="2024-07-02T09:04:09.709734612Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:04:10.112105 kubelet[2521]: I0702 09:04:10.112030 2521 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T09:04:10Z","lastTransitionTime":"2024-07-02T09:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 2 09:04:10.627817 kubelet[2521]: E0702 09:04:10.627788 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:04:10.631039 containerd[1433]: time="2024-07-02T09:04:10.630899184Z" level=info msg="CreateContainer within sandbox \"b4e8bde3d32c8350ec0a86965752868cc125ee25cf8e96db8771d08ed41122c9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 09:04:10.644047 containerd[1433]: time="2024-07-02T09:04:10.643924114Z" level=info msg="CreateContainer within sandbox \"b4e8bde3d32c8350ec0a86965752868cc125ee25cf8e96db8771d08ed41122c9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7b51a0d86302cc5a5ede70078048d78b86c81b76cad5dffe03eb7720ed7ba888\""
Jul 2 09:04:10.644378 containerd[1433]: time="2024-07-02T09:04:10.644345236Z" level=info msg="StartContainer for \"7b51a0d86302cc5a5ede70078048d78b86c81b76cad5dffe03eb7720ed7ba888\""
Jul 2 09:04:10.673583 systemd[1]: Started cri-containerd-7b51a0d86302cc5a5ede70078048d78b86c81b76cad5dffe03eb7720ed7ba888.scope - libcontainer container 7b51a0d86302cc5a5ede70078048d78b86c81b76cad5dffe03eb7720ed7ba888.
Jul 2 09:04:10.694575 containerd[1433]: time="2024-07-02T09:04:10.694523741Z" level=info msg="StartContainer for \"7b51a0d86302cc5a5ede70078048d78b86c81b76cad5dffe03eb7720ed7ba888\" returns successfully"
Jul 2 09:04:10.697178 systemd[1]: cri-containerd-7b51a0d86302cc5a5ede70078048d78b86c81b76cad5dffe03eb7720ed7ba888.scope: Deactivated successfully.
Jul 2 09:04:10.717044 containerd[1433]: time="2024-07-02T09:04:10.716983855Z" level=info msg="shim disconnected" id=7b51a0d86302cc5a5ede70078048d78b86c81b76cad5dffe03eb7720ed7ba888 namespace=k8s.io
Jul 2 09:04:10.717044 containerd[1433]: time="2024-07-02T09:04:10.717035455Z" level=warning msg="cleaning up after shim disconnected" id=7b51a0d86302cc5a5ede70078048d78b86c81b76cad5dffe03eb7720ed7ba888 namespace=k8s.io
Jul 2 09:04:10.717044 containerd[1433]: time="2024-07-02T09:04:10.717045055Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:04:11.175743 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b51a0d86302cc5a5ede70078048d78b86c81b76cad5dffe03eb7720ed7ba888-rootfs.mount: Deactivated successfully.
Jul 2 09:04:11.631639 kubelet[2521]: E0702 09:04:11.631523 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:04:11.633897 containerd[1433]: time="2024-07-02T09:04:11.633615830Z" level=info msg="CreateContainer within sandbox \"b4e8bde3d32c8350ec0a86965752868cc125ee25cf8e96db8771d08ed41122c9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 09:04:11.696968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount9369648.mount: Deactivated successfully.
Jul 2 09:04:11.756953 containerd[1433]: time="2024-07-02T09:04:11.756904654Z" level=info msg="CreateContainer within sandbox \"b4e8bde3d32c8350ec0a86965752868cc125ee25cf8e96db8771d08ed41122c9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"963bf6ab675d1a0f371d38bd6b2c7be0db0d464d9aa258dcc1199a7bed2eeecc\""
Jul 2 09:04:11.757560 containerd[1433]: time="2024-07-02T09:04:11.757429578Z" level=info msg="StartContainer for \"963bf6ab675d1a0f371d38bd6b2c7be0db0d464d9aa258dcc1199a7bed2eeecc\""
Jul 2 09:04:11.781547 systemd[1]: Started cri-containerd-963bf6ab675d1a0f371d38bd6b2c7be0db0d464d9aa258dcc1199a7bed2eeecc.scope - libcontainer container 963bf6ab675d1a0f371d38bd6b2c7be0db0d464d9aa258dcc1199a7bed2eeecc.
Jul 2 09:04:11.798713 systemd[1]: cri-containerd-963bf6ab675d1a0f371d38bd6b2c7be0db0d464d9aa258dcc1199a7bed2eeecc.scope: Deactivated successfully.
Jul 2 09:04:11.802353 containerd[1433]: time="2024-07-02T09:04:11.801007749Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22689775_ccb9_4c7e_9192_26f66b5b44ae.slice/cri-containerd-963bf6ab675d1a0f371d38bd6b2c7be0db0d464d9aa258dcc1199a7bed2eeecc.scope/memory.events\": no such file or directory"
Jul 2 09:04:11.821737 containerd[1433]: time="2024-07-02T09:04:11.821665047Z" level=info msg="StartContainer for \"963bf6ab675d1a0f371d38bd6b2c7be0db0d464d9aa258dcc1199a7bed2eeecc\" returns successfully"
Jul 2 09:04:11.880308 containerd[1433]: time="2024-07-02T09:04:11.880248839Z" level=info msg="shim disconnected" id=963bf6ab675d1a0f371d38bd6b2c7be0db0d464d9aa258dcc1199a7bed2eeecc namespace=k8s.io
Jul 2 09:04:11.880308 containerd[1433]: time="2024-07-02T09:04:11.880303679Z" level=warning msg="cleaning up after shim disconnected" id=963bf6ab675d1a0f371d38bd6b2c7be0db0d464d9aa258dcc1199a7bed2eeecc namespace=k8s.io
Jul 2 09:04:11.880308 containerd[1433]: time="2024-07-02T09:04:11.880313839Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:04:12.175300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-963bf6ab675d1a0f371d38bd6b2c7be0db0d464d9aa258dcc1199a7bed2eeecc-rootfs.mount: Deactivated successfully.
Jul 2 09:04:12.635363 kubelet[2521]: E0702 09:04:12.635086 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:04:12.638272 containerd[1433]: time="2024-07-02T09:04:12.638102313Z" level=info msg="CreateContainer within sandbox \"b4e8bde3d32c8350ec0a86965752868cc125ee25cf8e96db8771d08ed41122c9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 09:04:12.652544 containerd[1433]: time="2024-07-02T09:04:12.652500127Z" level=info msg="CreateContainer within sandbox \"b4e8bde3d32c8350ec0a86965752868cc125ee25cf8e96db8771d08ed41122c9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"953fba1b58ea6c611ec931d4b4a1906204c86c70a5b16d2b483dea57d1058a81\""
Jul 2 09:04:12.653383 containerd[1433]: time="2024-07-02T09:04:12.653183091Z" level=info msg="StartContainer for \"953fba1b58ea6c611ec931d4b4a1906204c86c70a5b16d2b483dea57d1058a81\""
Jul 2 09:04:12.683529 systemd[1]: Started cri-containerd-953fba1b58ea6c611ec931d4b4a1906204c86c70a5b16d2b483dea57d1058a81.scope - libcontainer container 953fba1b58ea6c611ec931d4b4a1906204c86c70a5b16d2b483dea57d1058a81.
Jul 2 09:04:12.707813 containerd[1433]: time="2024-07-02T09:04:12.707765607Z" level=info msg="StartContainer for \"953fba1b58ea6c611ec931d4b4a1906204c86c70a5b16d2b483dea57d1058a81\" returns successfully"
Jul 2 09:04:12.956482 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 2 09:04:13.175424 systemd[1]: run-containerd-runc-k8s.io-953fba1b58ea6c611ec931d4b4a1906204c86c70a5b16d2b483dea57d1058a81-runc.ceG6bU.mount: Deactivated successfully.
Jul 2 09:04:13.640018 kubelet[2521]: E0702 09:04:13.639525 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:04:13.652090 kubelet[2521]: I0702 09:04:13.652053 2521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-qj7nj" podStartSLOduration=5.652015406 podStartE2EDuration="5.652015406s" podCreationTimestamp="2024-07-02 09:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:04:13.651460482 +0000 UTC m=+85.316838542" watchObservedRunningTime="2024-07-02 09:04:13.652015406 +0000 UTC m=+85.317393386"
Jul 2 09:04:15.223514 kubelet[2521]: E0702 09:04:15.223478 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:04:15.675361 systemd-networkd[1363]: lxc_health: Link UP
Jul 2 09:04:15.686401 systemd-networkd[1363]: lxc_health: Gained carrier
Jul 2 09:04:17.224323 kubelet[2521]: E0702 09:04:17.224157 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:04:17.648733 kubelet[2521]: E0702 09:04:17.648461 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:04:17.718562 systemd-networkd[1363]: lxc_health: Gained IPv6LL
Jul 2 09:04:18.649992 kubelet[2521]: E0702 09:04:18.649960 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:04:22.425039 kubelet[2521]: E0702 09:04:22.425001 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:04:23.869847 kubelet[2521]: E0702 09:04:23.869802 2521 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:45860->127.0.0.1:45737: write tcp 127.0.0.1:45860->127.0.0.1:45737: write: broken pipe
Jul 2 09:04:23.872684 sshd[4329]: pam_unix(sshd:session): session closed for user core
Jul 2 09:04:23.876133 systemd[1]: sshd@25-10.0.0.47:22-10.0.0.1:48294.service: Deactivated successfully.
Jul 2 09:04:23.877777 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 09:04:23.879019 systemd-logind[1416]: Session 26 logged out. Waiting for processes to exit.
Jul 2 09:04:23.879787 systemd-logind[1416]: Removed session 26.