Jul 2 08:33:39.902640 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 2 08:33:39.902663 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024
Jul 2 08:33:39.902673 kernel: KASLR enabled
Jul 2 08:33:39.902679 kernel: efi: EFI v2.7 by EDK II
Jul 2 08:33:39.902685 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb8fd018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 2 08:33:39.902691 kernel: random: crng init done
Jul 2 08:33:39.902698 kernel: ACPI: Early table checksum verification disabled
Jul 2 08:33:39.902704 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 2 08:33:39.902711 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 2 08:33:39.902718 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:33:39.902725 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:33:39.902731 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:33:39.902737 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:33:39.902751 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:33:39.902760 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:33:39.902768 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:33:39.902775 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:33:39.902782 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:33:39.902788 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 2 08:33:39.902794 kernel: NUMA: Failed to initialise from firmware
Jul 2 08:33:39.902801 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 08:33:39.902808 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Jul 2 08:33:39.902814 kernel: Zone ranges:
Jul 2 08:33:39.902820 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 08:33:39.902827 kernel: DMA32 empty
Jul 2 08:33:39.902834 kernel: Normal empty
Jul 2 08:33:39.902841 kernel: Movable zone start for each node
Jul 2 08:33:39.902847 kernel: Early memory node ranges
Jul 2 08:33:39.902854 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 2 08:33:39.902860 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 2 08:33:39.902867 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 2 08:33:39.902873 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 2 08:33:39.902879 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 2 08:33:39.902886 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 2 08:33:39.902892 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 2 08:33:39.902899 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 08:33:39.902905 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 2 08:33:39.902913 kernel: psci: probing for conduit method from ACPI.
Jul 2 08:33:39.902919 kernel: psci: PSCIv1.1 detected in firmware.
Jul 2 08:33:39.902926 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 08:33:39.902936 kernel: psci: Trusted OS migration not required
Jul 2 08:33:39.902942 kernel: psci: SMC Calling Convention v1.1
Jul 2 08:33:39.902950 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 2 08:33:39.902958 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Jul 2 08:33:39.902965 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Jul 2 08:33:39.902972 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 2 08:33:39.902979 kernel: Detected PIPT I-cache on CPU0
Jul 2 08:33:39.902986 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 08:33:39.902993 kernel: CPU features: detected: Hardware dirty bit management
Jul 2 08:33:39.903000 kernel: CPU features: detected: Spectre-v4
Jul 2 08:33:39.903007 kernel: CPU features: detected: Spectre-BHB
Jul 2 08:33:39.903014 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 2 08:33:39.903022 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 2 08:33:39.903030 kernel: CPU features: detected: ARM erratum 1418040
Jul 2 08:33:39.903037 kernel: alternatives: applying boot alternatives
Jul 2 08:33:39.903045 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=19e11d11f09b621c4c7d739b39b57f4bac8caa3f9723d7ceb0e9d7c7445769b7
Jul 2 08:33:39.903053 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 08:33:39.903060 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 08:33:39.903067 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 08:33:39.903074 kernel: Fallback order for Node 0: 0
Jul 2 08:33:39.903080 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 2 08:33:39.903087 kernel: Policy zone: DMA
Jul 2 08:33:39.903094 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 08:33:39.903101 kernel: software IO TLB: area num 4.
Jul 2 08:33:39.903109 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 2 08:33:39.903117 kernel: Memory: 2386848K/2572288K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 185440K reserved, 0K cma-reserved)
Jul 2 08:33:39.903124 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 2 08:33:39.903131 kernel: trace event string verifier disabled
Jul 2 08:33:39.903137 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 08:33:39.903145 kernel: rcu: RCU event tracing is enabled.
Jul 2 08:33:39.903152 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 2 08:33:39.903159 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 08:33:39.903166 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 08:33:39.903173 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 08:33:39.903180 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 2 08:33:39.903187 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 08:33:39.903196 kernel: GICv3: 256 SPIs implemented
Jul 2 08:33:39.903203 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 08:33:39.903210 kernel: Root IRQ handler: gic_handle_irq
Jul 2 08:33:39.903216 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 2 08:33:39.903223 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 2 08:33:39.903230 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 2 08:33:39.903237 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Jul 2 08:33:39.903245 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Jul 2 08:33:39.903252 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 2 08:33:39.903258 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 2 08:33:39.903265 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 08:33:39.903274 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 08:33:39.903281 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 2 08:33:39.903288 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 2 08:33:39.903295 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 2 08:33:39.903302 kernel: arm-pv: using stolen time PV
Jul 2 08:33:39.903309 kernel: Console: colour dummy device 80x25
Jul 2 08:33:39.903316 kernel: ACPI: Core revision 20230628
Jul 2 08:33:39.903323 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 2 08:33:39.903330 kernel: pid_max: default: 32768 minimum: 301
Jul 2 08:33:39.903337 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 08:33:39.903346 kernel: SELinux: Initializing.
Jul 2 08:33:39.903353 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 08:33:39.903360 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 08:33:39.903368 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 08:33:39.903375 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 08:33:39.903382 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 08:33:39.903389 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 08:33:39.903396 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 2 08:33:39.903403 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 2 08:33:39.903411 kernel: Remapping and enabling EFI services.
Jul 2 08:33:39.903418 kernel: smp: Bringing up secondary CPUs ...
Jul 2 08:33:39.903425 kernel: Detected PIPT I-cache on CPU1
Jul 2 08:33:39.903432 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 2 08:33:39.903439 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 2 08:33:39.903446 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 08:33:39.903453 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 2 08:33:39.903461 kernel: Detected PIPT I-cache on CPU2
Jul 2 08:33:39.903468 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 2 08:33:39.903475 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 2 08:33:39.903484 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 08:33:39.903491 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 2 08:33:39.903503 kernel: Detected PIPT I-cache on CPU3
Jul 2 08:33:39.903512 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 2 08:33:39.903520 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 2 08:33:39.903527 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 08:33:39.903535 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 2 08:33:39.903542 kernel: smp: Brought up 1 node, 4 CPUs
Jul 2 08:33:39.903550 kernel: SMP: Total of 4 processors activated.
Jul 2 08:33:39.903646 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 08:33:39.903654 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 2 08:33:39.903661 kernel: CPU features: detected: Common not Private translations
Jul 2 08:33:39.903669 kernel: CPU features: detected: CRC32 instructions
Jul 2 08:33:39.903676 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 2 08:33:39.903684 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 2 08:33:39.903691 kernel: CPU features: detected: LSE atomic instructions
Jul 2 08:33:39.903699 kernel: CPU features: detected: Privileged Access Never
Jul 2 08:33:39.903708 kernel: CPU features: detected: RAS Extension Support
Jul 2 08:33:39.903716 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 2 08:33:39.903723 kernel: CPU: All CPU(s) started at EL1
Jul 2 08:33:39.903731 kernel: alternatives: applying system-wide alternatives
Jul 2 08:33:39.903738 kernel: devtmpfs: initialized
Jul 2 08:33:39.903753 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 08:33:39.903761 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 2 08:33:39.903769 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 08:33:39.903777 kernel: SMBIOS 3.0.0 present.
Jul 2 08:33:39.903787 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 2 08:33:39.903794 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 08:33:39.903802 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 08:33:39.903809 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 08:33:39.903817 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 08:33:39.903825 kernel: audit: initializing netlink subsys (disabled)
Jul 2 08:33:39.903832 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Jul 2 08:33:39.903840 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 08:33:39.903847 kernel: cpuidle: using governor menu
Jul 2 08:33:39.903856 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 08:33:39.903864 kernel: ASID allocator initialised with 32768 entries
Jul 2 08:33:39.903871 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 08:33:39.903879 kernel: Serial: AMBA PL011 UART driver
Jul 2 08:33:39.903886 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 2 08:33:39.903894 kernel: Modules: 0 pages in range for non-PLT usage
Jul 2 08:33:39.903901 kernel: Modules: 509120 pages in range for PLT usage
Jul 2 08:33:39.903908 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 08:33:39.903916 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 08:33:39.903926 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 08:33:39.903934 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 2 08:33:39.903941 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 08:33:39.903949 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 08:33:39.903956 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 08:33:39.903977 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 2 08:33:39.903984 kernel: ACPI: Added _OSI(Module Device)
Jul 2 08:33:39.903992 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 08:33:39.903999 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 08:33:39.904008 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 08:33:39.904015 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 08:33:39.904023 kernel: ACPI: Interpreter enabled
Jul 2 08:33:39.904031 kernel: ACPI: Using GIC for interrupt routing
Jul 2 08:33:39.904039 kernel: ACPI: MCFG table detected, 1 entries
Jul 2 08:33:39.904047 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 2 08:33:39.904054 kernel: printk: console [ttyAMA0] enabled
Jul 2 08:33:39.904062 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 08:33:39.904197 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 08:33:39.904271 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 2 08:33:39.904336 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 2 08:33:39.904397 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 2 08:33:39.904461 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 2 08:33:39.904471 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 2 08:33:39.904478 kernel: PCI host bridge to bus 0000:00
Jul 2 08:33:39.904565 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 2 08:33:39.904634 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 2 08:33:39.904691 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 2 08:33:39.904758 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 08:33:39.904838 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 2 08:33:39.904913 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 08:33:39.904979 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 2 08:33:39.905048 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 2 08:33:39.905111 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 08:33:39.905176 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 08:33:39.905241 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 2 08:33:39.905317 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 2 08:33:39.905376 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 2 08:33:39.905433 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 2 08:33:39.905495 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 2 08:33:39.905505 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 2 08:33:39.905513 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 2 08:33:39.905521 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 2 08:33:39.905529 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 2 08:33:39.905536 kernel: iommu: Default domain type: Translated
Jul 2 08:33:39.905544 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 08:33:39.905560 kernel: efivars: Registered efivars operations
Jul 2 08:33:39.905582 kernel: vgaarb: loaded
Jul 2 08:33:39.905593 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 08:33:39.905601 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 08:33:39.905608 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 08:33:39.905616 kernel: pnp: PnP ACPI init
Jul 2 08:33:39.905693 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 2 08:33:39.905705 kernel: pnp: PnP ACPI: found 1 devices
Jul 2 08:33:39.905713 kernel: NET: Registered PF_INET protocol family
Jul 2 08:33:39.905721 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 08:33:39.905731 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 08:33:39.905739 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 08:33:39.905753 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 08:33:39.905760 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 2 08:33:39.905768 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 08:33:39.905776 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 08:33:39.905783 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 08:33:39.905791 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 08:33:39.905798 kernel: PCI: CLS 0 bytes, default 64
Jul 2 08:33:39.905808 kernel: kvm [1]: HYP mode not available
Jul 2 08:33:39.905815 kernel: Initialise system trusted keyrings
Jul 2 08:33:39.905823 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 08:33:39.905830 kernel: Key type asymmetric registered
Jul 2 08:33:39.905838 kernel: Asymmetric key parser 'x509' registered
Jul 2 08:33:39.905845 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 2 08:33:39.905853 kernel: io scheduler mq-deadline registered
Jul 2 08:33:39.905860 kernel: io scheduler kyber registered
Jul 2 08:33:39.905868 kernel: io scheduler bfq registered
Jul 2 08:33:39.905877 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 2 08:33:39.905884 kernel: ACPI: button: Power Button [PWRB]
Jul 2 08:33:39.905892 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 2 08:33:39.905962 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 2 08:33:39.905972 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 08:33:39.905980 kernel: thunder_xcv, ver 1.0
Jul 2 08:33:39.905987 kernel: thunder_bgx, ver 1.0
Jul 2 08:33:39.905995 kernel: nicpf, ver 1.0
Jul 2 08:33:39.906002 kernel: nicvf, ver 1.0
Jul 2 08:33:39.906077 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 08:33:39.906142 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T08:33:39 UTC (1719909219)
Jul 2 08:33:39.906152 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 08:33:39.906160 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 2 08:33:39.906167 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 2 08:33:39.906175 kernel: watchdog: Hard watchdog permanently disabled
Jul 2 08:33:39.906183 kernel: NET: Registered PF_INET6 protocol family
Jul 2 08:33:39.906190 kernel: Segment Routing with IPv6
Jul 2 08:33:39.906200 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 08:33:39.906208 kernel: NET: Registered PF_PACKET protocol family
Jul 2 08:33:39.906216 kernel: Key type dns_resolver registered
Jul 2 08:33:39.906223 kernel: registered taskstats version 1
Jul 2 08:33:39.906230 kernel: Loading compiled-in X.509 certificates
Jul 2 08:33:39.906238 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf'
Jul 2 08:33:39.906246 kernel: Key type .fscrypt registered
Jul 2 08:33:39.906253 kernel: Key type fscrypt-provisioning registered
Jul 2 08:33:39.906265 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 08:33:39.906275 kernel: ima: Allocated hash algorithm: sha1
Jul 2 08:33:39.906282 kernel: ima: No architecture policies found
Jul 2 08:33:39.906290 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 08:33:39.906298 kernel: clk: Disabling unused clocks
Jul 2 08:33:39.906305 kernel: Freeing unused kernel memory: 39040K
Jul 2 08:33:39.906313 kernel: Run /init as init process
Jul 2 08:33:39.906320 kernel: with arguments:
Jul 2 08:33:39.906327 kernel: /init
Jul 2 08:33:39.906335 kernel: with environment:
Jul 2 08:33:39.906343 kernel: HOME=/
Jul 2 08:33:39.906351 kernel: TERM=linux
Jul 2 08:33:39.906358 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 08:33:39.906367 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 08:33:39.906377 systemd[1]: Detected virtualization kvm.
Jul 2 08:33:39.906385 systemd[1]: Detected architecture arm64.
Jul 2 08:33:39.906393 systemd[1]: Running in initrd.
Jul 2 08:33:39.906402 systemd[1]: No hostname configured, using default hostname.
Jul 2 08:33:39.906410 systemd[1]: Hostname set to .
Jul 2 08:33:39.906419 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 08:33:39.906427 systemd[1]: Queued start job for default target initrd.target.
Jul 2 08:33:39.906435 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 08:33:39.906444 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 08:33:39.906453 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 08:33:39.906461 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 08:33:39.906471 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 08:33:39.906480 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 08:33:39.906490 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 08:33:39.906499 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 08:33:39.906507 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 08:33:39.906515 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 08:33:39.906524 systemd[1]: Reached target paths.target - Path Units.
Jul 2 08:33:39.906533 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 08:33:39.906541 systemd[1]: Reached target swap.target - Swaps.
Jul 2 08:33:39.906550 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 08:33:39.906575 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 08:33:39.906583 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 08:33:39.906592 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 08:33:39.906600 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 08:33:39.906608 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 08:33:39.906617 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 08:33:39.906628 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 08:33:39.906636 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 08:33:39.906644 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 08:33:39.906653 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 08:33:39.906661 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 08:33:39.906669 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 08:33:39.906678 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 08:33:39.906686 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 08:33:39.906695 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 08:33:39.906704 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 08:33:39.906712 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 08:33:39.906720 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 08:33:39.906729 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 08:33:39.906738 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 08:33:39.906752 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 08:33:39.906760 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 08:33:39.906787 systemd-journald[238]: Collecting audit messages is disabled.
Jul 2 08:33:39.906809 kernel: Bridge firewalling registered
Jul 2 08:33:39.906817 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 08:33:39.906826 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 08:33:39.906835 systemd-journald[238]: Journal started
Jul 2 08:33:39.906854 systemd-journald[238]: Runtime Journal (/run/log/journal/692d6f59351841f59e89316c3fcc6d5f) is 5.9M, max 47.3M, 41.4M free.
Jul 2 08:33:39.883355 systemd-modules-load[239]: Inserted module 'overlay'
Jul 2 08:33:39.901657 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jul 2 08:33:39.909759 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 08:33:39.912613 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 08:33:39.912650 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 08:33:39.917353 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 08:33:39.920048 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 08:33:39.921229 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 08:33:39.922948 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 08:33:39.927683 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 08:33:39.930369 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 08:33:39.932819 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 08:33:39.940586 dracut-cmdline[274]: dracut-dracut-053
Jul 2 08:33:39.942961 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=19e11d11f09b621c4c7d739b39b57f4bac8caa3f9723d7ceb0e9d7c7445769b7
Jul 2 08:33:39.956086 systemd-resolved[276]: Positive Trust Anchors:
Jul 2 08:33:39.956106 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 08:33:39.956137 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 08:33:39.960690 systemd-resolved[276]: Defaulting to hostname 'linux'.
Jul 2 08:33:39.961569 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 08:33:39.963189 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 08:33:40.004584 kernel: SCSI subsystem initialized
Jul 2 08:33:40.009570 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 08:33:40.016574 kernel: iscsi: registered transport (tcp)
Jul 2 08:33:40.029598 kernel: iscsi: registered transport (qla4xxx)
Jul 2 08:33:40.029642 kernel: QLogic iSCSI HBA Driver
Jul 2 08:33:40.070082 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 08:33:40.080708 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 08:33:40.099612 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 08:33:40.099658 kernel: device-mapper: uevent: version 1.0.3
Jul 2 08:33:40.100821 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 08:33:40.146577 kernel: raid6: neonx8 gen() 15684 MB/s
Jul 2 08:33:40.163567 kernel: raid6: neonx4 gen() 15631 MB/s
Jul 2 08:33:40.180563 kernel: raid6: neonx2 gen() 13226 MB/s
Jul 2 08:33:40.197564 kernel: raid6: neonx1 gen() 10425 MB/s
Jul 2 08:33:40.214564 kernel: raid6: int64x8 gen() 6911 MB/s
Jul 2 08:33:40.231580 kernel: raid6: int64x4 gen() 7290 MB/s
Jul 2 08:33:40.248578 kernel: raid6: int64x2 gen() 6096 MB/s
Jul 2 08:33:40.265580 kernel: raid6: int64x1 gen() 5034 MB/s
Jul 2 08:33:40.265605 kernel: raid6: using algorithm neonx8 gen() 15684 MB/s
Jul 2 08:33:40.282585 kernel: raid6: .... xor() 11885 MB/s, rmw enabled
Jul 2 08:33:40.282607 kernel: raid6: using neon recovery algorithm
Jul 2 08:33:40.287860 kernel: xor: measuring software checksum speed
Jul 2 08:33:40.287875 kernel: 8regs : 19444 MB/sec
Jul 2 08:33:40.288724 kernel: 32regs : 19725 MB/sec
Jul 2 08:33:40.289911 kernel: arm64_neon : 27215 MB/sec
Jul 2 08:33:40.289935 kernel: xor: using function: arm64_neon (27215 MB/sec)
Jul 2 08:33:40.342583 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 08:33:40.353613 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 08:33:40.364763 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 08:33:40.375641 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Jul 2 08:33:40.378751 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 08:33:40.380950 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 08:33:40.395272 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation
Jul 2 08:33:40.419885 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 08:33:40.434664 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 08:33:40.473084 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 08:33:40.478722 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 08:33:40.490541 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 08:33:40.492224 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 08:33:40.493292 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 08:33:40.495069 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 08:33:40.504990 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 08:33:40.513824 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 08:33:40.518569 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 2 08:33:40.530610 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 2 08:33:40.530726 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 08:33:40.530750 kernel: GPT:9289727 != 19775487
Jul 2 08:33:40.530762 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 08:33:40.530772 kernel: GPT:9289727 != 19775487
Jul 2 08:33:40.530788 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 08:33:40.530799 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 08:33:40.524870 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 08:33:40.524981 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 08:33:40.531643 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 08:33:40.532534 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 08:33:40.532687 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 08:33:40.534625 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 08:33:40.545793 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 08:33:40.549591 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (520)
Jul 2 08:33:40.552614 kernel: BTRFS: device fsid 9b0eb482-485a-4aff-8de4-e09ff146eadf devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (509)
Jul 2 08:33:40.560589 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 08:33:40.565202 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 2 08:33:40.569526 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 2 08:33:40.573764 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 08:33:40.577438 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 2 08:33:40.578372 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 2 08:33:40.592769 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 08:33:40.594288 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 08:33:40.598765 disk-uuid[551]: Primary Header is updated.
Jul 2 08:33:40.598765 disk-uuid[551]: Secondary Entries is updated.
Jul 2 08:33:40.598765 disk-uuid[551]: Secondary Header is updated.
Jul 2 08:33:40.605203 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 08:33:40.615066 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 08:33:41.616639 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 08:33:41.616767 disk-uuid[553]: The operation has completed successfully.
Jul 2 08:33:41.642942 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 08:33:41.643041 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 08:33:41.659719 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 08:33:41.665455 sh[577]: Success
Jul 2 08:33:41.686587 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 2 08:33:41.728089 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 08:33:41.729677 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 08:33:41.730432 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 08:33:41.740931 kernel: BTRFS info (device dm-0): first mount of filesystem 9b0eb482-485a-4aff-8de4-e09ff146eadf
Jul 2 08:33:41.740967 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 2 08:33:41.740979 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 08:33:41.741772 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 08:33:41.742791 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 08:33:41.745867 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 08:33:41.747192 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 08:33:41.756738 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 08:33:41.758129 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 08:33:41.766136 kernel: BTRFS info (device vda6): first mount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483
Jul 2 08:33:41.766175 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 08:33:41.766193 kernel: BTRFS info (device vda6): using free space tree
Jul 2 08:33:41.768603 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 08:33:41.776668 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 08:33:41.778580 kernel: BTRFS info (device vda6): last unmount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483
Jul 2 08:33:41.784456 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 08:33:41.793992 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 08:33:41.859313 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 08:33:41.870721 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 08:33:41.899089 systemd-networkd[760]: lo: Link UP
Jul 2 08:33:41.899102 systemd-networkd[760]: lo: Gained carrier
Jul 2 08:33:41.899835 systemd-networkd[760]: Enumeration completed
Jul 2 08:33:41.900227 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 08:33:41.900230 systemd-networkd[760]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 08:33:41.901005 systemd-networkd[760]: eth0: Link UP
Jul 2 08:33:41.901008 systemd-networkd[760]: eth0: Gained carrier
Jul 2 08:33:41.901015 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 08:33:41.901666 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 08:33:41.902771 systemd[1]: Reached target network.target - Network.
Jul 2 08:33:41.917147 ignition[669]: Ignition 2.18.0
Jul 2 08:33:41.917157 ignition[669]: Stage: fetch-offline
Jul 2 08:33:41.917197 ignition[669]: no configs at "/usr/lib/ignition/base.d"
Jul 2 08:33:41.917209 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 08:33:41.917299 ignition[669]: parsed url from cmdline: ""
Jul 2 08:33:41.920756 systemd-networkd[760]: eth0: DHCPv4 address 10.0.0.141/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 08:33:41.917302 ignition[669]: no config URL provided
Jul 2 08:33:41.917307 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 08:33:41.917314 ignition[669]: no config at "/usr/lib/ignition/user.ign"
Jul 2 08:33:41.917341 ignition[669]: op(1): [started] loading QEMU firmware config module
Jul 2 08:33:41.917345 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 2 08:33:41.923078 ignition[669]: op(1): [finished] loading QEMU firmware config module
Jul 2 08:33:41.962924 ignition[669]: parsing config with SHA512: 80889060b50156b69f7c0bda2a20a24ae119bcf86878981a6625f8f69899a7ee42c4437471c60285129c41858a7e6a4689dc6c51be9fc8e53e5f10e138433f14
Jul 2 08:33:41.967755 unknown[669]: fetched base config from "system"
Jul 2 08:33:41.967777 unknown[669]: fetched user config from "qemu"
Jul 2 08:33:41.969886 ignition[669]: fetch-offline: fetch-offline passed
Jul 2 08:33:41.969962 ignition[669]: Ignition finished successfully
Jul 2 08:33:41.971536 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 08:33:41.972860 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 2 08:33:41.980744 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 08:33:41.991853 ignition[772]: Ignition 2.18.0
Jul 2 08:33:41.991862 ignition[772]: Stage: kargs
Jul 2 08:33:41.992031 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jul 2 08:33:41.992040 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 08:33:41.992974 ignition[772]: kargs: kargs passed
Jul 2 08:33:41.993026 ignition[772]: Ignition finished successfully
Jul 2 08:33:41.995834 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 08:33:42.005781 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 08:33:42.016375 ignition[781]: Ignition 2.18.0
Jul 2 08:33:42.016385 ignition[781]: Stage: disks
Jul 2 08:33:42.016537 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jul 2 08:33:42.019006 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 08:33:42.016546 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 08:33:42.020526 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 08:33:42.017489 ignition[781]: disks: disks passed
Jul 2 08:33:42.022145 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 08:33:42.017533 ignition[781]: Ignition finished successfully
Jul 2 08:33:42.024116 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 08:33:42.025950 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 08:33:42.027359 systemd[1]: Reached target basic.target - Basic System.
Jul 2 08:33:42.041736 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 08:33:42.051623 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 08:33:42.055542 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 08:33:42.068671 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 08:33:42.115482 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 08:33:42.116895 kernel: EXT4-fs (vda9): mounted filesystem 9aacfbff-cef8-4758-afb5-6310e7c6c5e6 r/w with ordered data mode. Quota mode: none.
Jul 2 08:33:42.116619 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 08:33:42.133671 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 08:33:42.135245 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 08:33:42.136137 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 2 08:33:42.136175 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 08:33:42.143053 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800)
Jul 2 08:33:42.136196 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 08:33:42.146751 kernel: BTRFS info (device vda6): first mount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483
Jul 2 08:33:42.146780 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 08:33:42.146791 kernel: BTRFS info (device vda6): using free space tree
Jul 2 08:33:42.142796 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 08:33:42.145211 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 08:33:42.150568 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 08:33:42.151326 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 08:33:42.190856 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 08:33:42.194665 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Jul 2 08:33:42.198530 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 08:33:42.202631 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 08:33:42.271901 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 08:33:42.278640 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 08:33:42.280173 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 08:33:42.285579 kernel: BTRFS info (device vda6): last unmount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483
Jul 2 08:33:42.301470 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 08:33:42.303227 ignition[913]: INFO : Ignition 2.18.0
Jul 2 08:33:42.303227 ignition[913]: INFO : Stage: mount
Jul 2 08:33:42.303227 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 08:33:42.303227 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 08:33:42.303227 ignition[913]: INFO : mount: mount passed
Jul 2 08:33:42.303227 ignition[913]: INFO : Ignition finished successfully
Jul 2 08:33:42.305609 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 08:33:42.316664 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 08:33:42.740394 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 08:33:42.754783 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 08:33:42.759584 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (929)
Jul 2 08:33:42.762016 kernel: BTRFS info (device vda6): first mount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483
Jul 2 08:33:42.762041 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 08:33:42.762053 kernel: BTRFS info (device vda6): using free space tree
Jul 2 08:33:42.763582 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 08:33:42.764951 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 08:33:42.780602 ignition[946]: INFO : Ignition 2.18.0
Jul 2 08:33:42.780602 ignition[946]: INFO : Stage: files
Jul 2 08:33:42.782178 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 08:33:42.782178 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 08:33:42.784534 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 08:33:42.784534 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 08:33:42.784534 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 08:33:42.788313 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 08:33:42.788313 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 08:33:42.788313 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 08:33:42.788313 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 08:33:42.788313 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 08:33:42.788313 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 08:33:42.788313 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 2 08:33:42.785220 unknown[946]: wrote ssh authorized keys file for user: core
Jul 2 08:33:42.822089 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 08:33:42.866481 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 08:33:42.868859 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 08:33:42.868859 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 2 08:33:43.161573 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jul 2 08:33:43.245322 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 08:33:43.247195 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 08:33:43.247195 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 08:33:43.247195 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 08:33:43.247195 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 08:33:43.247195 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 08:33:43.247195 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 08:33:43.247195 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 08:33:43.247195 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 08:33:43.247195 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 08:33:43.247195 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 08:33:43.247195 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 08:33:43.247195 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 08:33:43.247195 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 08:33:43.247195 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1
Jul 2 08:33:43.482754 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jul 2 08:33:43.655963 systemd-networkd[760]: eth0: Gained IPv6LL
Jul 2 08:33:43.688396 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 08:33:43.688396 ignition[946]: INFO : files: op(d): [started] processing unit "containerd.service"
Jul 2 08:33:43.691772 ignition[946]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 08:33:43.691772 ignition[946]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 08:33:43.691772 ignition[946]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jul 2 08:33:43.691772 ignition[946]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jul 2 08:33:43.691772 ignition[946]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 08:33:43.691772 ignition[946]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 08:33:43.691772 ignition[946]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jul 2 08:33:43.691772 ignition[946]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Jul 2 08:33:43.691772 ignition[946]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 08:33:43.691772 ignition[946]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 08:33:43.691772 ignition[946]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Jul 2 08:33:43.691772 ignition[946]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Jul 2 08:33:43.712749 ignition[946]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 08:33:43.716365 ignition[946]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 08:33:43.719044 ignition[946]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 2 08:33:43.719044 ignition[946]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 08:33:43.719044 ignition[946]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 08:33:43.719044 ignition[946]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 08:33:43.719044 ignition[946]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 08:33:43.719044 ignition[946]: INFO : files: files passed
Jul 2 08:33:43.719044 ignition[946]: INFO : Ignition finished successfully
Jul 2 08:33:43.720615 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 08:33:43.733753 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 08:33:43.735868 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 08:33:43.737707 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 08:33:43.737800 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 08:33:43.743255 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 2 08:33:43.745499 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 08:33:43.745499 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 08:33:43.748940 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 08:33:43.749270 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 08:33:43.751882 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 08:33:43.760699 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 08:33:43.779634 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 08:33:43.779734 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 08:33:43.781600 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 08:33:43.783227 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 08:33:43.784746 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 08:33:43.785413 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 08:33:43.799476 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 08:33:43.810772 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 08:33:43.817927 systemd[1]: Stopped target network.target - Network.
Jul 2 08:33:43.818629 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 08:33:43.820051 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 08:33:43.821691 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 08:33:43.823172 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 08:33:43.823276 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 08:33:43.825375 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 08:33:43.826169 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 08:33:43.827689 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 08:33:43.829249 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 08:33:43.830654 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 08:33:43.832218 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 08:33:43.833770 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 08:33:43.835423 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 08:33:43.837110 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 08:33:43.838730 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 08:33:43.840010 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 08:33:43.840115 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 08:33:43.842137 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 08:33:43.843689 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 08:33:43.845231 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 08:33:43.848637 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 08:33:43.849570 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 08:33:43.849672 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 08:33:43.852149 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 08:33:43.852258 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 08:33:43.853881 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 08:33:43.855196 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 08:33:43.858647 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 08:33:43.859610 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 08:33:43.861395 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 08:33:43.862676 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 08:33:43.862763 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 08:33:43.864022 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 08:33:43.864095 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 08:33:43.865417 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 08:33:43.865515 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 08:33:43.866939 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 08:33:43.867030 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 08:33:43.878298 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 08:33:43.879176 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 08:33:43.879304 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 08:33:43.881777 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 08:33:43.882813 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 08:33:43.886617 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 08:33:43.887528 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 08:33:43.891859 ignition[1000]: INFO : Ignition 2.18.0
Jul 2 08:33:43.891859 ignition[1000]: INFO : Stage: umount
Jul 2 08:33:43.891859 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 08:33:43.891859 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 08:33:43.891859 ignition[1000]: INFO : umount: umount passed
Jul 2 08:33:43.891859 ignition[1000]: INFO : Ignition finished successfully
Jul 2 08:33:43.887665 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 08:33:43.889774 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 08:33:43.889871 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 08:33:43.893626 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 08:33:43.895581 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 08:33:43.896742 systemd-networkd[760]: eth0: DHCPv6 lease lost
Jul 2 08:33:43.897730 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 08:33:43.898273 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 08:33:43.898360 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 08:33:43.901360 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 08:33:43.901464 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 08:33:43.906410 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 08:33:43.906492 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 08:33:43.909170 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 08:33:43.909203 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 08:33:43.910346 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 08:33:43.910392 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 08:33:43.912151 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 08:33:43.912191 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 08:33:43.913964 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 08:33:43.914004 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 08:33:43.915733 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 08:33:43.915774 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 08:33:43.932690 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 08:33:43.933530 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 08:33:43.933612 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 08:33:43.935570 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 08:33:43.935615 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 08:33:43.937325 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 08:33:43.937368 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 08:33:43.939466 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 08:33:43.939506 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 08:33:43.942351 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 08:33:43.956460 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 08:33:43.956591 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 08:33:43.959225 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 08:33:43.959374 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 08:33:43.961144 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 08:33:43.961211 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 08:33:43.962375 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 08:33:43.962405 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 08:33:43.964490 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 08:33:43.964541 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 08:33:43.967218 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 08:33:43.967259 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 08:33:43.969765 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 08:33:43.969807 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 08:33:43.976558 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 08:33:43.977508 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 08:33:43.977577 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 08:33:43.979664 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 08:33:43.979717 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 08:33:43.981727 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 08:33:43.981836 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 08:33:43.983525 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 08:33:43.983634 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 08:33:43.985747 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 08:33:43.987000 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 08:33:43.987055 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 08:33:43.989348 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 08:33:43.997903 systemd[1]: Switching root.
Jul 2 08:33:44.022077 systemd-journald[238]: Journal stopped
Jul 2 08:33:44.739022 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Jul 2 08:33:44.739073 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 08:33:44.739085 kernel: SELinux: policy capability open_perms=1
Jul 2 08:33:44.739097 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 08:33:44.739106 kernel: SELinux: policy capability always_check_network=0
Jul 2 08:33:44.739115 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 08:33:44.739125 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 08:33:44.739134 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 08:33:44.739143 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 08:33:44.739156 kernel: audit: type=1403 audit(1719909224.233:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 08:33:44.739167 systemd[1]: Successfully loaded SELinux policy in 33.563ms.
Jul 2 08:33:44.739183 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.648ms.
Jul 2 08:33:44.739196 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 08:33:44.739207 systemd[1]: Detected virtualization kvm.
Jul 2 08:33:44.739218 systemd[1]: Detected architecture arm64.
Jul 2 08:33:44.739228 systemd[1]: Detected first boot.
Jul 2 08:33:44.739243 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 08:33:44.739254 zram_generator::config[1064]: No configuration found.
Jul 2 08:33:44.739269 systemd[1]: Populated /etc with preset unit settings.
Jul 2 08:33:44.739280 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 08:33:44.739295 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 2 08:33:44.739307 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 08:33:44.739317 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 08:33:44.739328 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 08:33:44.739338 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 08:33:44.739348 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 08:33:44.739359 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 08:33:44.739370 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 08:33:44.739380 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 08:33:44.739392 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 08:33:44.739403 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 08:33:44.739414 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 08:33:44.739424 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 08:33:44.739434 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 08:33:44.739445 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 08:33:44.739455 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 2 08:33:44.739465 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 08:33:44.739476 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 08:33:44.739488 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 08:33:44.739499 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 08:33:44.739509 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 08:33:44.739521 systemd[1]: Reached target swap.target - Swaps.
Jul 2 08:33:44.739531 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 08:33:44.739541 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 08:33:44.739635 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 08:33:44.739650 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 08:33:44.739664 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 08:33:44.739681 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 08:33:44.739693 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 08:33:44.739704 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 08:33:44.739714 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 08:33:44.739725 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 08:33:44.739735 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 08:33:44.739745 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 08:33:44.739760 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 08:33:44.739772 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 08:33:44.739783 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 08:33:44.739793 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 08:33:44.739803 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 08:33:44.739814 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 08:33:44.739824 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 08:33:44.739834 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 08:33:44.739847 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 08:33:44.739857 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 08:33:44.739870 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 08:33:44.739881 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 08:33:44.739891 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 2 08:33:44.739902 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jul 2 08:33:44.739912 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 08:33:44.739923 kernel: fuse: init (API version 7.39)
Jul 2 08:33:44.739932 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 08:33:44.739942 kernel: loop: module loaded
Jul 2 08:33:44.739954 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 08:33:44.739964 kernel: ACPI: bus type drm_connector registered
Jul 2 08:33:44.739973 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 08:33:44.739984 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 08:33:44.739994 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 08:33:44.740005 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 08:33:44.740015 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 08:33:44.740025 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 08:33:44.740035 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 08:33:44.740047 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 08:33:44.740057 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 08:33:44.740087 systemd-journald[1142]: Collecting audit messages is disabled.
Jul 2 08:33:44.740108 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 08:33:44.740119 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 08:33:44.740132 systemd-journald[1142]: Journal started
Jul 2 08:33:44.740152 systemd-journald[1142]: Runtime Journal (/run/log/journal/692d6f59351841f59e89316c3fcc6d5f) is 5.9M, max 47.3M, 41.4M free.
Jul 2 08:33:44.742361 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 08:33:44.743332 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 08:33:44.744826 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 08:33:44.744980 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 08:33:44.746431 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 08:33:44.746612 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 08:33:44.747884 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 08:33:44.748035 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 08:33:44.749403 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 08:33:44.749568 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 08:33:44.750831 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 08:33:44.751033 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 08:33:44.752529 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 08:33:44.754065 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 08:33:44.755532 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 08:33:44.766790 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 08:33:44.775627 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 08:33:44.777625 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 08:33:44.778880 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 08:33:44.780430 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 08:33:44.782634 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 08:33:44.783908 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 08:33:44.785709 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 08:33:44.787367 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 08:33:44.792693 systemd-journald[1142]: Time spent on flushing to /var/log/journal/692d6f59351841f59e89316c3fcc6d5f is 23.430ms for 844 entries.
Jul 2 08:33:44.792693 systemd-journald[1142]: System Journal (/var/log/journal/692d6f59351841f59e89316c3fcc6d5f) is 8.0M, max 195.6M, 187.6M free.
Jul 2 08:33:44.820029 systemd-journald[1142]: Received client request to flush runtime journal.
Jul 2 08:33:44.800692 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 08:33:44.803803 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 08:33:44.810682 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 08:33:44.811971 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 08:33:44.813216 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 08:33:44.814724 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 08:33:44.817470 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 08:33:44.827734 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 08:33:44.829258 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 08:33:44.830915 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 08:33:44.832536 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Jul 2 08:33:44.832549 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Jul 2 08:33:44.836794 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 08:33:44.846836 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 08:33:44.848135 udevadm[1204]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 2 08:33:44.865814 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 08:33:44.876692 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 08:33:44.887770 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
Jul 2 08:33:44.887787 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
Jul 2 08:33:44.891237 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 08:33:45.185929 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 08:33:45.196693 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 08:33:45.217789 systemd-udevd[1225]: Using default interface naming scheme 'v255'.
Jul 2 08:33:45.230148 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 08:33:45.242450 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 08:33:45.265582 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1235)
Jul 2 08:33:45.272629 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1227)
Jul 2 08:33:45.274745 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 08:33:45.276352 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Jul 2 08:33:45.314288 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 08:33:45.318020 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 08:33:45.365800 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 08:33:45.371504 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 08:33:45.373488 systemd-networkd[1234]: lo: Link UP
Jul 2 08:33:45.373493 systemd-networkd[1234]: lo: Gained carrier
Jul 2 08:33:45.374157 systemd-networkd[1234]: Enumeration completed
Jul 2 08:33:45.374676 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 08:33:45.375694 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 08:33:45.377876 systemd-networkd[1234]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 08:33:45.377880 systemd-networkd[1234]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 08:33:45.378451 systemd-networkd[1234]: eth0: Link UP
Jul 2 08:33:45.378461 systemd-networkd[1234]: eth0: Gained carrier
Jul 2 08:33:45.378473 systemd-networkd[1234]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 08:33:45.380724 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 08:33:45.395005 lvm[1262]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 08:33:45.397669 systemd-networkd[1234]: eth0: DHCPv4 address 10.0.0.141/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 08:33:45.402596 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 08:33:45.433861 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 08:33:45.435247 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 08:33:45.443805 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 08:33:45.448352 lvm[1271]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 08:33:45.481938 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 08:33:45.483360 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 08:33:45.484607 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 08:33:45.484639 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 08:33:45.485622 systemd[1]: Reached target machines.target - Containers.
Jul 2 08:33:45.487501 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 08:33:45.500706 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 08:33:45.502848 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 08:33:45.503919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 08:33:45.504758 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 08:33:45.506895 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 08:33:45.511473 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 08:33:45.514477 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 08:33:45.518793 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 08:33:45.522572 kernel: loop0: detected capacity change from 0 to 59672
Jul 2 08:33:45.523611 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 08:33:45.529380 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 08:33:45.530201 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 08:33:45.536569 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 08:33:45.570606 kernel: loop1: detected capacity change from 0 to 113672
Jul 2 08:33:45.618584 kernel: loop2: detected capacity change from 0 to 193208
Jul 2 08:33:45.661642 kernel: loop3: detected capacity change from 0 to 59672
Jul 2 08:33:45.671613 kernel: loop4: detected capacity change from 0 to 113672
Jul 2 08:33:45.678592 kernel: loop5: detected capacity change from 0 to 193208
Jul 2 08:33:45.688628 (sd-merge)[1297]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 2 08:33:45.689015 (sd-merge)[1297]: Merged extensions into '/usr'.
Jul 2 08:33:45.692409 systemd[1]: Reloading requested from client PID 1283 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 08:33:45.692423 systemd[1]: Reloading...
Jul 2 08:33:45.727707 zram_generator::config[1323]: No configuration found.
Jul 2 08:33:45.757503 ldconfig[1280]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 08:33:45.821741 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 08:33:45.865444 systemd[1]: Reloading finished in 172 ms.
Jul 2 08:33:45.884329 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 08:33:45.885520 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 08:33:45.903771 systemd[1]: Starting ensure-sysext.service...
Jul 2 08:33:45.905484 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 08:33:45.909006 systemd[1]: Reloading requested from client PID 1364 ('systemctl') (unit ensure-sysext.service)...
Jul 2 08:33:45.909022 systemd[1]: Reloading...
Jul 2 08:33:45.921429 systemd-tmpfiles[1371]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 08:33:45.921719 systemd-tmpfiles[1371]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 08:33:45.922328 systemd-tmpfiles[1371]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 08:33:45.922541 systemd-tmpfiles[1371]: ACLs are not supported, ignoring.
Jul 2 08:33:45.922603 systemd-tmpfiles[1371]: ACLs are not supported, ignoring.
Jul 2 08:33:45.925385 systemd-tmpfiles[1371]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 08:33:45.925400 systemd-tmpfiles[1371]: Skipping /boot
Jul 2 08:33:45.932333 systemd-tmpfiles[1371]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 08:33:45.932350 systemd-tmpfiles[1371]: Skipping /boot
Jul 2 08:33:45.948579 zram_generator::config[1395]: No configuration found.
Jul 2 08:33:46.032198 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 08:33:46.076076 systemd[1]: Reloading finished in 166 ms.
Jul 2 08:33:46.088181 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 08:33:46.110199 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 08:33:46.112015 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 08:33:46.117420 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 08:33:46.120366 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 08:33:46.123384 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 08:33:46.131759 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 08:33:46.132963 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 08:33:46.137251 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 08:33:46.140859 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 08:33:46.142594 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 08:33:46.145376 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 08:33:46.147506 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 08:33:46.147657 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 08:33:46.149735 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 08:33:46.149870 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 08:33:46.151802 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 08:33:46.151968 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 08:33:46.159034 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 08:33:46.167822 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 08:33:46.172793 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 08:33:46.174582 augenrules[1475]: No rules
Jul 2 08:33:46.174949 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 08:33:46.177363 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 08:33:46.180674 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 08:33:46.182692 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 08:33:46.184334 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 08:33:46.185975 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 08:33:46.187604 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 08:33:46.187753 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 08:33:46.189357 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 08:33:46.189497 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 08:33:46.191308 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 08:33:46.191489 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 08:33:46.193106 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 08:33:46.199757 systemd-resolved[1444]: Positive Trust Anchors:
Jul 2 08:33:46.199773 systemd-resolved[1444]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 08:33:46.199808 systemd-resolved[1444]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 08:33:46.202466 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 08:33:46.205419 systemd-resolved[1444]: Defaulting to hostname 'linux'.
Jul 2 08:33:46.210849 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 08:33:46.212802 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 08:33:46.214664 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 08:33:46.217141 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 08:33:46.218769 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 08:33:46.219708 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 08:33:46.220297 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 08:33:46.221994 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 08:33:46.222126 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 08:33:46.223654 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 08:33:46.223786 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 08:33:46.225139 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 08:33:46.225267 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 08:33:46.226795 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 08:33:46.226978 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 08:33:46.231046 systemd[1]: Finished ensure-sysext.service.
Jul 2 08:33:46.233762 systemd[1]: Reached target network.target - Network.
Jul 2 08:33:46.234697 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 08:33:46.235939 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 08:33:46.235994 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 08:33:46.244742 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 2 08:33:46.283623 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 2 08:33:46.284466 systemd-timesyncd[1512]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 2 08:33:46.284511 systemd-timesyncd[1512]: Initial clock synchronization to Tue 2024-07-02 08:33:46.479665 UTC. Jul 2 08:33:46.285108 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 08:33:46.286319 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 08:33:46.287545 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 08:33:46.288757 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 08:33:46.289957 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 08:33:46.289987 systemd[1]: Reached target paths.target - Path Units. Jul 2 08:33:46.290853 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 08:33:46.291960 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 08:33:46.293088 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 08:33:46.294272 systemd[1]: Reached target timers.target - Timer Units. Jul 2 08:33:46.296662 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 08:33:46.298986 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 08:33:46.300954 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 08:33:46.309410 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 08:33:46.310228 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 08:33:46.310946 systemd[1]: Reached target basic.target - Basic System. Jul 2 08:33:46.311723 systemd[1]: System is tainted: cgroupsv1 Jul 2 08:33:46.311754 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jul 2 08:33:46.311774 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 08:33:46.312794 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 08:33:46.314498 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 08:33:46.316144 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 08:33:46.318329 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 08:33:46.322207 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 08:33:46.326784 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 08:33:46.332324 jq[1518]: false Jul 2 08:33:46.331796 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 08:33:46.337153 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 08:33:46.340157 extend-filesystems[1520]: Found loop3 Jul 2 08:33:46.340286 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jul 2 08:33:46.341924 extend-filesystems[1520]: Found loop4 Jul 2 08:33:46.344616 extend-filesystems[1520]: Found loop5 Jul 2 08:33:46.344616 extend-filesystems[1520]: Found vda Jul 2 08:33:46.348625 extend-filesystems[1520]: Found vda1 Jul 2 08:33:46.348625 extend-filesystems[1520]: Found vda2 Jul 2 08:33:46.348625 extend-filesystems[1520]: Found vda3 Jul 2 08:33:46.348625 extend-filesystems[1520]: Found usr Jul 2 08:33:46.348625 extend-filesystems[1520]: Found vda4 Jul 2 08:33:46.348625 extend-filesystems[1520]: Found vda6 Jul 2 08:33:46.348625 extend-filesystems[1520]: Found vda7 Jul 2 08:33:46.348625 extend-filesystems[1520]: Found vda9 Jul 2 08:33:46.348625 extend-filesystems[1520]: Checking size of /dev/vda9 Jul 2 08:33:46.345438 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 08:33:46.353148 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 08:33:46.354373 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 08:33:46.358935 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 08:33:46.362827 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 08:33:46.363023 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 08:33:46.363274 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 08:33:46.363453 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 08:33:46.367696 extend-filesystems[1520]: Resized partition /dev/vda9 Jul 2 08:33:46.373410 jq[1541]: true Jul 2 08:33:46.375940 extend-filesystems[1546]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 08:33:46.377025 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 08:33:46.377218 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 2 08:33:46.380701 dbus-daemon[1517]: [system] SELinux support is enabled Jul 2 08:33:46.385888 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 2 08:33:46.383193 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 08:33:46.392567 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1232) Jul 2 08:33:46.398726 (ntainerd)[1551]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 2 08:33:46.401840 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 08:33:46.401878 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 08:33:46.404782 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 08:33:46.404811 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 08:33:46.406879 update_engine[1539]: I0702 08:33:46.406630 1539 main.cc:92] Flatcar Update Engine starting Jul 2 08:33:46.408748 tar[1548]: linux-arm64/helm Jul 2 08:33:46.410071 jq[1550]: true Jul 2 08:33:46.411225 systemd[1]: Started update-engine.service - Update Engine. Jul 2 08:33:46.411341 update_engine[1539]: I0702 08:33:46.411304 1539 update_check_scheduler.cc:74] Next update check in 7m18s Jul 2 08:33:46.414210 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 08:33:46.416923 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 2 08:33:46.428052 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 2 08:33:46.441178 systemd-logind[1534]: Watching system buttons on /dev/input/event0 (Power Button) Jul 2 08:33:46.443022 extend-filesystems[1546]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 08:33:46.443022 extend-filesystems[1546]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 08:33:46.443022 extend-filesystems[1546]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 2 08:33:46.451317 extend-filesystems[1520]: Resized filesystem in /dev/vda9 Jul 2 08:33:46.443954 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 08:33:46.444191 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 08:33:46.445720 systemd-logind[1534]: New seat seat0. Jul 2 08:33:46.453653 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 08:33:46.487093 bash[1580]: Updated "/home/core/.ssh/authorized_keys" Jul 2 08:33:46.488879 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 08:33:46.492294 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 2 08:33:46.512660 locksmithd[1562]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 08:33:46.612263 containerd[1551]: time="2024-07-02T08:33:46.612118920Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 08:33:46.618692 sshd_keygen[1543]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 08:33:46.638934 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 08:33:46.641515 containerd[1551]: time="2024-07-02T08:33:46.641474520Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jul 2 08:33:46.641628 containerd[1551]: time="2024-07-02T08:33:46.641606480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:33:46.643587 containerd[1551]: time="2024-07-02T08:33:46.642745880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:33:46.643587 containerd[1551]: time="2024-07-02T08:33:46.642779040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:33:46.643587 containerd[1551]: time="2024-07-02T08:33:46.642992240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:33:46.643587 containerd[1551]: time="2024-07-02T08:33:46.643008760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 08:33:46.643587 containerd[1551]: time="2024-07-02T08:33:46.643074080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 08:33:46.643587 containerd[1551]: time="2024-07-02T08:33:46.643114440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:33:46.643587 containerd[1551]: time="2024-07-02T08:33:46.643125600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jul 2 08:33:46.643587 containerd[1551]: time="2024-07-02T08:33:46.643180760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:33:46.643587 containerd[1551]: time="2024-07-02T08:33:46.643355400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 08:33:46.643587 containerd[1551]: time="2024-07-02T08:33:46.643372280Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 08:33:46.643587 containerd[1551]: time="2024-07-02T08:33:46.643382160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:33:46.643829 containerd[1551]: time="2024-07-02T08:33:46.643494200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:33:46.643829 containerd[1551]: time="2024-07-02T08:33:46.643507880Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 08:33:46.643829 containerd[1551]: time="2024-07-02T08:33:46.643550560Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 08:33:46.643829 containerd[1551]: time="2024-07-02T08:33:46.643580120Z" level=info msg="metadata content store policy set" policy=shared Jul 2 08:33:46.646775 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 08:33:46.647205 containerd[1551]: time="2024-07-02T08:33:46.647175800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jul 2 08:33:46.647244 containerd[1551]: time="2024-07-02T08:33:46.647210320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 08:33:46.647244 containerd[1551]: time="2024-07-02T08:33:46.647224640Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 08:33:46.647300 containerd[1551]: time="2024-07-02T08:33:46.647256160Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 08:33:46.647300 containerd[1551]: time="2024-07-02T08:33:46.647271680Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 08:33:46.647300 containerd[1551]: time="2024-07-02T08:33:46.647281360Z" level=info msg="NRI interface is disabled by configuration." Jul 2 08:33:46.647300 containerd[1551]: time="2024-07-02T08:33:46.647295280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 08:33:46.647422 containerd[1551]: time="2024-07-02T08:33:46.647405240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 08:33:46.647448 containerd[1551]: time="2024-07-02T08:33:46.647426960Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 08:33:46.647448 containerd[1551]: time="2024-07-02T08:33:46.647440120Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 08:33:46.647482 containerd[1551]: time="2024-07-02T08:33:46.647459280Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 08:33:46.647482 containerd[1551]: time="2024-07-02T08:33:46.647472760Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jul 2 08:33:46.647514 containerd[1551]: time="2024-07-02T08:33:46.647488360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 08:33:46.647514 containerd[1551]: time="2024-07-02T08:33:46.647500600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 08:33:46.647561 containerd[1551]: time="2024-07-02T08:33:46.647513640Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 08:33:46.647561 containerd[1551]: time="2024-07-02T08:33:46.647527240Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 08:33:46.647561 containerd[1551]: time="2024-07-02T08:33:46.647539920Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 08:33:46.647609 containerd[1551]: time="2024-07-02T08:33:46.647562920Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 08:33:46.647609 containerd[1551]: time="2024-07-02T08:33:46.647575800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 08:33:46.647690 containerd[1551]: time="2024-07-02T08:33:46.647674200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 08:33:46.648208 containerd[1551]: time="2024-07-02T08:33:46.648180720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 08:33:46.648251 containerd[1551]: time="2024-07-02T08:33:46.648216320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jul 2 08:33:46.648251 containerd[1551]: time="2024-07-02T08:33:46.648230800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 08:33:46.648298 containerd[1551]: time="2024-07-02T08:33:46.648251840Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 08:33:46.648701 containerd[1551]: time="2024-07-02T08:33:46.648371680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 08:33:46.648701 containerd[1551]: time="2024-07-02T08:33:46.648386600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 08:33:46.648701 containerd[1551]: time="2024-07-02T08:33:46.648400600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 08:33:46.648701 containerd[1551]: time="2024-07-02T08:33:46.648412040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 08:33:46.648701 containerd[1551]: time="2024-07-02T08:33:46.648423440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 08:33:46.648701 containerd[1551]: time="2024-07-02T08:33:46.648436040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 08:33:46.648701 containerd[1551]: time="2024-07-02T08:33:46.648447880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 08:33:46.648701 containerd[1551]: time="2024-07-02T08:33:46.648459400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 08:33:46.648701 containerd[1551]: time="2024-07-02T08:33:46.648472000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jul 2 08:33:46.648964 containerd[1551]: time="2024-07-02T08:33:46.648874320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 08:33:46.648964 containerd[1551]: time="2024-07-02T08:33:46.648898280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 08:33:46.648964 containerd[1551]: time="2024-07-02T08:33:46.648910400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 08:33:46.648964 containerd[1551]: time="2024-07-02T08:33:46.648922080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 08:33:46.648964 containerd[1551]: time="2024-07-02T08:33:46.648934360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 08:33:46.648964 containerd[1551]: time="2024-07-02T08:33:46.648947280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 08:33:46.648964 containerd[1551]: time="2024-07-02T08:33:46.648960560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 08:33:46.649094 containerd[1551]: time="2024-07-02T08:33:46.648971440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 08:33:46.649383 containerd[1551]: time="2024-07-02T08:33:46.649323880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 08:33:46.649488 containerd[1551]: time="2024-07-02T08:33:46.649386520Z" level=info msg="Connect containerd service" Jul 2 08:33:46.649488 containerd[1551]: time="2024-07-02T08:33:46.649412360Z" level=info msg="using legacy CRI server" Jul 2 08:33:46.649488 containerd[1551]: time="2024-07-02T08:33:46.649418480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 08:33:46.649597 containerd[1551]: time="2024-07-02T08:33:46.649582360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 08:33:46.650121 containerd[1551]: time="2024-07-02T08:33:46.650080920Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 08:33:46.650149 containerd[1551]: time="2024-07-02T08:33:46.650131600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 08:33:46.650168 containerd[1551]: time="2024-07-02T08:33:46.650148520Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 08:33:46.650168 containerd[1551]: time="2024-07-02T08:33:46.650158960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 08:33:46.650215 containerd[1551]: time="2024-07-02T08:33:46.650171560Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 08:33:46.650745 containerd[1551]: time="2024-07-02T08:33:46.650719040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 08:33:46.650789 containerd[1551]: time="2024-07-02T08:33:46.650770360Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 08:33:46.651114 containerd[1551]: time="2024-07-02T08:33:46.650722360Z" level=info msg="Start subscribing containerd event" Jul 2 08:33:46.651114 containerd[1551]: time="2024-07-02T08:33:46.650904160Z" level=info msg="Start recovering state" Jul 2 08:33:46.651114 containerd[1551]: time="2024-07-02T08:33:46.650970680Z" level=info msg="Start event monitor" Jul 2 08:33:46.651114 containerd[1551]: time="2024-07-02T08:33:46.650986680Z" level=info msg="Start snapshots syncer" Jul 2 08:33:46.651114 containerd[1551]: time="2024-07-02T08:33:46.650995200Z" level=info msg="Start cni network conf syncer for default" Jul 2 08:33:46.651114 containerd[1551]: time="2024-07-02T08:33:46.651001760Z" level=info msg="Start streaming server" Jul 2 08:33:46.652012 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 08:33:46.653199 containerd[1551]: time="2024-07-02T08:33:46.653169080Z" level=info msg="containerd successfully booted in 0.042042s" Jul 2 08:33:46.653704 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 08:33:46.653904 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jul 2 08:33:46.663759 systemd-networkd[1234]: eth0: Gained IPv6LL Jul 2 08:33:46.665834 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 08:33:46.667135 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 08:33:46.668961 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 08:33:46.671897 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 2 08:33:46.679842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:33:46.682371 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 08:33:46.683883 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 08:33:46.692777 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 08:33:46.696849 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 2 08:33:46.698817 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 08:33:46.711153 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 08:33:46.712754 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 2 08:33:46.712946 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 2 08:33:46.716267 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 08:33:46.796462 tar[1548]: linux-arm64/LICENSE Jul 2 08:33:46.796585 tar[1548]: linux-arm64/README.md Jul 2 08:33:46.806982 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 08:33:47.150312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:33:47.151885 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jul 2 08:33:47.154783 (kubelet)[1658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:33:47.156521 systemd[1]: Startup finished in 5.053s (kernel) + 2.958s (userspace) = 8.011s. Jul 2 08:33:47.622839 kubelet[1658]: E0702 08:33:47.622695 1658 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:33:47.625551 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:33:47.625771 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:33:52.813991 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 08:33:52.822791 systemd[1]: Started sshd@0-10.0.0.141:22-10.0.0.1:34088.service - OpenSSH per-connection server daemon (10.0.0.1:34088). Jul 2 08:33:52.870382 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 34088 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:33:52.872060 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:33:52.880328 systemd-logind[1534]: New session 1 of user core. Jul 2 08:33:52.881192 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 08:33:52.897862 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 08:33:52.907649 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 08:33:52.909769 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 2 08:33:52.916261 (systemd)[1678]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:33:52.996521 systemd[1678]: Queued start job for default target default.target. Jul 2 08:33:52.996896 systemd[1678]: Created slice app.slice - User Application Slice. Jul 2 08:33:52.996932 systemd[1678]: Reached target paths.target - Paths. Jul 2 08:33:52.996944 systemd[1678]: Reached target timers.target - Timers. Jul 2 08:33:53.006677 systemd[1678]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 08:33:53.012703 systemd[1678]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 08:33:53.012766 systemd[1678]: Reached target sockets.target - Sockets. Jul 2 08:33:53.012778 systemd[1678]: Reached target basic.target - Basic System. Jul 2 08:33:53.012816 systemd[1678]: Reached target default.target - Main User Target. Jul 2 08:33:53.012840 systemd[1678]: Startup finished in 91ms. Jul 2 08:33:53.013148 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 08:33:53.014517 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 08:33:53.077837 systemd[1]: Started sshd@1-10.0.0.141:22-10.0.0.1:34090.service - OpenSSH per-connection server daemon (10.0.0.1:34090). Jul 2 08:33:53.109275 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 34090 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:33:53.110553 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:33:53.115513 systemd-logind[1534]: New session 2 of user core. Jul 2 08:33:53.122837 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 08:33:53.174842 sshd[1690]: pam_unix(sshd:session): session closed for user core Jul 2 08:33:53.190837 systemd[1]: Started sshd@2-10.0.0.141:22-10.0.0.1:34100.service - OpenSSH per-connection server daemon (10.0.0.1:34100). 
Jul 2 08:33:53.191228 systemd[1]: sshd@1-10.0.0.141:22-10.0.0.1:34090.service: Deactivated successfully. Jul 2 08:33:53.193354 systemd-logind[1534]: Session 2 logged out. Waiting for processes to exit. Jul 2 08:33:53.194045 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 08:33:53.195351 systemd-logind[1534]: Removed session 2. Jul 2 08:33:53.222690 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 34100 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:33:53.224241 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:33:53.228084 systemd-logind[1534]: New session 3 of user core. Jul 2 08:33:53.235793 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 08:33:53.285116 sshd[1695]: pam_unix(sshd:session): session closed for user core Jul 2 08:33:53.294791 systemd[1]: Started sshd@3-10.0.0.141:22-10.0.0.1:34110.service - OpenSSH per-connection server daemon (10.0.0.1:34110). Jul 2 08:33:53.295243 systemd[1]: sshd@2-10.0.0.141:22-10.0.0.1:34100.service: Deactivated successfully. Jul 2 08:33:53.296551 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 08:33:53.297632 systemd-logind[1534]: Session 3 logged out. Waiting for processes to exit. Jul 2 08:33:53.298742 systemd-logind[1534]: Removed session 3. Jul 2 08:33:53.325482 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 34110 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:33:53.326726 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:33:53.330627 systemd-logind[1534]: New session 4 of user core. Jul 2 08:33:53.349820 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 08:33:53.406316 sshd[1703]: pam_unix(sshd:session): session closed for user core Jul 2 08:33:53.413810 systemd[1]: Started sshd@4-10.0.0.141:22-10.0.0.1:34126.service - OpenSSH per-connection server daemon (10.0.0.1:34126). 
Jul 2 08:33:53.414232 systemd[1]: sshd@3-10.0.0.141:22-10.0.0.1:34110.service: Deactivated successfully.
Jul 2 08:33:53.415888 systemd-logind[1534]: Session 4 logged out. Waiting for processes to exit.
Jul 2 08:33:53.416515 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 08:33:53.417887 systemd-logind[1534]: Removed session 4.
Jul 2 08:33:53.444983 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 34126 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:33:53.446110 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:33:53.450087 systemd-logind[1534]: New session 5 of user core.
Jul 2 08:33:53.458818 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 2 08:33:53.524017 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 2 08:33:53.524283 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 08:33:53.539587 sudo[1718]: pam_unix(sudo:session): session closed for user root
Jul 2 08:33:53.541375 sshd[1711]: pam_unix(sshd:session): session closed for user core
Jul 2 08:33:53.559856 systemd[1]: Started sshd@5-10.0.0.141:22-10.0.0.1:34142.service - OpenSSH per-connection server daemon (10.0.0.1:34142).
Jul 2 08:33:53.560247 systemd[1]: sshd@4-10.0.0.141:22-10.0.0.1:34126.service: Deactivated successfully.
Jul 2 08:33:53.562660 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 08:33:53.563148 systemd-logind[1534]: Session 5 logged out. Waiting for processes to exit.
Jul 2 08:33:53.564170 systemd-logind[1534]: Removed session 5.
Jul 2 08:33:53.591527 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 34142 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:33:53.592710 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:33:53.596420 systemd-logind[1534]: New session 6 of user core.
Jul 2 08:33:53.604806 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 2 08:33:53.657151 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 2 08:33:53.657395 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 08:33:53.660491 sudo[1728]: pam_unix(sudo:session): session closed for user root
Jul 2 08:33:53.664883 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 2 08:33:53.665117 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 08:33:53.685934 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 2 08:33:53.687240 auditctl[1731]: No rules
Jul 2 08:33:53.688074 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 2 08:33:53.688314 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 2 08:33:53.689995 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 08:33:53.712579 augenrules[1750]: No rules
Jul 2 08:33:53.713801 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 08:33:53.715741 sudo[1727]: pam_unix(sudo:session): session closed for user root
Jul 2 08:33:53.717272 sshd[1720]: pam_unix(sshd:session): session closed for user core
Jul 2 08:33:53.726780 systemd[1]: Started sshd@6-10.0.0.141:22-10.0.0.1:34144.service - OpenSSH per-connection server daemon (10.0.0.1:34144).
Jul 2 08:33:53.727124 systemd[1]: sshd@5-10.0.0.141:22-10.0.0.1:34142.service: Deactivated successfully.
Jul 2 08:33:53.729369 systemd-logind[1534]: Session 6 logged out. Waiting for processes to exit.
Jul 2 08:33:53.729456 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 08:33:53.730969 systemd-logind[1534]: Removed session 6.
Jul 2 08:33:53.759432 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 34144 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:33:53.760544 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:33:53.764281 systemd-logind[1534]: New session 7 of user core.
Jul 2 08:33:53.775798 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 2 08:33:53.826009 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 08:33:53.826254 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 08:33:53.925996 (dockerd)[1773]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 2 08:33:53.926415 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 2 08:33:54.169726 dockerd[1773]: time="2024-07-02T08:33:54.169661850Z" level=info msg="Starting up"
Jul 2 08:33:54.346012 dockerd[1773]: time="2024-07-02T08:33:54.345918447Z" level=info msg="Loading containers: start."
Jul 2 08:33:54.414599 kernel: Initializing XFRM netlink socket
Jul 2 08:33:54.484546 systemd-networkd[1234]: docker0: Link UP
Jul 2 08:33:54.502821 dockerd[1773]: time="2024-07-02T08:33:54.502773551Z" level=info msg="Loading containers: done."
Jul 2 08:33:54.555843 dockerd[1773]: time="2024-07-02T08:33:54.555791222Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 08:33:54.556016 dockerd[1773]: time="2024-07-02T08:33:54.555988213Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jul 2 08:33:54.556049 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2411495620-merged.mount: Deactivated successfully.
Jul 2 08:33:54.556143 dockerd[1773]: time="2024-07-02T08:33:54.556103265Z" level=info msg="Daemon has completed initialization"
Jul 2 08:33:54.582080 dockerd[1773]: time="2024-07-02T08:33:54.582002452Z" level=info msg="API listen on /run/docker.sock"
Jul 2 08:33:54.582191 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 2 08:33:55.126220 containerd[1551]: time="2024-07-02T08:33:55.126178754Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\""
Jul 2 08:33:55.753699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2035195143.mount: Deactivated successfully.
Jul 2 08:33:57.762780 containerd[1551]: time="2024-07-02T08:33:57.762733810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:33:57.763357 containerd[1551]: time="2024-07-02T08:33:57.763304733Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=31671540"
Jul 2 08:33:57.763938 containerd[1551]: time="2024-07-02T08:33:57.763912974Z" level=info msg="ImageCreate event name:\"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:33:57.767262 containerd[1551]: time="2024-07-02T08:33:57.767215406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:33:57.768893 containerd[1551]: time="2024-07-02T08:33:57.768854746Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"31668338\" in 2.642630192s"
Jul 2 08:33:57.770581 containerd[1551]: time="2024-07-02T08:33:57.768990613Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\""
Jul 2 08:33:57.790701 containerd[1551]: time="2024-07-02T08:33:57.790664468Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\""
Jul 2 08:33:57.876094 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 08:33:57.887708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:33:57.977238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:33:57.981072 (kubelet)[1985]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 08:33:58.035045 kubelet[1985]: E0702 08:33:58.034927 1985 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:33:58.039142 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:33:58.039316 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:33:59.122035 containerd[1551]: time="2024-07-02T08:33:59.121990140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:33:59.122586 containerd[1551]: time="2024-07-02T08:33:59.122364572Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=28893120"
Jul 2 08:33:59.123125 containerd[1551]: time="2024-07-02T08:33:59.123094543Z" level=info msg="ImageCreate event name:\"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:33:59.126093 containerd[1551]: time="2024-07-02T08:33:59.126036859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:33:59.127247 containerd[1551]: time="2024-07-02T08:33:59.127193960Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"30445463\" in 1.33648752s"
Jul 2 08:33:59.127247 containerd[1551]: time="2024-07-02T08:33:59.127230458Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\""
Jul 2 08:33:59.145820 containerd[1551]: time="2024-07-02T08:33:59.145747374Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\""
Jul 2 08:34:00.112075 containerd[1551]: time="2024-07-02T08:34:00.111824642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:34:00.112768 containerd[1551]: time="2024-07-02T08:34:00.112480010Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=15358440"
Jul 2 08:34:00.113571 containerd[1551]: time="2024-07-02T08:34:00.113477362Z" level=info msg="ImageCreate event name:\"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:34:00.116640 containerd[1551]: time="2024-07-02T08:34:00.116593229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:34:00.117635 containerd[1551]: time="2024-07-02T08:34:00.117600343Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"16910801\" in 971.815431ms"
Jul 2 08:34:00.117635 containerd[1551]: time="2024-07-02T08:34:00.117631718Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\""
Jul 2 08:34:00.135902 containerd[1551]: time="2024-07-02T08:34:00.135874342Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\""
Jul 2 08:34:01.096881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount950228841.mount: Deactivated successfully.
Jul 2 08:34:01.377771 containerd[1551]: time="2024-07-02T08:34:01.377611244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:34:01.378464 containerd[1551]: time="2024-07-02T08:34:01.378418873Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=24772463"
Jul 2 08:34:01.379151 containerd[1551]: time="2024-07-02T08:34:01.379110506Z" level=info msg="ImageCreate event name:\"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:34:01.381480 containerd[1551]: time="2024-07-02T08:34:01.381439521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:34:01.382109 containerd[1551]: time="2024-07-02T08:34:01.381928635Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"24771480\" in 1.246021718s"
Jul 2 08:34:01.382109 containerd[1551]: time="2024-07-02T08:34:01.381957463Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\""
Jul 2 08:34:01.399828 containerd[1551]: time="2024-07-02T08:34:01.399786164Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 08:34:01.766594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3849352803.mount: Deactivated successfully.
Jul 2 08:34:01.770504 containerd[1551]: time="2024-07-02T08:34:01.770462027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:34:01.771199 containerd[1551]: time="2024-07-02T08:34:01.771144346Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Jul 2 08:34:01.771935 containerd[1551]: time="2024-07-02T08:34:01.771869545Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:34:01.777211 containerd[1551]: time="2024-07-02T08:34:01.777136738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:34:01.778785 containerd[1551]: time="2024-07-02T08:34:01.778757336Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 378.926083ms"
Jul 2 08:34:01.778842 containerd[1551]: time="2024-07-02T08:34:01.778789496Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jul 2 08:34:01.797373 containerd[1551]: time="2024-07-02T08:34:01.797344120Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jul 2 08:34:02.250848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount188687429.mount: Deactivated successfully.
Jul 2 08:34:03.555780 containerd[1551]: time="2024-07-02T08:34:03.555713290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:34:03.556957 containerd[1551]: time="2024-07-02T08:34:03.556907482Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788"
Jul 2 08:34:03.557766 containerd[1551]: time="2024-07-02T08:34:03.557718814Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:34:03.561192 containerd[1551]: time="2024-07-02T08:34:03.561142292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:34:03.562278 containerd[1551]: time="2024-07-02T08:34:03.562237920Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 1.764753041s"
Jul 2 08:34:03.562278 containerd[1551]: time="2024-07-02T08:34:03.562272059Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Jul 2 08:34:03.580623 containerd[1551]: time="2024-07-02T08:34:03.580582037Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Jul 2 08:34:04.118185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3930574049.mount: Deactivated successfully.
Jul 2 08:34:04.517830 containerd[1551]: time="2024-07-02T08:34:04.517720920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:34:04.518621 containerd[1551]: time="2024-07-02T08:34:04.518534607Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558464"
Jul 2 08:34:04.519242 containerd[1551]: time="2024-07-02T08:34:04.519211429Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:34:04.521920 containerd[1551]: time="2024-07-02T08:34:04.521863180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:34:04.522670 containerd[1551]: time="2024-07-02T08:34:04.522635924Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 942.0155ms"
Jul 2 08:34:04.522733 containerd[1551]: time="2024-07-02T08:34:04.522671212Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\""
Jul 2 08:34:08.280412 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 08:34:08.289757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:34:08.376271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:34:08.379730 (kubelet)[2185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 08:34:08.419975 kubelet[2185]: E0702 08:34:08.419884 2185 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:34:08.422011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:34:08.422150 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:34:08.718356 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:34:08.735765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:34:08.752198 systemd[1]: Reloading requested from client PID 2203 ('systemctl') (unit session-7.scope)...
Jul 2 08:34:08.752217 systemd[1]: Reloading...
Jul 2 08:34:08.816628 zram_generator::config[2240]: No configuration found.
Jul 2 08:34:08.927603 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 08:34:08.977264 systemd[1]: Reloading finished in 224 ms.
Jul 2 08:34:09.011754 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 08:34:09.011811 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 08:34:09.012060 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:34:09.013505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:34:09.102491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:34:09.106138 (kubelet)[2297]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 08:34:09.152319 kubelet[2297]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:34:09.152319 kubelet[2297]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 08:34:09.152319 kubelet[2297]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:34:09.152796 kubelet[2297]: I0702 08:34:09.152695 2297 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 08:34:10.181259 kubelet[2297]: I0702 08:34:10.180926 2297 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jul 2 08:34:10.181259 kubelet[2297]: I0702 08:34:10.180955 2297 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 08:34:10.181259 kubelet[2297]: I0702 08:34:10.181151 2297 server.go:895] "Client rotation is on, will bootstrap in background"
Jul 2 08:34:10.310487 kubelet[2297]: I0702 08:34:10.310369 2297 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 08:34:10.311853 kubelet[2297]: E0702 08:34:10.311815 2297 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.141:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.141:6443: connect: connection refused
Jul 2 08:34:10.322940 kubelet[2297]: W0702 08:34:10.322910 2297 machine.go:65] Cannot read vendor id correctly, set empty.
Jul 2 08:34:10.323664 kubelet[2297]: I0702 08:34:10.323643 2297 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 08:34:10.323980 kubelet[2297]: I0702 08:34:10.323958 2297 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 08:34:10.324170 kubelet[2297]: I0702 08:34:10.324148 2297 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 08:34:10.324249 kubelet[2297]: I0702 08:34:10.324178 2297 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 08:34:10.324249 kubelet[2297]: I0702 08:34:10.324187 2297 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 08:34:10.324365 kubelet[2297]: I0702 08:34:10.324350 2297 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:34:10.325839 kubelet[2297]: I0702 08:34:10.325815 2297 kubelet.go:393] "Attempting to sync node with API server"
Jul 2 08:34:10.325839 kubelet[2297]: I0702 08:34:10.325840 2297 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 08:34:10.326271 kubelet[2297]: I0702 08:34:10.326249 2297 kubelet.go:309] "Adding apiserver pod source"
Jul 2 08:34:10.326271 kubelet[2297]: I0702 08:34:10.326270 2297 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 08:34:10.326585 kubelet[2297]: W0702 08:34:10.326346 2297 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Jul 2 08:34:10.326585 kubelet[2297]: E0702 08:34:10.326406 2297 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Jul 2 08:34:10.326828 kubelet[2297]: W0702 08:34:10.326793 2297 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.141:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Jul 2 08:34:10.326920 kubelet[2297]: E0702 08:34:10.326907 2297 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.141:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Jul 2 08:34:10.327516 kubelet[2297]: I0702 08:34:10.327494 2297 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 08:34:10.332359 kubelet[2297]: W0702 08:34:10.332336 2297 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 08:34:10.335577 kubelet[2297]: I0702 08:34:10.335297 2297 server.go:1232] "Started kubelet"
Jul 2 08:34:10.336106 kubelet[2297]: I0702 08:34:10.335677 2297 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jul 2 08:34:10.336106 kubelet[2297]: I0702 08:34:10.335777 2297 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 08:34:10.336106 kubelet[2297]: I0702 08:34:10.335883 2297 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 08:34:10.336106 kubelet[2297]: E0702 08:34:10.336041 2297 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jul 2 08:34:10.336106 kubelet[2297]: E0702 08:34:10.336061 2297 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 08:34:10.336576 kubelet[2297]: I0702 08:34:10.336542 2297 server.go:462] "Adding debug handlers to kubelet server"
Jul 2 08:34:10.336939 kubelet[2297]: I0702 08:34:10.336873 2297 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 08:34:10.338066 kubelet[2297]: E0702 08:34:10.337969 2297 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17de5859c8245a05", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.July, 2, 8, 34, 10, 335267333, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 8, 34, 10, 335267333, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.141:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.141:6443: connect: connection refused'(may retry after sleeping)
Jul 2 08:34:10.338431 kubelet[2297]: E0702 08:34:10.338413 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 2 08:34:10.338507 kubelet[2297]: I0702 08:34:10.338439 2297 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 08:34:10.338547 kubelet[2297]: I0702 08:34:10.338514 2297 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 08:34:10.338605 kubelet[2297]: I0702 08:34:10.338578 2297 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 08:34:10.338872 kubelet[2297]: W0702 08:34:10.338825 2297 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Jul 2 08:34:10.338872 kubelet[2297]: E0702 08:34:10.338871 2297 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Jul 2 08:34:10.340105 kubelet[2297]: E0702 08:34:10.340080 2297 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="200ms"
Jul 2 08:34:10.356560 kubelet[2297]: I0702 08:34:10.356505 2297 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 08:34:10.357944 kubelet[2297]: I0702 08:34:10.357923 2297 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 08:34:10.357944 kubelet[2297]: I0702 08:34:10.357944 2297 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 08:34:10.358024 kubelet[2297]: I0702 08:34:10.357960 2297 kubelet.go:2303] "Starting kubelet main sync loop"
Jul 2 08:34:10.358024 kubelet[2297]: E0702 08:34:10.358004 2297 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 08:34:10.358395 kubelet[2297]: W0702 08:34:10.358314 2297 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Jul 2 08:34:10.358395 kubelet[2297]: E0702 08:34:10.358350 2297 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Jul 2 08:34:10.373307 kubelet[2297]: I0702 08:34:10.373278 2297 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 08:34:10.373636 kubelet[2297]: I0702 08:34:10.373396 2297 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 08:34:10.373636 kubelet[2297]: I0702 08:34:10.373414 2297 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:34:10.376484 kubelet[2297]: I0702 08:34:10.376453 2297 policy_none.go:49] "None policy: Start"
Jul 2 08:34:10.377439 kubelet[2297]: I0702 08:34:10.377419 2297 memory_manager.go:169] "Starting memorymanager" policy="None"
Jul 2 08:34:10.377439 kubelet[2297]: I0702 08:34:10.377446 2297 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 08:34:10.382829 kubelet[2297]: I0702 08:34:10.382771 2297 manager.go:471] "Failed to read data from checkpoint"
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 08:34:10.383047 kubelet[2297]: I0702 08:34:10.383022 2297 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:34:10.383845 kubelet[2297]: E0702 08:34:10.383817 2297 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 08:34:10.439565 kubelet[2297]: I0702 08:34:10.439468 2297 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 08:34:10.440405 kubelet[2297]: E0702 08:34:10.440367 2297 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" Jul 2 08:34:10.458901 kubelet[2297]: I0702 08:34:10.458864 2297 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 08:34:10.459989 kubelet[2297]: I0702 08:34:10.459971 2297 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 08:34:10.460733 kubelet[2297]: I0702 08:34:10.460706 2297 topology_manager.go:215] "Topology Admit Handler" podUID="f6c14f9cd3d70869849d8cd138cad4f6" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 08:34:10.541017 kubelet[2297]: E0702 08:34:10.540981 2297 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="400ms" Jul 2 08:34:10.640514 kubelet[2297]: I0702 08:34:10.640465 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:34:10.640514 kubelet[2297]: I0702 08:34:10.640508 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:34:10.640636 kubelet[2297]: I0702 08:34:10.640530 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:34:10.640636 kubelet[2297]: I0702 08:34:10.640550 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6c14f9cd3d70869849d8cd138cad4f6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f6c14f9cd3d70869849d8cd138cad4f6\") " pod="kube-system/kube-apiserver-localhost" Jul 2 08:34:10.640636 kubelet[2297]: I0702 08:34:10.640586 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6c14f9cd3d70869849d8cd138cad4f6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f6c14f9cd3d70869849d8cd138cad4f6\") " pod="kube-system/kube-apiserver-localhost" Jul 2 08:34:10.640636 kubelet[2297]: I0702 08:34:10.640604 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:34:10.640636 kubelet[2297]: I0702 08:34:10.640623 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:34:10.640741 kubelet[2297]: I0702 08:34:10.640642 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 08:34:10.640741 kubelet[2297]: I0702 08:34:10.640667 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f6c14f9cd3d70869849d8cd138cad4f6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f6c14f9cd3d70869849d8cd138cad4f6\") " pod="kube-system/kube-apiserver-localhost" Jul 2 08:34:10.641691 kubelet[2297]: I0702 08:34:10.641664 2297 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 08:34:10.641977 kubelet[2297]: E0702 08:34:10.641950 2297 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" Jul 2 08:34:10.765987 kubelet[2297]: E0702 08:34:10.765893 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:10.765987 kubelet[2297]: E0702 08:34:10.765971 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:10.766643 kubelet[2297]: E0702 08:34:10.765894 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:10.766694 containerd[1551]: time="2024-07-02T08:34:10.766546529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jul 2 08:34:10.766963 containerd[1551]: time="2024-07-02T08:34:10.766723849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f6c14f9cd3d70869849d8cd138cad4f6,Namespace:kube-system,Attempt:0,}" Jul 2 08:34:10.767107 containerd[1551]: time="2024-07-02T08:34:10.767041048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jul 2 08:34:10.942464 kubelet[2297]: E0702 08:34:10.942332 2297 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="800ms" Jul 2 08:34:11.044205 kubelet[2297]: I0702 08:34:11.043788 2297 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 08:34:11.044205 kubelet[2297]: E0702 08:34:11.044067 2297 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" Jul 2 08:34:11.193279 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2583342365.mount: Deactivated successfully. Jul 2 08:34:11.199448 containerd[1551]: time="2024-07-02T08:34:11.199127074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:34:11.200356 containerd[1551]: time="2024-07-02T08:34:11.200322016Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 2 08:34:11.202859 containerd[1551]: time="2024-07-02T08:34:11.202816524Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:34:11.204170 containerd[1551]: time="2024-07-02T08:34:11.204140794Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:34:11.204868 containerd[1551]: time="2024-07-02T08:34:11.204847613Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:34:11.206490 containerd[1551]: time="2024-07-02T08:34:11.206456925Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 08:34:11.207440 containerd[1551]: time="2024-07-02T08:34:11.207391610Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 08:34:11.208349 containerd[1551]: time="2024-07-02T08:34:11.208311920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:34:11.209165 containerd[1551]: time="2024-07-02T08:34:11.209140380Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 442.488413ms" Jul 2 08:34:11.212647 kubelet[2297]: W0702 08:34:11.212551 2297 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused Jul 2 08:34:11.212929 kubelet[2297]: E0702 08:34:11.212662 2297 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused Jul 2 08:34:11.212955 containerd[1551]: time="2024-07-02T08:34:11.212670713Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 445.867936ms" Jul 2 08:34:11.215174 containerd[1551]: time="2024-07-02T08:34:11.215020718Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 447.901703ms" Jul 2 08:34:11.264866 kubelet[2297]: W0702 
08:34:11.264814 2297 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused Jul 2 08:34:11.264866 kubelet[2297]: E0702 08:34:11.264857 2297 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused Jul 2 08:34:11.358347 containerd[1551]: time="2024-07-02T08:34:11.357925055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:34:11.358347 containerd[1551]: time="2024-07-02T08:34:11.357984034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:34:11.358347 containerd[1551]: time="2024-07-02T08:34:11.358001811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:34:11.358347 containerd[1551]: time="2024-07-02T08:34:11.358070679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:34:11.359033 containerd[1551]: time="2024-07-02T08:34:11.358818179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:34:11.359033 containerd[1551]: time="2024-07-02T08:34:11.358863544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:34:11.359033 containerd[1551]: time="2024-07-02T08:34:11.358876597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:34:11.359033 containerd[1551]: time="2024-07-02T08:34:11.358885686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:34:11.359707 containerd[1551]: time="2024-07-02T08:34:11.358990309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:34:11.359707 containerd[1551]: time="2024-07-02T08:34:11.359429944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:34:11.359707 containerd[1551]: time="2024-07-02T08:34:11.359458492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:34:11.359707 containerd[1551]: time="2024-07-02T08:34:11.359586819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:34:11.406235 containerd[1551]: time="2024-07-02T08:34:11.406152487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"a521cadd1c4cfa2e052a8867765a1e2d3d83032847f94710e893270e56ee6648\"" Jul 2 08:34:11.409620 kubelet[2297]: E0702 08:34:11.409597 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:11.410401 containerd[1551]: time="2024-07-02T08:34:11.409963778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a074a07e54f922275713d7507f4c3116fdfd318f2ce60f6effa7ef1b1177255\"" Jul 2 08:34:11.410653 containerd[1551]: time="2024-07-02T08:34:11.409996771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f6c14f9cd3d70869849d8cd138cad4f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"2921f20f90c588ea7ed1e5f020da949150860fbd604dfb1ea01e7bf412950ae0\"" Jul 2 08:34:11.411617 kubelet[2297]: E0702 08:34:11.411527 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:11.411727 kubelet[2297]: E0702 08:34:11.411708 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:11.414355 containerd[1551]: time="2024-07-02T08:34:11.414309357Z" level=info msg="CreateContainer within sandbox \"8a074a07e54f922275713d7507f4c3116fdfd318f2ce60f6effa7ef1b1177255\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 08:34:11.415267 containerd[1551]: time="2024-07-02T08:34:11.415239477Z" level=info msg="CreateContainer within sandbox \"a521cadd1c4cfa2e052a8867765a1e2d3d83032847f94710e893270e56ee6648\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 08:34:11.416260 containerd[1551]: time="2024-07-02T08:34:11.416201589Z" level=info msg="CreateContainer within sandbox \"2921f20f90c588ea7ed1e5f020da949150860fbd604dfb1ea01e7bf412950ae0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 08:34:11.428872 containerd[1551]: time="2024-07-02T08:34:11.428835528Z" level=info msg="CreateContainer within sandbox \"8a074a07e54f922275713d7507f4c3116fdfd318f2ce60f6effa7ef1b1177255\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e0cbe6f73c507d21a94a42d72c3db2e1be43a6f4df31d90c5e37d00d78bfe709\"" Jul 2 08:34:11.429513 containerd[1551]: time="2024-07-02T08:34:11.429484610Z" level=info msg="StartContainer for \"e0cbe6f73c507d21a94a42d72c3db2e1be43a6f4df31d90c5e37d00d78bfe709\"" Jul 2 08:34:11.438792 containerd[1551]: time="2024-07-02T08:34:11.438733761Z" level=info msg="CreateContainer within sandbox \"a521cadd1c4cfa2e052a8867765a1e2d3d83032847f94710e893270e56ee6648\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f3cf44aaa89c1ef783e1f3f96c0597d72c75ea015284a6dd4a0147e12b6137c7\"" Jul 2 08:34:11.439222 containerd[1551]: time="2024-07-02T08:34:11.439192134Z" level=info msg="StartContainer for \"f3cf44aaa89c1ef783e1f3f96c0597d72c75ea015284a6dd4a0147e12b6137c7\"" Jul 2 08:34:11.440826 containerd[1551]: time="2024-07-02T08:34:11.440781146Z" level=info msg="CreateContainer within sandbox \"2921f20f90c588ea7ed1e5f020da949150860fbd604dfb1ea01e7bf412950ae0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ed2eafbd36d3e8dbc77bb297bebf6a8dc3d63348c9d0acb03b1353edf8a531e9\"" Jul 2 08:34:11.442258 
containerd[1551]: time="2024-07-02T08:34:11.441158039Z" level=info msg="StartContainer for \"ed2eafbd36d3e8dbc77bb297bebf6a8dc3d63348c9d0acb03b1353edf8a531e9\"" Jul 2 08:34:11.485995 containerd[1551]: time="2024-07-02T08:34:11.485944907Z" level=info msg="StartContainer for \"e0cbe6f73c507d21a94a42d72c3db2e1be43a6f4df31d90c5e37d00d78bfe709\" returns successfully" Jul 2 08:34:11.512608 containerd[1551]: time="2024-07-02T08:34:11.511615504Z" level=info msg="StartContainer for \"f3cf44aaa89c1ef783e1f3f96c0597d72c75ea015284a6dd4a0147e12b6137c7\" returns successfully" Jul 2 08:34:11.512608 containerd[1551]: time="2024-07-02T08:34:11.511616545Z" level=info msg="StartContainer for \"ed2eafbd36d3e8dbc77bb297bebf6a8dc3d63348c9d0acb03b1353edf8a531e9\" returns successfully" Jul 2 08:34:11.662447 kubelet[2297]: W0702 08:34:11.662306 2297 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused Jul 2 08:34:11.662447 kubelet[2297]: E0702 08:34:11.662376 2297 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused Jul 2 08:34:11.846983 kubelet[2297]: I0702 08:34:11.846954 2297 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 08:34:12.366676 kubelet[2297]: E0702 08:34:12.366645 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:12.371741 kubelet[2297]: E0702 08:34:12.371600 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:12.373962 kubelet[2297]: E0702 08:34:12.373944 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:13.378613 kubelet[2297]: E0702 08:34:13.378577 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:13.450165 kubelet[2297]: E0702 08:34:13.450120 2297 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 2 08:34:13.550545 kubelet[2297]: I0702 08:34:13.550493 2297 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 08:34:13.558827 kubelet[2297]: E0702 08:34:13.558641 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 08:34:13.659053 kubelet[2297]: E0702 08:34:13.658960 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 08:34:13.736726 kubelet[2297]: E0702 08:34:13.736695 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:13.759466 kubelet[2297]: E0702 08:34:13.759435 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 08:34:13.859976 kubelet[2297]: E0702 08:34:13.859945 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 08:34:13.960305 kubelet[2297]: E0702 08:34:13.960171 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 08:34:14.060531 
kubelet[2297]: E0702 08:34:14.060487 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 08:34:14.161055 kubelet[2297]: E0702 08:34:14.161020 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 08:34:14.261445 kubelet[2297]: E0702 08:34:14.261356 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 08:34:14.362211 kubelet[2297]: E0702 08:34:14.362174 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 08:34:14.462371 kubelet[2297]: E0702 08:34:14.462256 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 08:34:15.330632 kubelet[2297]: I0702 08:34:15.330590 2297 apiserver.go:52] "Watching apiserver" Jul 2 08:34:15.339202 kubelet[2297]: I0702 08:34:15.339168 2297 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 08:34:15.916329 kubelet[2297]: E0702 08:34:15.916303 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:16.219709 systemd[1]: Reloading requested from client PID 2571 ('systemctl') (unit session-7.scope)... Jul 2 08:34:16.219727 systemd[1]: Reloading... Jul 2 08:34:16.275574 zram_generator::config[2611]: No configuration found. 
Jul 2 08:34:16.379954 kubelet[2297]: E0702 08:34:16.379908 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:16.457804 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:34:16.515186 systemd[1]: Reloading finished in 295 ms. Jul 2 08:34:16.540881 kubelet[2297]: I0702 08:34:16.540665 2297 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:34:16.540711 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:34:16.556305 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 08:34:16.556686 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:34:16.565828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:34:16.652997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:34:16.656059 (kubelet)[2660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 08:34:16.697572 kubelet[2660]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:34:16.697572 kubelet[2660]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 08:34:16.697572 kubelet[2660]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:34:16.697917 kubelet[2660]: I0702 08:34:16.697605 2660 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 08:34:16.703721 kubelet[2660]: I0702 08:34:16.701942 2660 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 08:34:16.703721 kubelet[2660]: I0702 08:34:16.701971 2660 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 08:34:16.703721 kubelet[2660]: I0702 08:34:16.702137 2660 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 08:34:16.703721 kubelet[2660]: I0702 08:34:16.703607 2660 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 08:34:16.705250 kubelet[2660]: I0702 08:34:16.704584 2660 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:34:16.709030 kubelet[2660]: W0702 08:34:16.709002 2660 machine.go:65] Cannot read vendor id correctly, set empty. Jul 2 08:34:16.709770 kubelet[2660]: I0702 08:34:16.709750 2660 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 08:34:16.710148 kubelet[2660]: I0702 08:34:16.710136 2660 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 08:34:16.710297 kubelet[2660]: I0702 08:34:16.710275 2660 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 08:34:16.710375 kubelet[2660]: I0702 08:34:16.710311 2660 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 08:34:16.710375 kubelet[2660]: I0702 08:34:16.710320 2660 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 08:34:16.710375 kubelet[2660]: I0702 
08:34:16.710353 2660 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:34:16.710467 kubelet[2660]: I0702 08:34:16.710451 2660 kubelet.go:393] "Attempting to sync node with API server" Jul 2 08:34:16.710496 kubelet[2660]: I0702 08:34:16.710469 2660 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 08:34:16.710496 kubelet[2660]: I0702 08:34:16.710491 2660 kubelet.go:309] "Adding apiserver pod source" Jul 2 08:34:16.710496 kubelet[2660]: I0702 08:34:16.710500 2660 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 08:34:16.714592 kubelet[2660]: I0702 08:34:16.713939 2660 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 08:34:16.714592 kubelet[2660]: I0702 08:34:16.714476 2660 server.go:1232] "Started kubelet" Jul 2 08:34:16.717596 kubelet[2660]: I0702 08:34:16.715992 2660 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 08:34:16.717596 kubelet[2660]: I0702 08:34:16.716746 2660 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 08:34:16.717596 kubelet[2660]: I0702 08:34:16.716903 2660 server.go:462] "Adding debug handlers to kubelet server" Jul 2 08:34:16.717772 kubelet[2660]: I0702 08:34:16.717648 2660 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 08:34:16.720562 kubelet[2660]: I0702 08:34:16.717859 2660 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 08:34:16.725853 kubelet[2660]: I0702 08:34:16.725755 2660 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 08:34:16.726133 kubelet[2660]: I0702 08:34:16.726063 2660 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 08:34:16.730589 kubelet[2660]: I0702 08:34:16.726206 2660 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 08:34:16.730589 kubelet[2660]: 
E0702 08:34:16.728864 2660 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 08:34:16.730589 kubelet[2660]: E0702 08:34:16.728909 2660 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 08:34:16.734784 sudo[2682]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 08:34:16.735029 sudo[2682]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 08:34:16.742570 kubelet[2660]: I0702 08:34:16.740464 2660 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 08:34:16.743652 kubelet[2660]: I0702 08:34:16.743625 2660 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 08:34:16.743652 kubelet[2660]: I0702 08:34:16.743659 2660 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 08:34:16.743747 kubelet[2660]: I0702 08:34:16.743674 2660 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 08:34:16.743771 kubelet[2660]: E0702 08:34:16.743762 2660 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 08:34:16.815963 kubelet[2660]: I0702 08:34:16.815801 2660 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 08:34:16.815963 kubelet[2660]: I0702 08:34:16.815848 2660 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 08:34:16.815963 kubelet[2660]: I0702 08:34:16.815865 2660 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:34:16.816132 kubelet[2660]: I0702 08:34:16.816070 2660 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 08:34:16.816132 kubelet[2660]: I0702 08:34:16.816098 2660 state_mem.go:96] "Updated 
CPUSet assignments" assignments={} Jul 2 08:34:16.816132 kubelet[2660]: I0702 08:34:16.816105 2660 policy_none.go:49] "None policy: Start" Jul 2 08:34:16.817712 kubelet[2660]: I0702 08:34:16.817686 2660 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 08:34:16.817786 kubelet[2660]: I0702 08:34:16.817730 2660 state_mem.go:35] "Initializing new in-memory state store" Jul 2 08:34:16.817906 kubelet[2660]: I0702 08:34:16.817887 2660 state_mem.go:75] "Updated machine memory state" Jul 2 08:34:16.818982 kubelet[2660]: I0702 08:34:16.818961 2660 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 08:34:16.819768 kubelet[2660]: I0702 08:34:16.819567 2660 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:34:16.830848 kubelet[2660]: I0702 08:34:16.830824 2660 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 08:34:16.839083 kubelet[2660]: I0702 08:34:16.839049 2660 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Jul 2 08:34:16.839163 kubelet[2660]: I0702 08:34:16.839116 2660 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 08:34:16.844452 kubelet[2660]: I0702 08:34:16.844104 2660 topology_manager.go:215] "Topology Admit Handler" podUID="f6c14f9cd3d70869849d8cd138cad4f6" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 08:34:16.844452 kubelet[2660]: I0702 08:34:16.844194 2660 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 08:34:16.844452 kubelet[2660]: I0702 08:34:16.844228 2660 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 08:34:16.849853 kubelet[2660]: E0702 08:34:16.849722 2660 kubelet.go:1890] "Failed creating a mirror 
pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 2 08:34:16.928302 kubelet[2660]: I0702 08:34:16.928277 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6c14f9cd3d70869849d8cd138cad4f6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f6c14f9cd3d70869849d8cd138cad4f6\") " pod="kube-system/kube-apiserver-localhost" Jul 2 08:34:16.928489 kubelet[2660]: I0702 08:34:16.928429 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:34:16.928489 kubelet[2660]: I0702 08:34:16.928455 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 08:34:16.928489 kubelet[2660]: I0702 08:34:16.928478 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:34:16.928610 kubelet[2660]: I0702 08:34:16.928514 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f6c14f9cd3d70869849d8cd138cad4f6-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"f6c14f9cd3d70869849d8cd138cad4f6\") " pod="kube-system/kube-apiserver-localhost" Jul 2 08:34:16.928610 kubelet[2660]: I0702 08:34:16.928575 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6c14f9cd3d70869849d8cd138cad4f6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f6c14f9cd3d70869849d8cd138cad4f6\") " pod="kube-system/kube-apiserver-localhost" Jul 2 08:34:16.928610 kubelet[2660]: I0702 08:34:16.928601 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:34:16.928676 kubelet[2660]: I0702 08:34:16.928624 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:34:16.928676 kubelet[2660]: I0702 08:34:16.928644 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:34:17.151537 kubelet[2660]: E0702 08:34:17.151149 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:17.151537 
kubelet[2660]: E0702 08:34:17.151331 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:17.151537 kubelet[2660]: E0702 08:34:17.151509 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:17.195780 sudo[2682]: pam_unix(sudo:session): session closed for user root Jul 2 08:34:17.711079 kubelet[2660]: I0702 08:34:17.710992 2660 apiserver.go:52] "Watching apiserver" Jul 2 08:34:17.727042 kubelet[2660]: I0702 08:34:17.727007 2660 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 08:34:17.769010 kubelet[2660]: E0702 08:34:17.768978 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:17.770255 kubelet[2660]: E0702 08:34:17.769853 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:17.772799 kubelet[2660]: E0702 08:34:17.772254 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:17.788023 kubelet[2660]: I0702 08:34:17.787297 2660 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.787247853 podCreationTimestamp="2024-07-02 08:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:34:17.787085341 +0000 UTC m=+1.127852372" watchObservedRunningTime="2024-07-02 
08:34:17.787247853 +0000 UTC m=+1.128014884" Jul 2 08:34:17.803590 kubelet[2660]: I0702 08:34:17.803563 2660 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.803521522 podCreationTimestamp="2024-07-02 08:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:34:17.795123231 +0000 UTC m=+1.135890262" watchObservedRunningTime="2024-07-02 08:34:17.803521522 +0000 UTC m=+1.144288553" Jul 2 08:34:17.809700 kubelet[2660]: I0702 08:34:17.809669 2660 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.809642601 podCreationTimestamp="2024-07-02 08:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:34:17.803667307 +0000 UTC m=+1.144434338" watchObservedRunningTime="2024-07-02 08:34:17.809642601 +0000 UTC m=+1.150409632" Jul 2 08:34:18.770461 kubelet[2660]: E0702 08:34:18.770426 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:19.280533 sudo[1763]: pam_unix(sudo:session): session closed for user root Jul 2 08:34:19.282802 sshd[1756]: pam_unix(sshd:session): session closed for user core Jul 2 08:34:19.286229 systemd[1]: sshd@6-10.0.0.141:22-10.0.0.1:34144.service: Deactivated successfully. Jul 2 08:34:19.289187 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 08:34:19.290121 systemd-logind[1534]: Session 7 logged out. Waiting for processes to exit. Jul 2 08:34:19.290957 systemd-logind[1534]: Removed session 7. 
Jul 2 08:34:20.039877 kubelet[2660]: E0702 08:34:20.035606 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:23.883615 kubelet[2660]: E0702 08:34:23.883185 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:24.781938 kubelet[2660]: E0702 08:34:24.781848 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:25.110653 kubelet[2660]: E0702 08:34:25.110479 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:25.783681 kubelet[2660]: E0702 08:34:25.783256 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:25.783681 kubelet[2660]: E0702 08:34:25.783263 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:29.735069 kubelet[2660]: I0702 08:34:29.735046 2660 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 08:34:29.735482 containerd[1551]: time="2024-07-02T08:34:29.735339643Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 08:34:29.735772 kubelet[2660]: I0702 08:34:29.735503 2660 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 08:34:30.043068 kubelet[2660]: E0702 08:34:30.043047 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:30.436829 kubelet[2660]: I0702 08:34:30.433729 2660 topology_manager.go:215] "Topology Admit Handler" podUID="1464066c-a60e-4500-a2e8-c138f4081028" podNamespace="kube-system" podName="kube-proxy-ft84f" Jul 2 08:34:30.436829 kubelet[2660]: I0702 08:34:30.436474 2660 topology_manager.go:215] "Topology Admit Handler" podUID="b6272e50-d13e-48cb-9719-4458a0972cd9" podNamespace="kube-system" podName="cilium-rpx7t" Jul 2 08:34:30.514768 kubelet[2660]: I0702 08:34:30.514722 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-cni-path\") pod \"cilium-rpx7t\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") " pod="kube-system/cilium-rpx7t" Jul 2 08:34:30.514892 kubelet[2660]: I0702 08:34:30.514796 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6272e50-d13e-48cb-9719-4458a0972cd9-hubble-tls\") pod \"cilium-rpx7t\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") " pod="kube-system/cilium-rpx7t" Jul 2 08:34:30.514892 kubelet[2660]: I0702 08:34:30.514818 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1464066c-a60e-4500-a2e8-c138f4081028-xtables-lock\") pod \"kube-proxy-ft84f\" (UID: \"1464066c-a60e-4500-a2e8-c138f4081028\") " pod="kube-system/kube-proxy-ft84f" Jul 2 08:34:30.514892 kubelet[2660]: I0702 08:34:30.514838 2660 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsgps\" (UniqueName: \"kubernetes.io/projected/1464066c-a60e-4500-a2e8-c138f4081028-kube-api-access-tsgps\") pod \"kube-proxy-ft84f\" (UID: \"1464066c-a60e-4500-a2e8-c138f4081028\") " pod="kube-system/kube-proxy-ft84f" Jul 2 08:34:30.514892 kubelet[2660]: I0702 08:34:30.514866 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-cilium-run\") pod \"cilium-rpx7t\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") " pod="kube-system/cilium-rpx7t" Jul 2 08:34:30.514892 kubelet[2660]: I0702 08:34:30.514887 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-bpf-maps\") pod \"cilium-rpx7t\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") " pod="kube-system/cilium-rpx7t" Jul 2 08:34:30.515410 kubelet[2660]: I0702 08:34:30.514907 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-hostproc\") pod \"cilium-rpx7t\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") " pod="kube-system/cilium-rpx7t" Jul 2 08:34:30.515410 kubelet[2660]: I0702 08:34:30.514925 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6272e50-d13e-48cb-9719-4458a0972cd9-cilium-config-path\") pod \"cilium-rpx7t\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") " pod="kube-system/cilium-rpx7t" Jul 2 08:34:30.515410 kubelet[2660]: I0702 08:34:30.514954 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/1464066c-a60e-4500-a2e8-c138f4081028-kube-proxy\") pod \"kube-proxy-ft84f\" (UID: \"1464066c-a60e-4500-a2e8-c138f4081028\") " pod="kube-system/kube-proxy-ft84f" Jul 2 08:34:30.515410 kubelet[2660]: I0702 08:34:30.515105 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-cilium-cgroup\") pod \"cilium-rpx7t\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") " pod="kube-system/cilium-rpx7t" Jul 2 08:34:30.515410 kubelet[2660]: I0702 08:34:30.515143 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6272e50-d13e-48cb-9719-4458a0972cd9-clustermesh-secrets\") pod \"cilium-rpx7t\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") " pod="kube-system/cilium-rpx7t" Jul 2 08:34:30.515410 kubelet[2660]: I0702 08:34:30.515164 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-lib-modules\") pod \"cilium-rpx7t\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") " pod="kube-system/cilium-rpx7t" Jul 2 08:34:30.515540 kubelet[2660]: I0702 08:34:30.515183 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1464066c-a60e-4500-a2e8-c138f4081028-lib-modules\") pod \"kube-proxy-ft84f\" (UID: \"1464066c-a60e-4500-a2e8-c138f4081028\") " pod="kube-system/kube-proxy-ft84f" Jul 2 08:34:30.515540 kubelet[2660]: I0702 08:34:30.515202 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-host-proc-sys-net\") pod \"cilium-rpx7t\" (UID: 
\"b6272e50-d13e-48cb-9719-4458a0972cd9\") " pod="kube-system/cilium-rpx7t" Jul 2 08:34:30.515540 kubelet[2660]: I0702 08:34:30.515220 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-xtables-lock\") pod \"cilium-rpx7t\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") " pod="kube-system/cilium-rpx7t" Jul 2 08:34:30.515540 kubelet[2660]: I0702 08:34:30.515239 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-host-proc-sys-kernel\") pod \"cilium-rpx7t\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") " pod="kube-system/cilium-rpx7t" Jul 2 08:34:30.515540 kubelet[2660]: I0702 08:34:30.515266 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-etc-cni-netd\") pod \"cilium-rpx7t\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") " pod="kube-system/cilium-rpx7t" Jul 2 08:34:30.515675 kubelet[2660]: I0702 08:34:30.515327 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcxh7\" (UniqueName: \"kubernetes.io/projected/b6272e50-d13e-48cb-9719-4458a0972cd9-kube-api-access-dcxh7\") pod \"cilium-rpx7t\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") " pod="kube-system/cilium-rpx7t" Jul 2 08:34:30.678359 kubelet[2660]: I0702 08:34:30.678311 2660 topology_manager.go:215] "Topology Admit Handler" podUID="2bbd4f2d-27ba-4a67-8040-c0cb821c9493" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-n9mhg" Jul 2 08:34:30.717838 kubelet[2660]: I0702 08:34:30.717681 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2bbd4f2d-27ba-4a67-8040-c0cb821c9493-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-n9mhg\" (UID: \"2bbd4f2d-27ba-4a67-8040-c0cb821c9493\") " pod="kube-system/cilium-operator-6bc8ccdb58-n9mhg" Jul 2 08:34:30.718042 kubelet[2660]: I0702 08:34:30.718005 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp5zl\" (UniqueName: \"kubernetes.io/projected/2bbd4f2d-27ba-4a67-8040-c0cb821c9493-kube-api-access-fp5zl\") pod \"cilium-operator-6bc8ccdb58-n9mhg\" (UID: \"2bbd4f2d-27ba-4a67-8040-c0cb821c9493\") " pod="kube-system/cilium-operator-6bc8ccdb58-n9mhg" Jul 2 08:34:30.744364 kubelet[2660]: E0702 08:34:30.744337 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:30.744987 containerd[1551]: time="2024-07-02T08:34:30.744877305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rpx7t,Uid:b6272e50-d13e-48cb-9719-4458a0972cd9,Namespace:kube-system,Attempt:0,}" Jul 2 08:34:30.746005 kubelet[2660]: E0702 08:34:30.745986 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:30.746354 containerd[1551]: time="2024-07-02T08:34:30.746319925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ft84f,Uid:1464066c-a60e-4500-a2e8-c138f4081028,Namespace:kube-system,Attempt:0,}" Jul 2 08:34:30.770141 containerd[1551]: time="2024-07-02T08:34:30.769848804Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:34:30.770141 containerd[1551]: time="2024-07-02T08:34:30.769934299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:34:30.770141 containerd[1551]: time="2024-07-02T08:34:30.769965905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:34:30.770141 containerd[1551]: time="2024-07-02T08:34:30.769980787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:34:30.770479 containerd[1551]: time="2024-07-02T08:34:30.770421227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:34:30.770479 containerd[1551]: time="2024-07-02T08:34:30.770466955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:34:30.770528 containerd[1551]: time="2024-07-02T08:34:30.770485158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:34:30.770528 containerd[1551]: time="2024-07-02T08:34:30.770501321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:34:30.801032 containerd[1551]: time="2024-07-02T08:34:30.800961649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ft84f,Uid:1464066c-a60e-4500-a2e8-c138f4081028,Namespace:kube-system,Attempt:0,} returns sandbox id \"80951dc470bc19b9162c3ce357a59036eaa8e46b8c44723a551efc0d74556b49\"" Jul 2 08:34:30.801834 containerd[1551]: time="2024-07-02T08:34:30.801797600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rpx7t,Uid:b6272e50-d13e-48cb-9719-4458a0972cd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e544c95fa4e2e359c93ac29bb0ee84ad8fc7d78f5c70bc9f750e68c6898c85f\"" Jul 2 08:34:30.802313 kubelet[2660]: E0702 08:34:30.802290 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:30.802418 kubelet[2660]: E0702 08:34:30.802398 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:30.804937 containerd[1551]: time="2024-07-02T08:34:30.804742330Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 08:34:30.807968 containerd[1551]: time="2024-07-02T08:34:30.807925664Z" level=info msg="CreateContainer within sandbox \"80951dc470bc19b9162c3ce357a59036eaa8e46b8c44723a551efc0d74556b49\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 08:34:30.823837 containerd[1551]: time="2024-07-02T08:34:30.823794002Z" level=info msg="CreateContainer within sandbox \"80951dc470bc19b9162c3ce357a59036eaa8e46b8c44723a551efc0d74556b49\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d64eae196aa219ac3e6a91973f721bdb16f7c70d1a920a7b1155d3039af5d840\"" Jul 2 08:34:30.824791 
containerd[1551]: time="2024-07-02T08:34:30.824740693Z" level=info msg="StartContainer for \"d64eae196aa219ac3e6a91973f721bdb16f7c70d1a920a7b1155d3039af5d840\"" Jul 2 08:34:30.872214 containerd[1551]: time="2024-07-02T08:34:30.872118669Z" level=info msg="StartContainer for \"d64eae196aa219ac3e6a91973f721bdb16f7c70d1a920a7b1155d3039af5d840\" returns successfully" Jul 2 08:34:30.983853 kubelet[2660]: E0702 08:34:30.982188 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:30.983945 containerd[1551]: time="2024-07-02T08:34:30.983880404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-n9mhg,Uid:2bbd4f2d-27ba-4a67-8040-c0cb821c9493,Namespace:kube-system,Attempt:0,}" Jul 2 08:34:31.005426 containerd[1551]: time="2024-07-02T08:34:31.004202352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:34:31.005426 containerd[1551]: time="2024-07-02T08:34:31.004257922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:34:31.005426 containerd[1551]: time="2024-07-02T08:34:31.004274885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:34:31.005426 containerd[1551]: time="2024-07-02T08:34:31.004297168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:34:31.043101 containerd[1551]: time="2024-07-02T08:34:31.043034119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-n9mhg,Uid:2bbd4f2d-27ba-4a67-8040-c0cb821c9493,Namespace:kube-system,Attempt:0,} returns sandbox id \"d370b870368f126164305dc9232b791461728229d8881c6aae761e0dc0852f60\"" Jul 2 08:34:31.043952 kubelet[2660]: E0702 08:34:31.043930 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:31.795344 kubelet[2660]: E0702 08:34:31.795146 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:32.156784 update_engine[1539]: I0702 08:34:32.156579 1539 update_attempter.cc:509] Updating boot flags... Jul 2 08:34:32.255742 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3033) Jul 2 08:34:32.296652 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2978) Jul 2 08:34:33.392259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2873067169.mount: Deactivated successfully. 
Jul 2 08:34:34.942341 containerd[1551]: time="2024-07-02T08:34:34.942284697Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:34:34.942758 containerd[1551]: time="2024-07-02T08:34:34.942672914Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651538" Jul 2 08:34:34.943515 containerd[1551]: time="2024-07-02T08:34:34.943486674Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:34:34.945069 containerd[1551]: time="2024-07-02T08:34:34.945036143Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.140257966s" Jul 2 08:34:34.945110 containerd[1551]: time="2024-07-02T08:34:34.945078349Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 2 08:34:34.945801 containerd[1551]: time="2024-07-02T08:34:34.945639472Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 08:34:34.952155 containerd[1551]: time="2024-07-02T08:34:34.952123908Z" level=info msg="CreateContainer within sandbox \"1e544c95fa4e2e359c93ac29bb0ee84ad8fc7d78f5c70bc9f750e68c6898c85f\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:34:34.963167 containerd[1551]: time="2024-07-02T08:34:34.963119970Z" level=info msg="CreateContainer within sandbox \"1e544c95fa4e2e359c93ac29bb0ee84ad8fc7d78f5c70bc9f750e68c6898c85f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1a0bb13e6b2dfc4edb6f8fac0df1149c11fe4d021b907731335665bc6331f5f9\"" Jul 2 08:34:34.964497 containerd[1551]: time="2024-07-02T08:34:34.963734621Z" level=info msg="StartContainer for \"1a0bb13e6b2dfc4edb6f8fac0df1149c11fe4d021b907731335665bc6331f5f9\"" Jul 2 08:34:35.006679 containerd[1551]: time="2024-07-02T08:34:35.005355253Z" level=info msg="StartContainer for \"1a0bb13e6b2dfc4edb6f8fac0df1149c11fe4d021b907731335665bc6331f5f9\" returns successfully" Jul 2 08:34:35.056449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a0bb13e6b2dfc4edb6f8fac0df1149c11fe4d021b907731335665bc6331f5f9-rootfs.mount: Deactivated successfully. Jul 2 08:34:35.086781 containerd[1551]: time="2024-07-02T08:34:35.086714848Z" level=info msg="shim disconnected" id=1a0bb13e6b2dfc4edb6f8fac0df1149c11fe4d021b907731335665bc6331f5f9 namespace=k8s.io Jul 2 08:34:35.086781 containerd[1551]: time="2024-07-02T08:34:35.086784377Z" level=warning msg="cleaning up after shim disconnected" id=1a0bb13e6b2dfc4edb6f8fac0df1149c11fe4d021b907731335665bc6331f5f9 namespace=k8s.io Jul 2 08:34:35.086932 containerd[1551]: time="2024-07-02T08:34:35.086797819Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:34:35.808993 kubelet[2660]: E0702 08:34:35.808783 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:35.811774 containerd[1551]: time="2024-07-02T08:34:35.811740349Z" level=info msg="CreateContainer within sandbox \"1e544c95fa4e2e359c93ac29bb0ee84ad8fc7d78f5c70bc9f750e68c6898c85f\" for container 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 08:34:35.827537 kubelet[2660]: I0702 08:34:35.827488 2660 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ft84f" podStartSLOduration=5.827453557 podCreationTimestamp="2024-07-02 08:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:34:31.807735224 +0000 UTC m=+15.148502255" watchObservedRunningTime="2024-07-02 08:34:35.827453557 +0000 UTC m=+19.168220588" Jul 2 08:34:35.838833 containerd[1551]: time="2024-07-02T08:34:35.838790111Z" level=info msg="CreateContainer within sandbox \"1e544c95fa4e2e359c93ac29bb0ee84ad8fc7d78f5c70bc9f750e68c6898c85f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5726aedc166873f37ff12de5ed96c18f26e0b3229c13ac316f1eddff380e99cd\"" Jul 2 08:34:35.840113 containerd[1551]: time="2024-07-02T08:34:35.839613707Z" level=info msg="StartContainer for \"5726aedc166873f37ff12de5ed96c18f26e0b3229c13ac316f1eddff380e99cd\"" Jul 2 08:34:35.896822 containerd[1551]: time="2024-07-02T08:34:35.895878054Z" level=info msg="StartContainer for \"5726aedc166873f37ff12de5ed96c18f26e0b3229c13ac316f1eddff380e99cd\" returns successfully" Jul 2 08:34:35.914637 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 08:34:35.915340 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:34:35.915408 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 2 08:34:35.923205 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 08:34:35.957659 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 2 08:34:35.961983 containerd[1551]: time="2024-07-02T08:34:35.961920737Z" level=info msg="shim disconnected" id=5726aedc166873f37ff12de5ed96c18f26e0b3229c13ac316f1eddff380e99cd namespace=k8s.io Jul 2 08:34:35.961983 containerd[1551]: time="2024-07-02T08:34:35.961978865Z" level=warning msg="cleaning up after shim disconnected" id=5726aedc166873f37ff12de5ed96c18f26e0b3229c13ac316f1eddff380e99cd namespace=k8s.io Jul 2 08:34:35.961983 containerd[1551]: time="2024-07-02T08:34:35.961987106Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:34:36.098214 containerd[1551]: time="2024-07-02T08:34:36.097379712Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:34:36.098214 containerd[1551]: time="2024-07-02T08:34:36.097877459Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138374" Jul 2 08:34:36.098947 containerd[1551]: time="2024-07-02T08:34:36.098838068Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:34:36.100324 containerd[1551]: time="2024-07-02T08:34:36.100250537Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.15457398s" Jul 2 08:34:36.100324 containerd[1551]: time="2024-07-02T08:34:36.100292903Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 2 08:34:36.102037 containerd[1551]: time="2024-07-02T08:34:36.102002692Z" level=info msg="CreateContainer within sandbox \"d370b870368f126164305dc9232b791461728229d8881c6aae761e0dc0852f60\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 08:34:36.110844 containerd[1551]: time="2024-07-02T08:34:36.110798151Z" level=info msg="CreateContainer within sandbox \"d370b870368f126164305dc9232b791461728229d8881c6aae761e0dc0852f60\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"67425f82b6bf6408e7ef67e570c4d8da55f1178f6aa928e85628d64d3d0087ca\"" Jul 2 08:34:36.111217 containerd[1551]: time="2024-07-02T08:34:36.111193764Z" level=info msg="StartContainer for \"67425f82b6bf6408e7ef67e570c4d8da55f1178f6aa928e85628d64d3d0087ca\"" Jul 2 08:34:36.166883 containerd[1551]: time="2024-07-02T08:34:36.166820141Z" level=info msg="StartContainer for \"67425f82b6bf6408e7ef67e570c4d8da55f1178f6aa928e85628d64d3d0087ca\" returns successfully" Jul 2 08:34:36.810600 kubelet[2660]: E0702 08:34:36.810454 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:36.814773 kubelet[2660]: E0702 08:34:36.814537 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:36.818102 containerd[1551]: time="2024-07-02T08:34:36.817948543Z" level=info msg="CreateContainer within sandbox \"1e544c95fa4e2e359c93ac29bb0ee84ad8fc7d78f5c70bc9f750e68c6898c85f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 08:34:36.846405 kubelet[2660]: I0702 08:34:36.846354 2660 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-n9mhg" podStartSLOduration=1.7914851120000002 podCreationTimestamp="2024-07-02 08:34:30 +0000 UTC" firstStartedPulling="2024-07-02 08:34:31.045848121 +0000 UTC m=+14.386615152" lastFinishedPulling="2024-07-02 08:34:36.100678834 +0000 UTC m=+19.441445825" observedRunningTime="2024-07-02 08:34:36.821292071 +0000 UTC m=+20.162059102" watchObservedRunningTime="2024-07-02 08:34:36.846315785 +0000 UTC m=+20.187082816" Jul 2 08:34:36.849547 containerd[1551]: time="2024-07-02T08:34:36.849098958Z" level=info msg="CreateContainer within sandbox \"1e544c95fa4e2e359c93ac29bb0ee84ad8fc7d78f5c70bc9f750e68c6898c85f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"653ac7463aac431c383ccf43593ca0f2421aa11bbe094ed252ac2594f5d0ec37\"" Jul 2 08:34:36.850669 containerd[1551]: time="2024-07-02T08:34:36.849894225Z" level=info msg="StartContainer for \"653ac7463aac431c383ccf43593ca0f2421aa11bbe094ed252ac2594f5d0ec37\"" Jul 2 08:34:36.925502 kubelet[2660]: E0702 08:34:36.925474 2660 cadvisor_stats_provider.go:444] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/podb6272e50-d13e-48cb-9719-4458a0972cd9/653ac7463aac431c383ccf43593ca0f2421aa11bbe094ed252ac2594f5d0ec37\": RecentStats: unable to find data in memory cache]" Jul 2 08:34:36.935613 containerd[1551]: time="2024-07-02T08:34:36.935509101Z" level=info msg="StartContainer for \"653ac7463aac431c383ccf43593ca0f2421aa11bbe094ed252ac2594f5d0ec37\" returns successfully" Jul 2 08:34:36.966753 containerd[1551]: time="2024-07-02T08:34:36.966687441Z" level=info msg="shim disconnected" id=653ac7463aac431c383ccf43593ca0f2421aa11bbe094ed252ac2594f5d0ec37 namespace=k8s.io Jul 2 08:34:36.966753 containerd[1551]: time="2024-07-02T08:34:36.966742208Z" level=warning msg="cleaning up after shim disconnected" 
id=653ac7463aac431c383ccf43593ca0f2421aa11bbe094ed252ac2594f5d0ec37 namespace=k8s.io Jul 2 08:34:36.966753 containerd[1551]: time="2024-07-02T08:34:36.966751569Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:34:37.820081 kubelet[2660]: E0702 08:34:37.818380 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:37.820081 kubelet[2660]: E0702 08:34:37.818453 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:37.820478 containerd[1551]: time="2024-07-02T08:34:37.820237826Z" level=info msg="CreateContainer within sandbox \"1e544c95fa4e2e359c93ac29bb0ee84ad8fc7d78f5c70bc9f750e68c6898c85f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 08:34:37.841034 containerd[1551]: time="2024-07-02T08:34:37.840989642Z" level=info msg="CreateContainer within sandbox \"1e544c95fa4e2e359c93ac29bb0ee84ad8fc7d78f5c70bc9f750e68c6898c85f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0ba38299728ed9893452d218c4432507aa669529354d53027ca4131857d8ce60\"" Jul 2 08:34:37.841815 containerd[1551]: time="2024-07-02T08:34:37.841787184Z" level=info msg="StartContainer for \"0ba38299728ed9893452d218c4432507aa669529354d53027ca4131857d8ce60\"" Jul 2 08:34:37.893677 containerd[1551]: time="2024-07-02T08:34:37.893631737Z" level=info msg="StartContainer for \"0ba38299728ed9893452d218c4432507aa669529354d53027ca4131857d8ce60\" returns successfully" Jul 2 08:34:37.911744 containerd[1551]: time="2024-07-02T08:34:37.911642962Z" level=info msg="shim disconnected" id=0ba38299728ed9893452d218c4432507aa669529354d53027ca4131857d8ce60 namespace=k8s.io Jul 2 08:34:37.911744 containerd[1551]: time="2024-07-02T08:34:37.911693728Z" level=warning msg="cleaning up after 
shim disconnected" id=0ba38299728ed9893452d218c4432507aa669529354d53027ca4131857d8ce60 namespace=k8s.io Jul 2 08:34:37.911744 containerd[1551]: time="2024-07-02T08:34:37.911705610Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:34:37.959704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ba38299728ed9893452d218c4432507aa669529354d53027ca4131857d8ce60-rootfs.mount: Deactivated successfully. Jul 2 08:34:38.823550 kubelet[2660]: E0702 08:34:38.823502 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:38.829383 containerd[1551]: time="2024-07-02T08:34:38.828921160Z" level=info msg="CreateContainer within sandbox \"1e544c95fa4e2e359c93ac29bb0ee84ad8fc7d78f5c70bc9f750e68c6898c85f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 08:34:38.844744 containerd[1551]: time="2024-07-02T08:34:38.844691728Z" level=info msg="CreateContainer within sandbox \"1e544c95fa4e2e359c93ac29bb0ee84ad8fc7d78f5c70bc9f750e68c6898c85f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ec9a2974338a5be58e8cc2c26fb2f1ecdafca676e6cb0195d90c44069cb7354d\"" Jul 2 08:34:38.845254 containerd[1551]: time="2024-07-02T08:34:38.845161945Z" level=info msg="StartContainer for \"ec9a2974338a5be58e8cc2c26fb2f1ecdafca676e6cb0195d90c44069cb7354d\"" Jul 2 08:34:38.918367 containerd[1551]: time="2024-07-02T08:34:38.918322528Z" level=info msg="StartContainer for \"ec9a2974338a5be58e8cc2c26fb2f1ecdafca676e6cb0195d90c44069cb7354d\" returns successfully" Jul 2 08:34:39.076643 kubelet[2660]: I0702 08:34:39.076255 2660 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 08:34:39.095934 kubelet[2660]: I0702 08:34:39.095827 2660 topology_manager.go:215] "Topology Admit Handler" podUID="5f11680a-337b-4fea-9594-15154c32bf36" podNamespace="kube-system" 
podName="coredns-5dd5756b68-q2mfd" Jul 2 08:34:39.096188 kubelet[2660]: I0702 08:34:39.096132 2660 topology_manager.go:215] "Topology Admit Handler" podUID="917a31c3-4b2c-4410-b6d9-8054b0e956c5" podNamespace="kube-system" podName="coredns-5dd5756b68-lkq7r" Jul 2 08:34:39.173494 kubelet[2660]: I0702 08:34:39.173450 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/917a31c3-4b2c-4410-b6d9-8054b0e956c5-config-volume\") pod \"coredns-5dd5756b68-lkq7r\" (UID: \"917a31c3-4b2c-4410-b6d9-8054b0e956c5\") " pod="kube-system/coredns-5dd5756b68-lkq7r" Jul 2 08:34:39.173494 kubelet[2660]: I0702 08:34:39.173498 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmkh7\" (UniqueName: \"kubernetes.io/projected/917a31c3-4b2c-4410-b6d9-8054b0e956c5-kube-api-access-mmkh7\") pod \"coredns-5dd5756b68-lkq7r\" (UID: \"917a31c3-4b2c-4410-b6d9-8054b0e956c5\") " pod="kube-system/coredns-5dd5756b68-lkq7r" Jul 2 08:34:39.173680 kubelet[2660]: I0702 08:34:39.173519 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f11680a-337b-4fea-9594-15154c32bf36-config-volume\") pod \"coredns-5dd5756b68-q2mfd\" (UID: \"5f11680a-337b-4fea-9594-15154c32bf36\") " pod="kube-system/coredns-5dd5756b68-q2mfd" Jul 2 08:34:39.173680 kubelet[2660]: I0702 08:34:39.173547 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkl2f\" (UniqueName: \"kubernetes.io/projected/5f11680a-337b-4fea-9594-15154c32bf36-kube-api-access-kkl2f\") pod \"coredns-5dd5756b68-q2mfd\" (UID: \"5f11680a-337b-4fea-9594-15154c32bf36\") " pod="kube-system/coredns-5dd5756b68-q2mfd" Jul 2 08:34:39.399388 kubelet[2660]: E0702 08:34:39.398702 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:39.399479 containerd[1551]: time="2024-07-02T08:34:39.399403758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-q2mfd,Uid:5f11680a-337b-4fea-9594-15154c32bf36,Namespace:kube-system,Attempt:0,}" Jul 2 08:34:39.408392 kubelet[2660]: E0702 08:34:39.408115 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:39.408904 containerd[1551]: time="2024-07-02T08:34:39.408683923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lkq7r,Uid:917a31c3-4b2c-4410-b6d9-8054b0e956c5,Namespace:kube-system,Attempt:0,}" Jul 2 08:34:39.831369 kubelet[2660]: E0702 08:34:39.831295 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:39.859299 kubelet[2660]: I0702 08:34:39.859245 2660 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rpx7t" podStartSLOduration=5.7178065 podCreationTimestamp="2024-07-02 08:34:30 +0000 UTC" firstStartedPulling="2024-07-02 08:34:30.804011558 +0000 UTC m=+14.144778549" lastFinishedPulling="2024-07-02 08:34:34.945412879 +0000 UTC m=+18.286179910" observedRunningTime="2024-07-02 08:34:39.858462374 +0000 UTC m=+23.199229405" watchObservedRunningTime="2024-07-02 08:34:39.859207861 +0000 UTC m=+23.199974892" Jul 2 08:34:40.832970 kubelet[2660]: E0702 08:34:40.832943 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:41.197836 systemd-networkd[1234]: cilium_host: Link UP Jul 2 08:34:41.197978 systemd-networkd[1234]: cilium_net: Link UP Jul 
2 08:34:41.197981 systemd-networkd[1234]: cilium_net: Gained carrier Jul 2 08:34:41.198978 systemd-networkd[1234]: cilium_host: Gained carrier Jul 2 08:34:41.199463 systemd-networkd[1234]: cilium_host: Gained IPv6LL Jul 2 08:34:41.273246 systemd-networkd[1234]: cilium_vxlan: Link UP Jul 2 08:34:41.273250 systemd-networkd[1234]: cilium_vxlan: Gained carrier Jul 2 08:34:41.471689 systemd-networkd[1234]: cilium_net: Gained IPv6LL Jul 2 08:34:41.561676 kernel: NET: Registered PF_ALG protocol family Jul 2 08:34:41.834462 kubelet[2660]: E0702 08:34:41.834276 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:42.126043 systemd-networkd[1234]: lxc_health: Link UP Jul 2 08:34:42.136389 systemd-networkd[1234]: lxc_health: Gained carrier Jul 2 08:34:42.603479 systemd-networkd[1234]: lxcb48a31fea6fe: Link UP Jul 2 08:34:42.610133 systemd-networkd[1234]: lxcf2fd74fe2da6: Link UP Jul 2 08:34:42.623840 kernel: eth0: renamed from tmpc705c Jul 2 08:34:42.631876 kernel: eth0: renamed from tmp7b1b5 Jul 2 08:34:42.637609 systemd-networkd[1234]: lxcf2fd74fe2da6: Gained carrier Jul 2 08:34:42.641792 systemd-networkd[1234]: lxcb48a31fea6fe: Gained carrier Jul 2 08:34:42.836251 kubelet[2660]: E0702 08:34:42.836208 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:43.112778 systemd-networkd[1234]: cilium_vxlan: Gained IPv6LL Jul 2 08:34:43.243511 systemd-networkd[1234]: lxc_health: Gained IPv6LL Jul 2 08:34:43.816026 systemd-networkd[1234]: lxcf2fd74fe2da6: Gained IPv6LL Jul 2 08:34:44.584749 systemd-networkd[1234]: lxcb48a31fea6fe: Gained IPv6LL Jul 2 08:34:46.179909 containerd[1551]: time="2024-07-02T08:34:46.179809473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:34:46.179909 containerd[1551]: time="2024-07-02T08:34:46.179879839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:34:46.179909 containerd[1551]: time="2024-07-02T08:34:46.179898721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:34:46.179909 containerd[1551]: time="2024-07-02T08:34:46.179912042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:34:46.180667 containerd[1551]: time="2024-07-02T08:34:46.180598462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:34:46.180667 containerd[1551]: time="2024-07-02T08:34:46.180653747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:34:46.180834 containerd[1551]: time="2024-07-02T08:34:46.180673589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:34:46.180834 containerd[1551]: time="2024-07-02T08:34:46.180690190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:34:46.199073 systemd[1]: run-containerd-runc-k8s.io-7b1b5a7179820c18b047d3d418146c939106b590b384d34fe28a0704360710b6-runc.ZV1Mmh.mount: Deactivated successfully. 
Jul 2 08:34:46.206071 systemd-resolved[1444]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 08:34:46.209278 systemd-resolved[1444]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 08:34:46.231275 containerd[1551]: time="2024-07-02T08:34:46.230471275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-q2mfd,Uid:5f11680a-337b-4fea-9594-15154c32bf36,Namespace:kube-system,Attempt:0,} returns sandbox id \"c705c3242767641d769fda7a1b90a398a173b4094b85a8729b6e51bb24e8e24b\"" Jul 2 08:34:46.231366 containerd[1551]: time="2024-07-02T08:34:46.231325390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lkq7r,Uid:917a31c3-4b2c-4410-b6d9-8054b0e956c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b1b5a7179820c18b047d3d418146c939106b590b384d34fe28a0704360710b6\"" Jul 2 08:34:46.231436 kubelet[2660]: E0702 08:34:46.231414 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:46.232542 kubelet[2660]: E0702 08:34:46.232517 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:46.233683 containerd[1551]: time="2024-07-02T08:34:46.233537544Z" level=info msg="CreateContainer within sandbox \"c705c3242767641d769fda7a1b90a398a173b4094b85a8729b6e51bb24e8e24b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 08:34:46.235061 containerd[1551]: time="2024-07-02T08:34:46.234950507Z" level=info msg="CreateContainer within sandbox \"7b1b5a7179820c18b047d3d418146c939106b590b384d34fe28a0704360710b6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 08:34:46.245805 containerd[1551]: time="2024-07-02T08:34:46.245748294Z" 
level=info msg="CreateContainer within sandbox \"c705c3242767641d769fda7a1b90a398a173b4094b85a8729b6e51bb24e8e24b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b3c9e9e05704dba9b7446e3ca3db9afbb5e4d084d746483ae0e317de907b1034\"" Jul 2 08:34:46.246755 containerd[1551]: time="2024-07-02T08:34:46.246714059Z" level=info msg="StartContainer for \"b3c9e9e05704dba9b7446e3ca3db9afbb5e4d084d746483ae0e317de907b1034\"" Jul 2 08:34:46.250794 containerd[1551]: time="2024-07-02T08:34:46.250747412Z" level=info msg="CreateContainer within sandbox \"7b1b5a7179820c18b047d3d418146c939106b590b384d34fe28a0704360710b6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a08e063d26979ccd9fb024e09fb4bd0f6fca55a0a0846e20404828e4bd802e7c\"" Jul 2 08:34:46.251315 containerd[1551]: time="2024-07-02T08:34:46.251293900Z" level=info msg="StartContainer for \"a08e063d26979ccd9fb024e09fb4bd0f6fca55a0a0846e20404828e4bd802e7c\"" Jul 2 08:34:46.299507 containerd[1551]: time="2024-07-02T08:34:46.299459563Z" level=info msg="StartContainer for \"b3c9e9e05704dba9b7446e3ca3db9afbb5e4d084d746483ae0e317de907b1034\" returns successfully" Jul 2 08:34:46.300303 containerd[1551]: time="2024-07-02T08:34:46.299653700Z" level=info msg="StartContainer for \"a08e063d26979ccd9fb024e09fb4bd0f6fca55a0a0846e20404828e4bd802e7c\" returns successfully" Jul 2 08:34:46.845129 kubelet[2660]: E0702 08:34:46.845084 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:46.847306 kubelet[2660]: E0702 08:34:46.847281 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:46.876936 kubelet[2660]: I0702 08:34:46.876869 2660 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/coredns-5dd5756b68-q2mfd" podStartSLOduration=16.876762897 podCreationTimestamp="2024-07-02 08:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:34:46.861879632 +0000 UTC m=+30.202646703" watchObservedRunningTime="2024-07-02 08:34:46.876762897 +0000 UTC m=+30.217529968" Jul 2 08:34:46.884698 kubelet[2660]: I0702 08:34:46.884655 2660 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-lkq7r" podStartSLOduration=16.884617186 podCreationTimestamp="2024-07-02 08:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:34:46.874890773 +0000 UTC m=+30.215657804" watchObservedRunningTime="2024-07-02 08:34:46.884617186 +0000 UTC m=+30.225384217" Jul 2 08:34:47.849621 kubelet[2660]: E0702 08:34:47.849212 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:47.850012 kubelet[2660]: E0702 08:34:47.849997 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:48.850712 kubelet[2660]: E0702 08:34:48.850670 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:48.852133 kubelet[2660]: E0702 08:34:48.852077 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:50.104827 systemd[1]: Started sshd@7-10.0.0.141:22-10.0.0.1:55062.service - OpenSSH per-connection server daemon 
(10.0.0.1:55062). Jul 2 08:34:50.143122 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 55062 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:34:50.144715 sshd[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:34:50.150339 systemd-logind[1534]: New session 8 of user core. Jul 2 08:34:50.160828 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 08:34:50.310794 sshd[4058]: pam_unix(sshd:session): session closed for user core Jul 2 08:34:50.314504 systemd[1]: sshd@7-10.0.0.141:22-10.0.0.1:55062.service: Deactivated successfully. Jul 2 08:34:50.317801 systemd-logind[1534]: Session 8 logged out. Waiting for processes to exit. Jul 2 08:34:50.318573 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 08:34:50.319991 systemd-logind[1534]: Removed session 8. Jul 2 08:34:55.320798 systemd[1]: Started sshd@8-10.0.0.141:22-10.0.0.1:35654.service - OpenSSH per-connection server daemon (10.0.0.1:35654). Jul 2 08:34:55.353346 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 35654 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:34:55.354634 sshd[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:34:55.358964 systemd-logind[1534]: New session 9 of user core. Jul 2 08:34:55.365849 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 08:34:55.483079 sshd[4082]: pam_unix(sshd:session): session closed for user core Jul 2 08:34:55.485986 systemd-logind[1534]: Session 9 logged out. Waiting for processes to exit. Jul 2 08:34:55.486197 systemd[1]: sshd@8-10.0.0.141:22-10.0.0.1:35654.service: Deactivated successfully. Jul 2 08:34:55.488802 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 08:34:55.489776 systemd-logind[1534]: Removed session 9. 
Jul 2 08:34:56.282504 kubelet[2660]: I0702 08:34:56.282378 2660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 08:34:56.283887 kubelet[2660]: E0702 08:34:56.283708 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:34:56.868754 kubelet[2660]: E0702 08:34:56.868684 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:35:00.494888 systemd[1]: Started sshd@9-10.0.0.141:22-10.0.0.1:33840.service - OpenSSH per-connection server daemon (10.0.0.1:33840). Jul 2 08:35:00.527419 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 33840 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:35:00.528585 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:35:00.532417 systemd-logind[1534]: New session 10 of user core. Jul 2 08:35:00.544050 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 08:35:00.649299 sshd[4098]: pam_unix(sshd:session): session closed for user core Jul 2 08:35:00.658842 systemd[1]: Started sshd@10-10.0.0.141:22-10.0.0.1:33846.service - OpenSSH per-connection server daemon (10.0.0.1:33846). Jul 2 08:35:00.659369 systemd[1]: sshd@9-10.0.0.141:22-10.0.0.1:33840.service: Deactivated successfully. Jul 2 08:35:00.661769 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 08:35:00.662490 systemd-logind[1534]: Session 10 logged out. Waiting for processes to exit. Jul 2 08:35:00.663508 systemd-logind[1534]: Removed session 10. 
Jul 2 08:35:00.693508 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 33846 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:35:00.694803 sshd[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:35:00.699520 systemd-logind[1534]: New session 11 of user core. Jul 2 08:35:00.706783 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 08:35:01.394430 sshd[4112]: pam_unix(sshd:session): session closed for user core Jul 2 08:35:01.401645 systemd[1]: Started sshd@11-10.0.0.141:22-10.0.0.1:33852.service - OpenSSH per-connection server daemon (10.0.0.1:33852). Jul 2 08:35:01.404855 systemd[1]: sshd@10-10.0.0.141:22-10.0.0.1:33846.service: Deactivated successfully. Jul 2 08:35:01.412748 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 08:35:01.414081 systemd-logind[1534]: Session 11 logged out. Waiting for processes to exit. Jul 2 08:35:01.417737 systemd-logind[1534]: Removed session 11. Jul 2 08:35:01.447107 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 33852 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:35:01.448381 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:35:01.452458 systemd-logind[1534]: New session 12 of user core. Jul 2 08:35:01.464838 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 08:35:01.576655 sshd[4127]: pam_unix(sshd:session): session closed for user core Jul 2 08:35:01.579943 systemd[1]: sshd@11-10.0.0.141:22-10.0.0.1:33852.service: Deactivated successfully. Jul 2 08:35:01.582220 systemd-logind[1534]: Session 12 logged out. Waiting for processes to exit. Jul 2 08:35:01.582809 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 08:35:01.583891 systemd-logind[1534]: Removed session 12. Jul 2 08:35:06.590773 systemd[1]: Started sshd@12-10.0.0.141:22-10.0.0.1:33854.service - OpenSSH per-connection server daemon (10.0.0.1:33854). 
Jul 2 08:35:06.621713 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 33854 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:35:06.623336 sshd[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:35:06.626823 systemd-logind[1534]: New session 13 of user core.
Jul 2 08:35:06.635848 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 2 08:35:06.740531 sshd[4147]: pam_unix(sshd:session): session closed for user core
Jul 2 08:35:06.743505 systemd[1]: sshd@12-10.0.0.141:22-10.0.0.1:33854.service: Deactivated successfully.
Jul 2 08:35:06.745795 systemd-logind[1534]: Session 13 logged out. Waiting for processes to exit.
Jul 2 08:35:06.745863 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 08:35:06.747133 systemd-logind[1534]: Removed session 13.
Jul 2 08:35:11.751787 systemd[1]: Started sshd@13-10.0.0.141:22-10.0.0.1:50550.service - OpenSSH per-connection server daemon (10.0.0.1:50550).
Jul 2 08:35:11.782824 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 50550 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:35:11.784037 sshd[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:35:11.787487 systemd-logind[1534]: New session 14 of user core.
Jul 2 08:35:11.796856 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 2 08:35:11.901684 sshd[4162]: pam_unix(sshd:session): session closed for user core
Jul 2 08:35:11.914860 systemd[1]: Started sshd@14-10.0.0.141:22-10.0.0.1:50558.service - OpenSSH per-connection server daemon (10.0.0.1:50558).
Jul 2 08:35:11.915287 systemd[1]: sshd@13-10.0.0.141:22-10.0.0.1:50550.service: Deactivated successfully.
Jul 2 08:35:11.917821 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 08:35:11.918068 systemd-logind[1534]: Session 14 logged out. Waiting for processes to exit.
Jul 2 08:35:11.919822 systemd-logind[1534]: Removed session 14.
Jul 2 08:35:11.946153 sshd[4174]: Accepted publickey for core from 10.0.0.1 port 50558 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:35:11.947307 sshd[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:35:11.952136 systemd-logind[1534]: New session 15 of user core.
Jul 2 08:35:11.966790 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 2 08:35:12.174871 sshd[4174]: pam_unix(sshd:session): session closed for user core
Jul 2 08:35:12.183771 systemd[1]: Started sshd@15-10.0.0.141:22-10.0.0.1:50568.service - OpenSSH per-connection server daemon (10.0.0.1:50568).
Jul 2 08:35:12.184152 systemd[1]: sshd@14-10.0.0.141:22-10.0.0.1:50558.service: Deactivated successfully.
Jul 2 08:35:12.187013 systemd-logind[1534]: Session 15 logged out. Waiting for processes to exit.
Jul 2 08:35:12.187079 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 08:35:12.188418 systemd-logind[1534]: Removed session 15.
Jul 2 08:35:12.218626 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 50568 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:35:12.220330 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:35:12.224845 systemd-logind[1534]: New session 16 of user core.
Jul 2 08:35:12.234781 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 2 08:35:13.029142 sshd[4187]: pam_unix(sshd:session): session closed for user core
Jul 2 08:35:13.041650 systemd[1]: Started sshd@16-10.0.0.141:22-10.0.0.1:50578.service - OpenSSH per-connection server daemon (10.0.0.1:50578).
Jul 2 08:35:13.042033 systemd[1]: sshd@15-10.0.0.141:22-10.0.0.1:50568.service: Deactivated successfully.
Jul 2 08:35:13.048765 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 08:35:13.052775 systemd-logind[1534]: Session 16 logged out. Waiting for processes to exit.
Jul 2 08:35:13.057212 systemd-logind[1534]: Removed session 16.
Jul 2 08:35:13.079041 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 50578 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:35:13.080271 sshd[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:35:13.084027 systemd-logind[1534]: New session 17 of user core.
Jul 2 08:35:13.094792 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 2 08:35:13.410638 sshd[4208]: pam_unix(sshd:session): session closed for user core
Jul 2 08:35:13.423254 systemd[1]: Started sshd@17-10.0.0.141:22-10.0.0.1:50588.service - OpenSSH per-connection server daemon (10.0.0.1:50588).
Jul 2 08:35:13.424457 systemd[1]: sshd@16-10.0.0.141:22-10.0.0.1:50578.service: Deactivated successfully.
Jul 2 08:35:13.426746 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 08:35:13.430940 systemd-logind[1534]: Session 17 logged out. Waiting for processes to exit.
Jul 2 08:35:13.433086 systemd-logind[1534]: Removed session 17.
Jul 2 08:35:13.466099 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 50588 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:35:13.467317 sshd[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:35:13.471816 systemd-logind[1534]: New session 18 of user core.
Jul 2 08:35:13.484800 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 2 08:35:13.594525 sshd[4221]: pam_unix(sshd:session): session closed for user core
Jul 2 08:35:13.598280 systemd[1]: sshd@17-10.0.0.141:22-10.0.0.1:50588.service: Deactivated successfully.
Jul 2 08:35:13.600411 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 08:35:13.600701 systemd-logind[1534]: Session 18 logged out. Waiting for processes to exit.
Jul 2 08:35:13.602975 systemd-logind[1534]: Removed session 18.
Jul 2 08:35:18.604793 systemd[1]: Started sshd@18-10.0.0.141:22-10.0.0.1:50602.service - OpenSSH per-connection server daemon (10.0.0.1:50602).
Jul 2 08:35:18.635820 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 50602 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:35:18.637288 sshd[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:35:18.641232 systemd-logind[1534]: New session 19 of user core.
Jul 2 08:35:18.649780 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 2 08:35:18.755899 sshd[4245]: pam_unix(sshd:session): session closed for user core
Jul 2 08:35:18.759501 systemd[1]: sshd@18-10.0.0.141:22-10.0.0.1:50602.service: Deactivated successfully.
Jul 2 08:35:18.762534 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 08:35:18.762821 systemd-logind[1534]: Session 19 logged out. Waiting for processes to exit.
Jul 2 08:35:18.763927 systemd-logind[1534]: Removed session 19.
Jul 2 08:35:23.773966 systemd[1]: Started sshd@19-10.0.0.141:22-10.0.0.1:54290.service - OpenSSH per-connection server daemon (10.0.0.1:54290).
Jul 2 08:35:23.804998 sshd[4260]: Accepted publickey for core from 10.0.0.1 port 54290 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:35:23.805754 sshd[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:35:23.810186 systemd-logind[1534]: New session 20 of user core.
Jul 2 08:35:23.815775 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 08:35:23.929099 sshd[4260]: pam_unix(sshd:session): session closed for user core
Jul 2 08:35:23.932112 systemd[1]: sshd@19-10.0.0.141:22-10.0.0.1:54290.service: Deactivated successfully.
Jul 2 08:35:23.934651 systemd-logind[1534]: Session 20 logged out. Waiting for processes to exit.
Jul 2 08:35:23.935291 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 08:35:23.937852 systemd-logind[1534]: Removed session 20.
Jul 2 08:35:28.949811 systemd[1]: Started sshd@20-10.0.0.141:22-10.0.0.1:54302.service - OpenSSH per-connection server daemon (10.0.0.1:54302).
Jul 2 08:35:28.980873 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 54302 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:35:28.982832 sshd[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:35:28.988537 systemd-logind[1534]: New session 21 of user core.
Jul 2 08:35:28.998835 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 2 08:35:29.109147 sshd[4275]: pam_unix(sshd:session): session closed for user core
Jul 2 08:35:29.115947 systemd[1]: Started sshd@21-10.0.0.141:22-10.0.0.1:54304.service - OpenSSH per-connection server daemon (10.0.0.1:54304).
Jul 2 08:35:29.117001 systemd[1]: sshd@20-10.0.0.141:22-10.0.0.1:54302.service: Deactivated successfully.
Jul 2 08:35:29.118618 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 08:35:29.119844 systemd-logind[1534]: Session 21 logged out. Waiting for processes to exit.
Jul 2 08:35:29.120903 systemd-logind[1534]: Removed session 21.
Jul 2 08:35:29.148188 sshd[4289]: Accepted publickey for core from 10.0.0.1 port 54304 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:35:29.149257 sshd[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:35:29.153711 systemd-logind[1534]: New session 22 of user core.
Jul 2 08:35:29.164802 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 2 08:35:30.874783 containerd[1551]: time="2024-07-02T08:35:30.874718532Z" level=info msg="StopContainer for \"67425f82b6bf6408e7ef67e570c4d8da55f1178f6aa928e85628d64d3d0087ca\" with timeout 30 (s)"
Jul 2 08:35:30.887305 systemd[1]: run-containerd-runc-k8s.io-ec9a2974338a5be58e8cc2c26fb2f1ecdafca676e6cb0195d90c44069cb7354d-runc.yUKrs6.mount: Deactivated successfully.
Jul 2 08:35:30.889410 containerd[1551]: time="2024-07-02T08:35:30.889308173Z" level=info msg="Stop container \"67425f82b6bf6408e7ef67e570c4d8da55f1178f6aa928e85628d64d3d0087ca\" with signal terminated"
Jul 2 08:35:30.900180 containerd[1551]: time="2024-07-02T08:35:30.900115325Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 08:35:30.908712 containerd[1551]: time="2024-07-02T08:35:30.908660855Z" level=info msg="StopContainer for \"ec9a2974338a5be58e8cc2c26fb2f1ecdafca676e6cb0195d90c44069cb7354d\" with timeout 2 (s)"
Jul 2 08:35:30.908970 containerd[1551]: time="2024-07-02T08:35:30.908942212Z" level=info msg="Stop container \"ec9a2974338a5be58e8cc2c26fb2f1ecdafca676e6cb0195d90c44069cb7354d\" with signal terminated"
Jul 2 08:35:30.914181 systemd-networkd[1234]: lxc_health: Link DOWN
Jul 2 08:35:30.914188 systemd-networkd[1234]: lxc_health: Lost carrier
Jul 2 08:35:30.924886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67425f82b6bf6408e7ef67e570c4d8da55f1178f6aa928e85628d64d3d0087ca-rootfs.mount: Deactivated successfully.
Jul 2 08:35:30.934021 containerd[1551]: time="2024-07-02T08:35:30.933808449Z" level=info msg="shim disconnected" id=67425f82b6bf6408e7ef67e570c4d8da55f1178f6aa928e85628d64d3d0087ca namespace=k8s.io
Jul 2 08:35:30.934021 containerd[1551]: time="2024-07-02T08:35:30.933865328Z" level=warning msg="cleaning up after shim disconnected" id=67425f82b6bf6408e7ef67e570c4d8da55f1178f6aa928e85628d64d3d0087ca namespace=k8s.io
Jul 2 08:35:30.934021 containerd[1551]: time="2024-07-02T08:35:30.933873888Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 08:35:30.952317 containerd[1551]: time="2024-07-02T08:35:30.952250058Z" level=info msg="StopContainer for \"67425f82b6bf6408e7ef67e570c4d8da55f1178f6aa928e85628d64d3d0087ca\" returns successfully"
Jul 2 08:35:30.955541 containerd[1551]: time="2024-07-02T08:35:30.955498631Z" level=info msg="StopPodSandbox for \"d370b870368f126164305dc9232b791461728229d8881c6aae761e0dc0852f60\""
Jul 2 08:35:30.957100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec9a2974338a5be58e8cc2c26fb2f1ecdafca676e6cb0195d90c44069cb7354d-rootfs.mount: Deactivated successfully.
Jul 2 08:35:30.958938 containerd[1551]: time="2024-07-02T08:35:30.955596711Z" level=info msg="Container to stop \"67425f82b6bf6408e7ef67e570c4d8da55f1178f6aa928e85628d64d3d0087ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:35:30.961869 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d370b870368f126164305dc9232b791461728229d8881c6aae761e0dc0852f60-shm.mount: Deactivated successfully.
Jul 2 08:35:30.965027 containerd[1551]: time="2024-07-02T08:35:30.964965794Z" level=info msg="shim disconnected" id=ec9a2974338a5be58e8cc2c26fb2f1ecdafca676e6cb0195d90c44069cb7354d namespace=k8s.io
Jul 2 08:35:30.965253 containerd[1551]: time="2024-07-02T08:35:30.965154472Z" level=warning msg="cleaning up after shim disconnected" id=ec9a2974338a5be58e8cc2c26fb2f1ecdafca676e6cb0195d90c44069cb7354d namespace=k8s.io
Jul 2 08:35:30.965253 containerd[1551]: time="2024-07-02T08:35:30.965172152Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 08:35:30.980605 containerd[1551]: time="2024-07-02T08:35:30.980538306Z" level=info msg="StopContainer for \"ec9a2974338a5be58e8cc2c26fb2f1ecdafca676e6cb0195d90c44069cb7354d\" returns successfully"
Jul 2 08:35:30.981306 containerd[1551]: time="2024-07-02T08:35:30.981268940Z" level=info msg="StopPodSandbox for \"1e544c95fa4e2e359c93ac29bb0ee84ad8fc7d78f5c70bc9f750e68c6898c85f\""
Jul 2 08:35:30.981376 containerd[1551]: time="2024-07-02T08:35:30.981329340Z" level=info msg="Container to stop \"1a0bb13e6b2dfc4edb6f8fac0df1149c11fe4d021b907731335665bc6331f5f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:35:30.981376 containerd[1551]: time="2024-07-02T08:35:30.981367300Z" level=info msg="Container to stop \"5726aedc166873f37ff12de5ed96c18f26e0b3229c13ac316f1eddff380e99cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:35:30.981431 containerd[1551]: time="2024-07-02T08:35:30.981377660Z" level=info msg="Container to stop \"ec9a2974338a5be58e8cc2c26fb2f1ecdafca676e6cb0195d90c44069cb7354d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:35:30.981431 containerd[1551]: time="2024-07-02T08:35:30.981404299Z" level=info msg="Container to stop \"653ac7463aac431c383ccf43593ca0f2421aa11bbe094ed252ac2594f5d0ec37\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:35:30.981431 containerd[1551]:
time="2024-07-02T08:35:30.981415299Z" level=info msg="Container to stop \"0ba38299728ed9893452d218c4432507aa669529354d53027ca4131857d8ce60\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:35:30.998334 containerd[1551]: time="2024-07-02T08:35:30.998126483Z" level=info msg="shim disconnected" id=d370b870368f126164305dc9232b791461728229d8881c6aae761e0dc0852f60 namespace=k8s.io
Jul 2 08:35:30.998334 containerd[1551]: time="2024-07-02T08:35:30.998177082Z" level=warning msg="cleaning up after shim disconnected" id=d370b870368f126164305dc9232b791461728229d8881c6aae761e0dc0852f60 namespace=k8s.io
Jul 2 08:35:30.998334 containerd[1551]: time="2024-07-02T08:35:30.998185242Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 08:35:31.017016 containerd[1551]: time="2024-07-02T08:35:31.016966188Z" level=info msg="TearDown network for sandbox \"d370b870368f126164305dc9232b791461728229d8881c6aae761e0dc0852f60\" successfully"
Jul 2 08:35:31.017016 containerd[1551]: time="2024-07-02T08:35:31.017005588Z" level=info msg="StopPodSandbox for \"d370b870368f126164305dc9232b791461728229d8881c6aae761e0dc0852f60\" returns successfully"
Jul 2 08:35:31.021973 containerd[1551]: time="2024-07-02T08:35:31.021918474Z" level=info msg="shim disconnected" id=1e544c95fa4e2e359c93ac29bb0ee84ad8fc7d78f5c70bc9f750e68c6898c85f namespace=k8s.io
Jul 2 08:35:31.022324 containerd[1551]: time="2024-07-02T08:35:31.022008553Z" level=warning msg="cleaning up after shim disconnected" id=1e544c95fa4e2e359c93ac29bb0ee84ad8fc7d78f5c70bc9f750e68c6898c85f namespace=k8s.io
Jul 2 08:35:31.022324 containerd[1551]: time="2024-07-02T08:35:31.022021273Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 08:35:31.039663 containerd[1551]: time="2024-07-02T08:35:31.039618910Z" level=info msg="TearDown network for sandbox \"1e544c95fa4e2e359c93ac29bb0ee84ad8fc7d78f5c70bc9f750e68c6898c85f\" successfully"
Jul 2 08:35:31.039908 containerd[1551]:
time="2024-07-02T08:35:31.039815509Z" level=info msg="StopPodSandbox for \"1e544c95fa4e2e359c93ac29bb0ee84ad8fc7d78f5c70bc9f750e68c6898c85f\" returns successfully"
Jul 2 08:35:31.055600 kubelet[2660]: I0702 08:35:31.055360 2660 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcxh7\" (UniqueName: \"kubernetes.io/projected/b6272e50-d13e-48cb-9719-4458a0972cd9-kube-api-access-dcxh7\") pod \"b6272e50-d13e-48cb-9719-4458a0972cd9\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") "
Jul 2 08:35:31.055600 kubelet[2660]: I0702 08:35:31.055410 2660 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-cilium-run\") pod \"b6272e50-d13e-48cb-9719-4458a0972cd9\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") "
Jul 2 08:35:31.055600 kubelet[2660]: I0702 08:35:31.055429 2660 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-lib-modules\") pod \"b6272e50-d13e-48cb-9719-4458a0972cd9\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") "
Jul 2 08:35:31.055600 kubelet[2660]: I0702 08:35:31.055487 2660 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-bpf-maps\") pod \"b6272e50-d13e-48cb-9719-4458a0972cd9\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") "
Jul 2 08:35:31.055600 kubelet[2660]: I0702 08:35:31.055494 2660 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b6272e50-d13e-48cb-9719-4458a0972cd9" (UID: "b6272e50-d13e-48cb-9719-4458a0972cd9"). InnerVolumeSpecName "cilium-run".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:35:31.055600 kubelet[2660]: I0702 08:35:31.055509 2660 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-hostproc\") pod \"b6272e50-d13e-48cb-9719-4458a0972cd9\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") "
Jul 2 08:35:31.056072 kubelet[2660]: I0702 08:35:31.055542 2660 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b6272e50-d13e-48cb-9719-4458a0972cd9" (UID: "b6272e50-d13e-48cb-9719-4458a0972cd9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:35:31.056072 kubelet[2660]: I0702 08:35:31.055580 2660 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6272e50-d13e-48cb-9719-4458a0972cd9-cilium-config-path\") pod \"b6272e50-d13e-48cb-9719-4458a0972cd9\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") "
Jul 2 08:35:31.056072 kubelet[2660]: I0702 08:35:31.055581 2660 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b6272e50-d13e-48cb-9719-4458a0972cd9" (UID: "b6272e50-d13e-48cb-9719-4458a0972cd9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:35:31.056072 kubelet[2660]: I0702 08:35:31.055529 2660 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-hostproc" (OuterVolumeSpecName: "hostproc") pod "b6272e50-d13e-48cb-9719-4458a0972cd9" (UID: "b6272e50-d13e-48cb-9719-4458a0972cd9"). InnerVolumeSpecName "hostproc".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:35:31.056072 kubelet[2660]: I0702 08:35:31.055602 2660 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-xtables-lock\") pod \"b6272e50-d13e-48cb-9719-4458a0972cd9\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") "
Jul 2 08:35:31.056183 kubelet[2660]: I0702 08:35:31.055628 2660 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6272e50-d13e-48cb-9719-4458a0972cd9-hubble-tls\") pod \"b6272e50-d13e-48cb-9719-4458a0972cd9\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") "
Jul 2 08:35:31.056183 kubelet[2660]: I0702 08:35:31.055649 2660 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6272e50-d13e-48cb-9719-4458a0972cd9-clustermesh-secrets\") pod \"b6272e50-d13e-48cb-9719-4458a0972cd9\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") "
Jul 2 08:35:31.056183 kubelet[2660]: I0702 08:35:31.055666 2660 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-cni-path\") pod \"b6272e50-d13e-48cb-9719-4458a0972cd9\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") "
Jul 2 08:35:31.056183 kubelet[2660]: I0702 08:35:31.055709 2660 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-cilium-cgroup\") pod \"b6272e50-d13e-48cb-9719-4458a0972cd9\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") "
Jul 2 08:35:31.056183 kubelet[2660]: I0702 08:35:31.055727 2660 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName:
\"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-host-proc-sys-kernel\") pod \"b6272e50-d13e-48cb-9719-4458a0972cd9\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") "
Jul 2 08:35:31.056183 kubelet[2660]: I0702 08:35:31.055747 2660 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp5zl\" (UniqueName: \"kubernetes.io/projected/2bbd4f2d-27ba-4a67-8040-c0cb821c9493-kube-api-access-fp5zl\") pod \"2bbd4f2d-27ba-4a67-8040-c0cb821c9493\" (UID: \"2bbd4f2d-27ba-4a67-8040-c0cb821c9493\") "
Jul 2 08:35:31.056321 kubelet[2660]: I0702 08:35:31.055771 2660 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-host-proc-sys-net\") pod \"b6272e50-d13e-48cb-9719-4458a0972cd9\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") "
Jul 2 08:35:31.056321 kubelet[2660]: I0702 08:35:31.055789 2660 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-etc-cni-netd\") pod \"b6272e50-d13e-48cb-9719-4458a0972cd9\" (UID: \"b6272e50-d13e-48cb-9719-4458a0972cd9\") "
Jul 2 08:35:31.056321 kubelet[2660]: I0702 08:35:31.055811 2660 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2bbd4f2d-27ba-4a67-8040-c0cb821c9493-cilium-config-path\") pod \"2bbd4f2d-27ba-4a67-8040-c0cb821c9493\" (UID: \"2bbd4f2d-27ba-4a67-8040-c0cb821c9493\") "
Jul 2 08:35:31.056321 kubelet[2660]: I0702 08:35:31.055841 2660 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 2 08:35:31.056321 kubelet[2660]: I0702 08:35:31.055852 2660 reconciler_common.go:300] "Volume detached for volume
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 2 08:35:31.056321 kubelet[2660]: I0702 08:35:31.055862 2660 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 2 08:35:31.056321 kubelet[2660]: I0702 08:35:31.055872 2660 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 2 08:35:31.058179 kubelet[2660]: I0702 08:35:31.058037 2660 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6272e50-d13e-48cb-9719-4458a0972cd9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b6272e50-d13e-48cb-9719-4458a0972cd9" (UID: "b6272e50-d13e-48cb-9719-4458a0972cd9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 08:35:31.058179 kubelet[2660]: I0702 08:35:31.058144 2660 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b6272e50-d13e-48cb-9719-4458a0972cd9" (UID: "b6272e50-d13e-48cb-9719-4458a0972cd9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:35:31.058856 kubelet[2660]: I0702 08:35:31.058202 2660 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b6272e50-d13e-48cb-9719-4458a0972cd9" (UID: "b6272e50-d13e-48cb-9719-4458a0972cd9"). InnerVolumeSpecName "host-proc-sys-kernel".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:35:31.058856 kubelet[2660]: I0702 08:35:31.058252 2660 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-cni-path" (OuterVolumeSpecName: "cni-path") pod "b6272e50-d13e-48cb-9719-4458a0972cd9" (UID: "b6272e50-d13e-48cb-9719-4458a0972cd9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:35:31.058856 kubelet[2660]: I0702 08:35:31.058271 2660 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b6272e50-d13e-48cb-9719-4458a0972cd9" (UID: "b6272e50-d13e-48cb-9719-4458a0972cd9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:35:31.058856 kubelet[2660]: I0702 08:35:31.058288 2660 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b6272e50-d13e-48cb-9719-4458a0972cd9" (UID: "b6272e50-d13e-48cb-9719-4458a0972cd9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:35:31.059162 kubelet[2660]: I0702 08:35:31.059034 2660 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bbd4f2d-27ba-4a67-8040-c0cb821c9493-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2bbd4f2d-27ba-4a67-8040-c0cb821c9493" (UID: "2bbd4f2d-27ba-4a67-8040-c0cb821c9493"). InnerVolumeSpecName "cilium-config-path".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 08:35:31.059162 kubelet[2660]: I0702 08:35:31.059088 2660 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b6272e50-d13e-48cb-9719-4458a0972cd9" (UID: "b6272e50-d13e-48cb-9719-4458a0972cd9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:35:31.060506 kubelet[2660]: I0702 08:35:31.060448 2660 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6272e50-d13e-48cb-9719-4458a0972cd9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b6272e50-d13e-48cb-9719-4458a0972cd9" (UID: "b6272e50-d13e-48cb-9719-4458a0972cd9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 08:35:31.061104 kubelet[2660]: I0702 08:35:31.061074 2660 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6272e50-d13e-48cb-9719-4458a0972cd9-kube-api-access-dcxh7" (OuterVolumeSpecName: "kube-api-access-dcxh7") pod "b6272e50-d13e-48cb-9719-4458a0972cd9" (UID: "b6272e50-d13e-48cb-9719-4458a0972cd9"). InnerVolumeSpecName "kube-api-access-dcxh7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 08:35:31.061182 kubelet[2660]: I0702 08:35:31.061166 2660 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bbd4f2d-27ba-4a67-8040-c0cb821c9493-kube-api-access-fp5zl" (OuterVolumeSpecName: "kube-api-access-fp5zl") pod "2bbd4f2d-27ba-4a67-8040-c0cb821c9493" (UID: "2bbd4f2d-27ba-4a67-8040-c0cb821c9493"). InnerVolumeSpecName "kube-api-access-fp5zl".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 08:35:31.062696 kubelet[2660]: I0702 08:35:31.062665 2660 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6272e50-d13e-48cb-9719-4458a0972cd9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b6272e50-d13e-48cb-9719-4458a0972cd9" (UID: "b6272e50-d13e-48cb-9719-4458a0972cd9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 08:35:31.156804 kubelet[2660]: I0702 08:35:31.156690 2660 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 2 08:35:31.156804 kubelet[2660]: I0702 08:35:31.156738 2660 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6272e50-d13e-48cb-9719-4458a0972cd9-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 2 08:35:31.156804 kubelet[2660]: I0702 08:35:31.156749 2660 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6272e50-d13e-48cb-9719-4458a0972cd9-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 2 08:35:31.156804 kubelet[2660]: I0702 08:35:31.156760 2660 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6272e50-d13e-48cb-9719-4458a0972cd9-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 2 08:35:31.156804 kubelet[2660]: I0702 08:35:31.156770 2660 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 2 08:35:31.156804 kubelet[2660]: I0702 08:35:31.156779 2660 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName:
\"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 2 08:35:31.156804 kubelet[2660]: I0702 08:35:31.156789 2660 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 2 08:35:31.156804 kubelet[2660]: I0702 08:35:31.156806 2660 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fp5zl\" (UniqueName: \"kubernetes.io/projected/2bbd4f2d-27ba-4a67-8040-c0cb821c9493-kube-api-access-fp5zl\") on node \"localhost\" DevicePath \"\""
Jul 2 08:35:31.157053 kubelet[2660]: I0702 08:35:31.156818 2660 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 2 08:35:31.157053 kubelet[2660]: I0702 08:35:31.156828 2660 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6272e50-d13e-48cb-9719-4458a0972cd9-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 2 08:35:31.157053 kubelet[2660]: I0702 08:35:31.156837 2660 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2bbd4f2d-27ba-4a67-8040-c0cb821c9493-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 2 08:35:31.157053 kubelet[2660]: I0702 08:35:31.156847 2660 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dcxh7\" (UniqueName: \"kubernetes.io/projected/b6272e50-d13e-48cb-9719-4458a0972cd9-kube-api-access-dcxh7\") on node \"localhost\" DevicePath \"\""
Jul 2 08:35:31.837655 kubelet[2660]: E0702 08:35:31.837617 2660 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin
returns error: cni plugin not initialized" Jul 2 08:35:31.881682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d370b870368f126164305dc9232b791461728229d8881c6aae761e0dc0852f60-rootfs.mount: Deactivated successfully. Jul 2 08:35:31.881835 systemd[1]: var-lib-kubelet-pods-2bbd4f2d\x2d27ba\x2d4a67\x2d8040\x2dc0cb821c9493-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfp5zl.mount: Deactivated successfully. Jul 2 08:35:31.881919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e544c95fa4e2e359c93ac29bb0ee84ad8fc7d78f5c70bc9f750e68c6898c85f-rootfs.mount: Deactivated successfully. Jul 2 08:35:31.882003 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1e544c95fa4e2e359c93ac29bb0ee84ad8fc7d78f5c70bc9f750e68c6898c85f-shm.mount: Deactivated successfully. Jul 2 08:35:31.882076 systemd[1]: var-lib-kubelet-pods-b6272e50\x2dd13e\x2d48cb\x2d9719\x2d4458a0972cd9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddcxh7.mount: Deactivated successfully. Jul 2 08:35:31.882149 systemd[1]: var-lib-kubelet-pods-b6272e50\x2dd13e\x2d48cb\x2d9719\x2d4458a0972cd9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 08:35:31.882291 systemd[1]: var-lib-kubelet-pods-b6272e50\x2dd13e\x2d48cb\x2d9719\x2d4458a0972cd9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 2 08:35:31.939377 kubelet[2660]: I0702 08:35:31.939054 2660 scope.go:117] "RemoveContainer" containerID="67425f82b6bf6408e7ef67e570c4d8da55f1178f6aa928e85628d64d3d0087ca" Jul 2 08:35:31.942359 containerd[1551]: time="2024-07-02T08:35:31.942315894Z" level=info msg="RemoveContainer for \"67425f82b6bf6408e7ef67e570c4d8da55f1178f6aa928e85628d64d3d0087ca\"" Jul 2 08:35:31.950424 containerd[1551]: time="2024-07-02T08:35:31.950376598Z" level=info msg="RemoveContainer for \"67425f82b6bf6408e7ef67e570c4d8da55f1178f6aa928e85628d64d3d0087ca\" returns successfully" Jul 2 08:35:31.950610 kubelet[2660]: I0702 08:35:31.950590 2660 scope.go:117] "RemoveContainer" containerID="67425f82b6bf6408e7ef67e570c4d8da55f1178f6aa928e85628d64d3d0087ca" Jul 2 08:35:31.957076 containerd[1551]: time="2024-07-02T08:35:31.950771715Z" level=error msg="ContainerStatus for \"67425f82b6bf6408e7ef67e570c4d8da55f1178f6aa928e85628d64d3d0087ca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"67425f82b6bf6408e7ef67e570c4d8da55f1178f6aa928e85628d64d3d0087ca\": not found" Jul 2 08:35:31.957395 kubelet[2660]: E0702 08:35:31.957375 2660 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"67425f82b6bf6408e7ef67e570c4d8da55f1178f6aa928e85628d64d3d0087ca\": not found" containerID="67425f82b6bf6408e7ef67e570c4d8da55f1178f6aa928e85628d64d3d0087ca" Jul 2 08:35:31.957459 kubelet[2660]: I0702 08:35:31.957449 2660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"67425f82b6bf6408e7ef67e570c4d8da55f1178f6aa928e85628d64d3d0087ca"} err="failed to get container status \"67425f82b6bf6408e7ef67e570c4d8da55f1178f6aa928e85628d64d3d0087ca\": rpc error: code = NotFound desc = an error occurred when try to find container \"67425f82b6bf6408e7ef67e570c4d8da55f1178f6aa928e85628d64d3d0087ca\": not found" Jul 2 08:35:31.957505 
kubelet[2660]: I0702 08:35:31.957464 2660 scope.go:117] "RemoveContainer" containerID="ec9a2974338a5be58e8cc2c26fb2f1ecdafca676e6cb0195d90c44069cb7354d" Jul 2 08:35:31.961642 containerd[1551]: time="2024-07-02T08:35:31.961483081Z" level=info msg="RemoveContainer for \"ec9a2974338a5be58e8cc2c26fb2f1ecdafca676e6cb0195d90c44069cb7354d\"" Jul 2 08:35:31.965158 containerd[1551]: time="2024-07-02T08:35:31.965119255Z" level=info msg="RemoveContainer for \"ec9a2974338a5be58e8cc2c26fb2f1ecdafca676e6cb0195d90c44069cb7354d\" returns successfully" Jul 2 08:35:31.965307 kubelet[2660]: I0702 08:35:31.965274 2660 scope.go:117] "RemoveContainer" containerID="0ba38299728ed9893452d218c4432507aa669529354d53027ca4131857d8ce60" Jul 2 08:35:31.966092 containerd[1551]: time="2024-07-02T08:35:31.966065489Z" level=info msg="RemoveContainer for \"0ba38299728ed9893452d218c4432507aa669529354d53027ca4131857d8ce60\"" Jul 2 08:35:31.968357 containerd[1551]: time="2024-07-02T08:35:31.968323313Z" level=info msg="RemoveContainer for \"0ba38299728ed9893452d218c4432507aa669529354d53027ca4131857d8ce60\" returns successfully" Jul 2 08:35:31.968502 kubelet[2660]: I0702 08:35:31.968472 2660 scope.go:117] "RemoveContainer" containerID="653ac7463aac431c383ccf43593ca0f2421aa11bbe094ed252ac2594f5d0ec37" Jul 2 08:35:31.969279 containerd[1551]: time="2024-07-02T08:35:31.969260706Z" level=info msg="RemoveContainer for \"653ac7463aac431c383ccf43593ca0f2421aa11bbe094ed252ac2594f5d0ec37\"" Jul 2 08:35:31.971298 containerd[1551]: time="2024-07-02T08:35:31.971265492Z" level=info msg="RemoveContainer for \"653ac7463aac431c383ccf43593ca0f2421aa11bbe094ed252ac2594f5d0ec37\" returns successfully" Jul 2 08:35:31.971428 kubelet[2660]: I0702 08:35:31.971400 2660 scope.go:117] "RemoveContainer" containerID="5726aedc166873f37ff12de5ed96c18f26e0b3229c13ac316f1eddff380e99cd" Jul 2 08:35:31.972183 containerd[1551]: time="2024-07-02T08:35:31.972163966Z" level=info msg="RemoveContainer for 
\"5726aedc166873f37ff12de5ed96c18f26e0b3229c13ac316f1eddff380e99cd\"" Jul 2 08:35:31.974496 containerd[1551]: time="2024-07-02T08:35:31.974462390Z" level=info msg="RemoveContainer for \"5726aedc166873f37ff12de5ed96c18f26e0b3229c13ac316f1eddff380e99cd\" returns successfully" Jul 2 08:35:31.974649 kubelet[2660]: I0702 08:35:31.974625 2660 scope.go:117] "RemoveContainer" containerID="1a0bb13e6b2dfc4edb6f8fac0df1149c11fe4d021b907731335665bc6331f5f9" Jul 2 08:35:31.975438 containerd[1551]: time="2024-07-02T08:35:31.975412824Z" level=info msg="RemoveContainer for \"1a0bb13e6b2dfc4edb6f8fac0df1149c11fe4d021b907731335665bc6331f5f9\"" Jul 2 08:35:31.977657 containerd[1551]: time="2024-07-02T08:35:31.977580408Z" level=info msg="RemoveContainer for \"1a0bb13e6b2dfc4edb6f8fac0df1149c11fe4d021b907731335665bc6331f5f9\" returns successfully" Jul 2 08:35:31.977727 kubelet[2660]: I0702 08:35:31.977705 2660 scope.go:117] "RemoveContainer" containerID="ec9a2974338a5be58e8cc2c26fb2f1ecdafca676e6cb0195d90c44069cb7354d" Jul 2 08:35:31.977862 containerd[1551]: time="2024-07-02T08:35:31.977833967Z" level=error msg="ContainerStatus for \"ec9a2974338a5be58e8cc2c26fb2f1ecdafca676e6cb0195d90c44069cb7354d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec9a2974338a5be58e8cc2c26fb2f1ecdafca676e6cb0195d90c44069cb7354d\": not found" Jul 2 08:35:31.977980 kubelet[2660]: E0702 08:35:31.977963 2660 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec9a2974338a5be58e8cc2c26fb2f1ecdafca676e6cb0195d90c44069cb7354d\": not found" containerID="ec9a2974338a5be58e8cc2c26fb2f1ecdafca676e6cb0195d90c44069cb7354d" Jul 2 08:35:31.978044 kubelet[2660]: I0702 08:35:31.978032 2660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec9a2974338a5be58e8cc2c26fb2f1ecdafca676e6cb0195d90c44069cb7354d"} err="failed to 
get container status \"ec9a2974338a5be58e8cc2c26fb2f1ecdafca676e6cb0195d90c44069cb7354d\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec9a2974338a5be58e8cc2c26fb2f1ecdafca676e6cb0195d90c44069cb7354d\": not found" Jul 2 08:35:31.978082 kubelet[2660]: I0702 08:35:31.978046 2660 scope.go:117] "RemoveContainer" containerID="0ba38299728ed9893452d218c4432507aa669529354d53027ca4131857d8ce60" Jul 2 08:35:31.978196 containerd[1551]: time="2024-07-02T08:35:31.978160684Z" level=error msg="ContainerStatus for \"0ba38299728ed9893452d218c4432507aa669529354d53027ca4131857d8ce60\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ba38299728ed9893452d218c4432507aa669529354d53027ca4131857d8ce60\": not found" Jul 2 08:35:31.978301 kubelet[2660]: E0702 08:35:31.978286 2660 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ba38299728ed9893452d218c4432507aa669529354d53027ca4131857d8ce60\": not found" containerID="0ba38299728ed9893452d218c4432507aa669529354d53027ca4131857d8ce60" Jul 2 08:35:31.978339 kubelet[2660]: I0702 08:35:31.978326 2660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ba38299728ed9893452d218c4432507aa669529354d53027ca4131857d8ce60"} err="failed to get container status \"0ba38299728ed9893452d218c4432507aa669529354d53027ca4131857d8ce60\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ba38299728ed9893452d218c4432507aa669529354d53027ca4131857d8ce60\": not found" Jul 2 08:35:31.978339 kubelet[2660]: I0702 08:35:31.978337 2660 scope.go:117] "RemoveContainer" containerID="653ac7463aac431c383ccf43593ca0f2421aa11bbe094ed252ac2594f5d0ec37" Jul 2 08:35:31.978460 containerd[1551]: time="2024-07-02T08:35:31.978437282Z" level=error msg="ContainerStatus for 
\"653ac7463aac431c383ccf43593ca0f2421aa11bbe094ed252ac2594f5d0ec37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"653ac7463aac431c383ccf43593ca0f2421aa11bbe094ed252ac2594f5d0ec37\": not found" Jul 2 08:35:31.978544 kubelet[2660]: E0702 08:35:31.978530 2660 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"653ac7463aac431c383ccf43593ca0f2421aa11bbe094ed252ac2594f5d0ec37\": not found" containerID="653ac7463aac431c383ccf43593ca0f2421aa11bbe094ed252ac2594f5d0ec37" Jul 2 08:35:31.978597 kubelet[2660]: I0702 08:35:31.978563 2660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"653ac7463aac431c383ccf43593ca0f2421aa11bbe094ed252ac2594f5d0ec37"} err="failed to get container status \"653ac7463aac431c383ccf43593ca0f2421aa11bbe094ed252ac2594f5d0ec37\": rpc error: code = NotFound desc = an error occurred when try to find container \"653ac7463aac431c383ccf43593ca0f2421aa11bbe094ed252ac2594f5d0ec37\": not found" Jul 2 08:35:31.978597 kubelet[2660]: I0702 08:35:31.978572 2660 scope.go:117] "RemoveContainer" containerID="5726aedc166873f37ff12de5ed96c18f26e0b3229c13ac316f1eddff380e99cd" Jul 2 08:35:31.978687 containerd[1551]: time="2024-07-02T08:35:31.978665801Z" level=error msg="ContainerStatus for \"5726aedc166873f37ff12de5ed96c18f26e0b3229c13ac316f1eddff380e99cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5726aedc166873f37ff12de5ed96c18f26e0b3229c13ac316f1eddff380e99cd\": not found" Jul 2 08:35:31.978765 kubelet[2660]: E0702 08:35:31.978752 2660 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5726aedc166873f37ff12de5ed96c18f26e0b3229c13ac316f1eddff380e99cd\": not found" 
containerID="5726aedc166873f37ff12de5ed96c18f26e0b3229c13ac316f1eddff380e99cd" Jul 2 08:35:31.978816 kubelet[2660]: I0702 08:35:31.978798 2660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5726aedc166873f37ff12de5ed96c18f26e0b3229c13ac316f1eddff380e99cd"} err="failed to get container status \"5726aedc166873f37ff12de5ed96c18f26e0b3229c13ac316f1eddff380e99cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"5726aedc166873f37ff12de5ed96c18f26e0b3229c13ac316f1eddff380e99cd\": not found" Jul 2 08:35:31.978846 kubelet[2660]: I0702 08:35:31.978819 2660 scope.go:117] "RemoveContainer" containerID="1a0bb13e6b2dfc4edb6f8fac0df1149c11fe4d021b907731335665bc6331f5f9" Jul 2 08:35:31.978931 containerd[1551]: time="2024-07-02T08:35:31.978910959Z" level=error msg="ContainerStatus for \"1a0bb13e6b2dfc4edb6f8fac0df1149c11fe4d021b907731335665bc6331f5f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1a0bb13e6b2dfc4edb6f8fac0df1149c11fe4d021b907731335665bc6331f5f9\": not found" Jul 2 08:35:31.978996 kubelet[2660]: E0702 08:35:31.978984 2660 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1a0bb13e6b2dfc4edb6f8fac0df1149c11fe4d021b907731335665bc6331f5f9\": not found" containerID="1a0bb13e6b2dfc4edb6f8fac0df1149c11fe4d021b907731335665bc6331f5f9" Jul 2 08:35:31.979028 kubelet[2660]: I0702 08:35:31.979006 2660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1a0bb13e6b2dfc4edb6f8fac0df1149c11fe4d021b907731335665bc6331f5f9"} err="failed to get container status \"1a0bb13e6b2dfc4edb6f8fac0df1149c11fe4d021b907731335665bc6331f5f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"1a0bb13e6b2dfc4edb6f8fac0df1149c11fe4d021b907731335665bc6331f5f9\": not found" Jul 2 
08:35:32.746635 kubelet[2660]: I0702 08:35:32.746599 2660 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2bbd4f2d-27ba-4a67-8040-c0cb821c9493" path="/var/lib/kubelet/pods/2bbd4f2d-27ba-4a67-8040-c0cb821c9493/volumes" Jul 2 08:35:32.747042 kubelet[2660]: I0702 08:35:32.746982 2660 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b6272e50-d13e-48cb-9719-4458a0972cd9" path="/var/lib/kubelet/pods/b6272e50-d13e-48cb-9719-4458a0972cd9/volumes" Jul 2 08:35:32.826414 sshd[4289]: pam_unix(sshd:session): session closed for user core Jul 2 08:35:32.837771 systemd[1]: Started sshd@22-10.0.0.141:22-10.0.0.1:37516.service - OpenSSH per-connection server daemon (10.0.0.1:37516). Jul 2 08:35:32.838122 systemd[1]: sshd@21-10.0.0.141:22-10.0.0.1:54304.service: Deactivated successfully. Jul 2 08:35:32.840624 systemd-logind[1534]: Session 22 logged out. Waiting for processes to exit. Jul 2 08:35:32.841490 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 08:35:32.845133 systemd-logind[1534]: Removed session 22. Jul 2 08:35:32.874809 sshd[4458]: Accepted publickey for core from 10.0.0.1 port 37516 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:35:32.875976 sshd[4458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:35:32.879892 systemd-logind[1534]: New session 23 of user core. Jul 2 08:35:32.889752 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 2 08:35:34.256444 sshd[4458]: pam_unix(sshd:session): session closed for user core Jul 2 08:35:34.267975 kubelet[2660]: I0702 08:35:34.267284 2660 topology_manager.go:215] "Topology Admit Handler" podUID="2a8705d8-1ed4-4006-bce9-70145e8d90db" podNamespace="kube-system" podName="cilium-vrx74" Jul 2 08:35:34.267975 kubelet[2660]: E0702 08:35:34.267352 2660 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6272e50-d13e-48cb-9719-4458a0972cd9" containerName="apply-sysctl-overwrites" Jul 2 08:35:34.267975 kubelet[2660]: E0702 08:35:34.267362 2660 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2bbd4f2d-27ba-4a67-8040-c0cb821c9493" containerName="cilium-operator" Jul 2 08:35:34.267975 kubelet[2660]: E0702 08:35:34.267369 2660 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6272e50-d13e-48cb-9719-4458a0972cd9" containerName="mount-bpf-fs" Jul 2 08:35:34.267975 kubelet[2660]: E0702 08:35:34.267377 2660 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6272e50-d13e-48cb-9719-4458a0972cd9" containerName="mount-cgroup" Jul 2 08:35:34.267975 kubelet[2660]: E0702 08:35:34.267384 2660 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6272e50-d13e-48cb-9719-4458a0972cd9" containerName="clean-cilium-state" Jul 2 08:35:34.267975 kubelet[2660]: E0702 08:35:34.267419 2660 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6272e50-d13e-48cb-9719-4458a0972cd9" containerName="cilium-agent" Jul 2 08:35:34.267975 kubelet[2660]: I0702 08:35:34.267446 2660 memory_manager.go:346] "RemoveStaleState removing state" podUID="b6272e50-d13e-48cb-9719-4458a0972cd9" containerName="cilium-agent" Jul 2 08:35:34.267975 kubelet[2660]: I0702 08:35:34.267614 2660 memory_manager.go:346] "RemoveStaleState removing state" podUID="2bbd4f2d-27ba-4a67-8040-c0cb821c9493" containerName="cilium-operator" Jul 2 08:35:34.268381 systemd[1]: Started sshd@23-10.0.0.141:22-10.0.0.1:37518.service - OpenSSH 
per-connection server daemon (10.0.0.1:37518). Jul 2 08:35:34.269934 systemd[1]: sshd@22-10.0.0.141:22-10.0.0.1:37516.service: Deactivated successfully. Jul 2 08:35:34.272847 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 08:35:34.293467 systemd-logind[1534]: Session 23 logged out. Waiting for processes to exit. Jul 2 08:35:34.297003 systemd-logind[1534]: Removed session 23. Jul 2 08:35:34.326859 sshd[4472]: Accepted publickey for core from 10.0.0.1 port 37518 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:35:34.328065 sshd[4472]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:35:34.331902 systemd-logind[1534]: New session 24 of user core. Jul 2 08:35:34.341762 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 08:35:34.372003 kubelet[2660]: I0702 08:35:34.371967 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a8705d8-1ed4-4006-bce9-70145e8d90db-hubble-tls\") pod \"cilium-vrx74\" (UID: \"2a8705d8-1ed4-4006-bce9-70145e8d90db\") " pod="kube-system/cilium-vrx74" Jul 2 08:35:34.372088 kubelet[2660]: I0702 08:35:34.372018 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2a8705d8-1ed4-4006-bce9-70145e8d90db-cilium-cgroup\") pod \"cilium-vrx74\" (UID: \"2a8705d8-1ed4-4006-bce9-70145e8d90db\") " pod="kube-system/cilium-vrx74" Jul 2 08:35:34.372088 kubelet[2660]: I0702 08:35:34.372062 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a8705d8-1ed4-4006-bce9-70145e8d90db-lib-modules\") pod \"cilium-vrx74\" (UID: \"2a8705d8-1ed4-4006-bce9-70145e8d90db\") " pod="kube-system/cilium-vrx74" Jul 2 08:35:34.372141 kubelet[2660]: I0702 08:35:34.372126 2660 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a8705d8-1ed4-4006-bce9-70145e8d90db-hostproc\") pod \"cilium-vrx74\" (UID: \"2a8705d8-1ed4-4006-bce9-70145e8d90db\") " pod="kube-system/cilium-vrx74" Jul 2 08:35:34.372234 kubelet[2660]: I0702 08:35:34.372203 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a8705d8-1ed4-4006-bce9-70145e8d90db-cilium-config-path\") pod \"cilium-vrx74\" (UID: \"2a8705d8-1ed4-4006-bce9-70145e8d90db\") " pod="kube-system/cilium-vrx74" Jul 2 08:35:34.372267 kubelet[2660]: I0702 08:35:34.372241 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a8705d8-1ed4-4006-bce9-70145e8d90db-host-proc-sys-kernel\") pod \"cilium-vrx74\" (UID: \"2a8705d8-1ed4-4006-bce9-70145e8d90db\") " pod="kube-system/cilium-vrx74" Jul 2 08:35:34.372267 kubelet[2660]: I0702 08:35:34.372263 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a8705d8-1ed4-4006-bce9-70145e8d90db-xtables-lock\") pod \"cilium-vrx74\" (UID: \"2a8705d8-1ed4-4006-bce9-70145e8d90db\") " pod="kube-system/cilium-vrx74" Jul 2 08:35:34.372317 kubelet[2660]: I0702 08:35:34.372282 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a8705d8-1ed4-4006-bce9-70145e8d90db-bpf-maps\") pod \"cilium-vrx74\" (UID: \"2a8705d8-1ed4-4006-bce9-70145e8d90db\") " pod="kube-system/cilium-vrx74" Jul 2 08:35:34.372347 kubelet[2660]: I0702 08:35:34.372335 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/2a8705d8-1ed4-4006-bce9-70145e8d90db-host-proc-sys-net\") pod \"cilium-vrx74\" (UID: \"2a8705d8-1ed4-4006-bce9-70145e8d90db\") " pod="kube-system/cilium-vrx74" Jul 2 08:35:34.372382 kubelet[2660]: I0702 08:35:34.372358 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhttj\" (UniqueName: \"kubernetes.io/projected/2a8705d8-1ed4-4006-bce9-70145e8d90db-kube-api-access-qhttj\") pod \"cilium-vrx74\" (UID: \"2a8705d8-1ed4-4006-bce9-70145e8d90db\") " pod="kube-system/cilium-vrx74" Jul 2 08:35:34.372382 kubelet[2660]: I0702 08:35:34.372380 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a8705d8-1ed4-4006-bce9-70145e8d90db-clustermesh-secrets\") pod \"cilium-vrx74\" (UID: \"2a8705d8-1ed4-4006-bce9-70145e8d90db\") " pod="kube-system/cilium-vrx74" Jul 2 08:35:34.372424 kubelet[2660]: I0702 08:35:34.372401 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a8705d8-1ed4-4006-bce9-70145e8d90db-cilium-run\") pod \"cilium-vrx74\" (UID: \"2a8705d8-1ed4-4006-bce9-70145e8d90db\") " pod="kube-system/cilium-vrx74" Jul 2 08:35:34.372424 kubelet[2660]: I0702 08:35:34.372420 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a8705d8-1ed4-4006-bce9-70145e8d90db-etc-cni-netd\") pod \"cilium-vrx74\" (UID: \"2a8705d8-1ed4-4006-bce9-70145e8d90db\") " pod="kube-system/cilium-vrx74" Jul 2 08:35:34.372482 kubelet[2660]: I0702 08:35:34.372462 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2a8705d8-1ed4-4006-bce9-70145e8d90db-cilium-ipsec-secrets\") pod \"cilium-vrx74\" (UID: 
\"2a8705d8-1ed4-4006-bce9-70145e8d90db\") " pod="kube-system/cilium-vrx74" Jul 2 08:35:34.372511 kubelet[2660]: I0702 08:35:34.372501 2660 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2a8705d8-1ed4-4006-bce9-70145e8d90db-cni-path\") pod \"cilium-vrx74\" (UID: \"2a8705d8-1ed4-4006-bce9-70145e8d90db\") " pod="kube-system/cilium-vrx74" Jul 2 08:35:34.390655 sshd[4472]: pam_unix(sshd:session): session closed for user core Jul 2 08:35:34.397759 systemd[1]: Started sshd@24-10.0.0.141:22-10.0.0.1:37534.service - OpenSSH per-connection server daemon (10.0.0.1:37534). Jul 2 08:35:34.398107 systemd[1]: sshd@23-10.0.0.141:22-10.0.0.1:37518.service: Deactivated successfully. Jul 2 08:35:34.400939 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 08:35:34.401232 systemd-logind[1534]: Session 24 logged out. Waiting for processes to exit. Jul 2 08:35:34.402325 systemd-logind[1534]: Removed session 24. Jul 2 08:35:34.428618 sshd[4481]: Accepted publickey for core from 10.0.0.1 port 37534 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:35:34.429749 sshd[4481]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:35:34.433091 systemd-logind[1534]: New session 25 of user core. Jul 2 08:35:34.440820 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jul 2 08:35:34.588702 kubelet[2660]: E0702 08:35:34.588174 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:35:34.589148 containerd[1551]: time="2024-07-02T08:35:34.589108434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vrx74,Uid:2a8705d8-1ed4-4006-bce9-70145e8d90db,Namespace:kube-system,Attempt:0,}" Jul 2 08:35:34.610402 containerd[1551]: time="2024-07-02T08:35:34.610326038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:35:34.610402 containerd[1551]: time="2024-07-02T08:35:34.610381798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:35:34.610402 containerd[1551]: time="2024-07-02T08:35:34.610400718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:35:34.610590 containerd[1551]: time="2024-07-02T08:35:34.610416517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:35:34.637671 containerd[1551]: time="2024-07-02T08:35:34.637633420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vrx74,Uid:2a8705d8-1ed4-4006-bce9-70145e8d90db,Namespace:kube-system,Attempt:0,} returns sandbox id \"add3adb73a0aaea35a6766a18a44d10ba9105690058350b8802a02fac4cb0af8\"" Jul 2 08:35:34.638330 kubelet[2660]: E0702 08:35:34.638303 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:35:34.640096 containerd[1551]: time="2024-07-02T08:35:34.640064372Z" level=info msg="CreateContainer within sandbox \"add3adb73a0aaea35a6766a18a44d10ba9105690058350b8802a02fac4cb0af8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:35:34.650013 containerd[1551]: time="2024-07-02T08:35:34.649922656Z" level=info msg="CreateContainer within sandbox \"add3adb73a0aaea35a6766a18a44d10ba9105690058350b8802a02fac4cb0af8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6b49acf25ebf3ee4b74d79c47c7cffd13f53f8f1d28c2db54fa080f4193c9ddc\"" Jul 2 08:35:34.651085 containerd[1551]: time="2024-07-02T08:35:34.651054212Z" level=info msg="StartContainer for \"6b49acf25ebf3ee4b74d79c47c7cffd13f53f8f1d28c2db54fa080f4193c9ddc\"" Jul 2 08:35:34.688437 containerd[1551]: time="2024-07-02T08:35:34.688383479Z" level=info msg="StartContainer for \"6b49acf25ebf3ee4b74d79c47c7cffd13f53f8f1d28c2db54fa080f4193c9ddc\" returns successfully" Jul 2 08:35:34.725480 containerd[1551]: time="2024-07-02T08:35:34.725400667Z" level=info msg="shim disconnected" id=6b49acf25ebf3ee4b74d79c47c7cffd13f53f8f1d28c2db54fa080f4193c9ddc namespace=k8s.io Jul 2 08:35:34.725480 containerd[1551]: time="2024-07-02T08:35:34.725469067Z" level=warning msg="cleaning up after shim disconnected" id=6b49acf25ebf3ee4b74d79c47c7cffd13f53f8f1d28c2db54fa080f4193c9ddc 
namespace=k8s.io Jul 2 08:35:34.725480 containerd[1551]: time="2024-07-02T08:35:34.725478547Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:35:34.951312 kubelet[2660]: E0702 08:35:34.951199 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:35:34.953871 containerd[1551]: time="2024-07-02T08:35:34.953207813Z" level=info msg="CreateContainer within sandbox \"add3adb73a0aaea35a6766a18a44d10ba9105690058350b8802a02fac4cb0af8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 08:35:34.961550 containerd[1551]: time="2024-07-02T08:35:34.961499624Z" level=info msg="CreateContainer within sandbox \"add3adb73a0aaea35a6766a18a44d10ba9105690058350b8802a02fac4cb0af8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cd878ddef2a4d2fa1c0ac7df4b135f8a0ce46f73cf2f5b407bbfe113037f3975\"" Jul 2 08:35:34.962734 containerd[1551]: time="2024-07-02T08:35:34.962689500Z" level=info msg="StartContainer for \"cd878ddef2a4d2fa1c0ac7df4b135f8a0ce46f73cf2f5b407bbfe113037f3975\"" Jul 2 08:35:35.006578 containerd[1551]: time="2024-07-02T08:35:35.006509269Z" level=info msg="StartContainer for \"cd878ddef2a4d2fa1c0ac7df4b135f8a0ce46f73cf2f5b407bbfe113037f3975\" returns successfully" Jul 2 08:35:35.031887 containerd[1551]: time="2024-07-02T08:35:35.031741366Z" level=info msg="shim disconnected" id=cd878ddef2a4d2fa1c0ac7df4b135f8a0ce46f73cf2f5b407bbfe113037f3975 namespace=k8s.io Jul 2 08:35:35.031887 containerd[1551]: time="2024-07-02T08:35:35.031790566Z" level=warning msg="cleaning up after shim disconnected" id=cd878ddef2a4d2fa1c0ac7df4b135f8a0ce46f73cf2f5b407bbfe113037f3975 namespace=k8s.io Jul 2 08:35:35.031887 containerd[1551]: time="2024-07-02T08:35:35.031801646Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:35:35.954817 kubelet[2660]: E0702 
08:35:35.954670 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:35:35.956109 containerd[1551]: time="2024-07-02T08:35:35.956075569Z" level=info msg="CreateContainer within sandbox \"add3adb73a0aaea35a6766a18a44d10ba9105690058350b8802a02fac4cb0af8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 08:35:35.974916 containerd[1551]: time="2024-07-02T08:35:35.974868642Z" level=info msg="CreateContainer within sandbox \"add3adb73a0aaea35a6766a18a44d10ba9105690058350b8802a02fac4cb0af8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"58dfea6721896305aca0a3c199d9899f32973874f4b281f1bf1b0405b24959c4\""
Jul 2 08:35:35.975368 containerd[1551]: time="2024-07-02T08:35:35.975293441Z" level=info msg="StartContainer for \"58dfea6721896305aca0a3c199d9899f32973874f4b281f1bf1b0405b24959c4\""
Jul 2 08:35:36.022843 containerd[1551]: time="2024-07-02T08:35:36.022520745Z" level=info msg="StartContainer for \"58dfea6721896305aca0a3c199d9899f32973874f4b281f1bf1b0405b24959c4\" returns successfully"
Jul 2 08:35:36.045035 containerd[1551]: time="2024-07-02T08:35:36.044979632Z" level=info msg="shim disconnected" id=58dfea6721896305aca0a3c199d9899f32973874f4b281f1bf1b0405b24959c4 namespace=k8s.io
Jul 2 08:35:36.045035 containerd[1551]: time="2024-07-02T08:35:36.045034392Z" level=warning msg="cleaning up after shim disconnected" id=58dfea6721896305aca0a3c199d9899f32973874f4b281f1bf1b0405b24959c4 namespace=k8s.io
Jul 2 08:35:36.045228 containerd[1551]: time="2024-07-02T08:35:36.045043312Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 08:35:36.053980 containerd[1551]: time="2024-07-02T08:35:36.053937579Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:35:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 2 08:35:36.478421 systemd[1]: run-containerd-runc-k8s.io-58dfea6721896305aca0a3c199d9899f32973874f4b281f1bf1b0405b24959c4-runc.TEVpSy.mount: Deactivated successfully.
Jul 2 08:35:36.478580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58dfea6721896305aca0a3c199d9899f32973874f4b281f1bf1b0405b24959c4-rootfs.mount: Deactivated successfully.
Jul 2 08:35:36.839810 kubelet[2660]: E0702 08:35:36.839747 2660 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 08:35:36.960289 kubelet[2660]: E0702 08:35:36.960259 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:35:36.962672 containerd[1551]: time="2024-07-02T08:35:36.962535238Z" level=info msg="CreateContainer within sandbox \"add3adb73a0aaea35a6766a18a44d10ba9105690058350b8802a02fac4cb0af8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 08:35:36.982435 containerd[1551]: time="2024-07-02T08:35:36.982384689Z" level=info msg="CreateContainer within sandbox \"add3adb73a0aaea35a6766a18a44d10ba9105690058350b8802a02fac4cb0af8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cd047aeac925e82fd062e0393ac3cf485e77a227c6a393bd155e225897f17267\""
Jul 2 08:35:36.984933 containerd[1551]: time="2024-07-02T08:35:36.983732247Z" level=info msg="StartContainer for \"cd047aeac925e82fd062e0393ac3cf485e77a227c6a393bd155e225897f17267\""
Jul 2 08:35:37.036781 containerd[1551]: time="2024-07-02T08:35:37.036650725Z" level=info msg="StartContainer for \"cd047aeac925e82fd062e0393ac3cf485e77a227c6a393bd155e225897f17267\" returns successfully"
Jul 2 08:35:37.052681 containerd[1551]: time="2024-07-02T08:35:37.052607757Z" level=info msg="shim disconnected" id=cd047aeac925e82fd062e0393ac3cf485e77a227c6a393bd155e225897f17267 namespace=k8s.io
Jul 2 08:35:37.052864 containerd[1551]: time="2024-07-02T08:35:37.052678477Z" level=warning msg="cleaning up after shim disconnected" id=cd047aeac925e82fd062e0393ac3cf485e77a227c6a393bd155e225897f17267 namespace=k8s.io
Jul 2 08:35:37.052864 containerd[1551]: time="2024-07-02T08:35:37.052718357Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 08:35:37.478479 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd047aeac925e82fd062e0393ac3cf485e77a227c6a393bd155e225897f17267-rootfs.mount: Deactivated successfully.
Jul 2 08:35:37.778627 kubelet[2660]: I0702 08:35:37.778525 2660 setters.go:552] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T08:35:37Z","lastTransitionTime":"2024-07-02T08:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 2 08:35:37.964990 kubelet[2660]: E0702 08:35:37.964824 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:35:37.970123 containerd[1551]: time="2024-07-02T08:35:37.970069720Z" level=info msg="CreateContainer within sandbox \"add3adb73a0aaea35a6766a18a44d10ba9105690058350b8802a02fac4cb0af8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 08:35:37.983839 containerd[1551]: time="2024-07-02T08:35:37.983742674Z" level=info msg="CreateContainer within sandbox \"add3adb73a0aaea35a6766a18a44d10ba9105690058350b8802a02fac4cb0af8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"71a87270dd0e0b936421abaac3a6d89ad68aa0fa0abbcb3b34073f1a58f26dbc\""
Jul 2 08:35:37.984299 containerd[1551]: time="2024-07-02T08:35:37.984266033Z" level=info msg="StartContainer for \"71a87270dd0e0b936421abaac3a6d89ad68aa0fa0abbcb3b34073f1a58f26dbc\""
Jul 2 08:35:38.030777 containerd[1551]: time="2024-07-02T08:35:38.030730760Z" level=info msg="StartContainer for \"71a87270dd0e0b936421abaac3a6d89ad68aa0fa0abbcb3b34073f1a58f26dbc\" returns successfully"
Jul 2 08:35:38.288745 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 2 08:35:38.744811 kubelet[2660]: E0702 08:35:38.744782 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:35:38.968424 kubelet[2660]: E0702 08:35:38.968393 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:35:38.983526 kubelet[2660]: I0702 08:35:38.983494 2660 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vrx74" podStartSLOduration=4.983446228 podCreationTimestamp="2024-07-02 08:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:35:38.983369148 +0000 UTC m=+82.324136179" watchObservedRunningTime="2024-07-02 08:35:38.983446228 +0000 UTC m=+82.324213259"
Jul 2 08:35:40.589814 kubelet[2660]: E0702 08:35:40.589746 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:35:41.032998 systemd-networkd[1234]: lxc_health: Link UP
Jul 2 08:35:41.038765 systemd-networkd[1234]: lxc_health: Gained carrier
Jul 2 08:35:41.745069 kubelet[2660]: E0702 08:35:41.745026 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:35:42.248677 systemd-networkd[1234]: lxc_health: Gained IPv6LL
Jul 2 08:35:42.590770 kubelet[2660]: E0702 08:35:42.590720 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:35:42.976888 kubelet[2660]: E0702 08:35:42.976539 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:35:43.978234 kubelet[2660]: E0702 08:35:43.977982 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:35:49.243850 sshd[4481]: pam_unix(sshd:session): session closed for user core
Jul 2 08:35:49.247487 systemd[1]: sshd@24-10.0.0.141:22-10.0.0.1:37534.service: Deactivated successfully.
Jul 2 08:35:49.250071 systemd-logind[1534]: Session 25 logged out. Waiting for processes to exit.
Jul 2 08:35:49.250245 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 08:35:49.251728 systemd-logind[1534]: Removed session 25.
Jul 2 08:35:49.744966 kubelet[2660]: E0702 08:35:49.744929 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"