Jul 2 08:27:40.905213 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 2 08:27:40.905235 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024
Jul 2 08:27:40.905246 kernel: KASLR enabled
Jul 2 08:27:40.905251 kernel: efi: EFI v2.7 by EDK II
Jul 2 08:27:40.905257 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb8fd018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 2 08:27:40.905263 kernel: random: crng init done
Jul 2 08:27:40.905270 kernel: ACPI: Early table checksum verification disabled
Jul 2 08:27:40.905276 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 2 08:27:40.905282 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 2 08:27:40.905290 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:27:40.905296 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:27:40.905302 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:27:40.905308 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:27:40.905314 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:27:40.905321 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:27:40.905329 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:27:40.905335 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:27:40.905341 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:27:40.905348 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 2 08:27:40.905354 kernel: NUMA: Failed to initialise from firmware
Jul 2 08:27:40.905360 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 08:27:40.905367 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jul 2 08:27:40.905373 kernel: Zone ranges:
Jul 2 08:27:40.905379 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 08:27:40.905385 kernel: DMA32 empty
Jul 2 08:27:40.905393 kernel: Normal empty
Jul 2 08:27:40.905399 kernel: Movable zone start for each node
Jul 2 08:27:40.905405 kernel: Early memory node ranges
Jul 2 08:27:40.905411 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 2 08:27:40.905418 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 2 08:27:40.905424 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 2 08:27:40.905430 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 2 08:27:40.905437 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 2 08:27:40.905443 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 2 08:27:40.905449 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 2 08:27:40.905456 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 08:27:40.905462 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 2 08:27:40.905470 kernel: psci: probing for conduit method from ACPI.
Jul 2 08:27:40.905476 kernel: psci: PSCIv1.1 detected in firmware.
Jul 2 08:27:40.905482 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 08:27:40.905491 kernel: psci: Trusted OS migration not required
Jul 2 08:27:40.905498 kernel: psci: SMC Calling Convention v1.1
Jul 2 08:27:40.905505 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 2 08:27:40.905513 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Jul 2 08:27:40.905520 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Jul 2 08:27:40.905526 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 2 08:27:40.905533 kernel: Detected PIPT I-cache on CPU0
Jul 2 08:27:40.905540 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 08:27:40.905546 kernel: CPU features: detected: Hardware dirty bit management
Jul 2 08:27:40.905553 kernel: CPU features: detected: Spectre-v4
Jul 2 08:27:40.905560 kernel: CPU features: detected: Spectre-BHB
Jul 2 08:27:40.905566 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 2 08:27:40.905573 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 2 08:27:40.905581 kernel: CPU features: detected: ARM erratum 1418040
Jul 2 08:27:40.905588 kernel: alternatives: applying boot alternatives
Jul 2 08:27:40.905595 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=19e11d11f09b621c4c7d739b39b57f4bac8caa3f9723d7ceb0e9d7c7445769b7
Jul 2 08:27:40.905603 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 08:27:40.905609 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 08:27:40.905616 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 08:27:40.905623 kernel: Fallback order for Node 0: 0
Jul 2 08:27:40.905630 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 2 08:27:40.905650 kernel: Policy zone: DMA
Jul 2 08:27:40.905656 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 08:27:40.905663 kernel: software IO TLB: area num 4.
Jul 2 08:27:40.905671 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 2 08:27:40.905678 kernel: Memory: 2386852K/2572288K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 185436K reserved, 0K cma-reserved)
Jul 2 08:27:40.905685 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 2 08:27:40.905692 kernel: trace event string verifier disabled
Jul 2 08:27:40.905698 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 08:27:40.905705 kernel: rcu: RCU event tracing is enabled.
Jul 2 08:27:40.905712 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 2 08:27:40.905719 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 08:27:40.905726 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 08:27:40.905733 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 08:27:40.905740 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 2 08:27:40.905747 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 08:27:40.905755 kernel: GICv3: 256 SPIs implemented
Jul 2 08:27:40.905761 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 08:27:40.905768 kernel: Root IRQ handler: gic_handle_irq
Jul 2 08:27:40.905775 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 2 08:27:40.905781 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 2 08:27:40.905788 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 2 08:27:40.905795 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Jul 2 08:27:40.905802 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Jul 2 08:27:40.905808 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 2 08:27:40.905815 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 2 08:27:40.905822 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 08:27:40.905830 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 08:27:40.905837 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 2 08:27:40.905844 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 2 08:27:40.905851 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 2 08:27:40.905857 kernel: arm-pv: using stolen time PV
Jul 2 08:27:40.905864 kernel: Console: colour dummy device 80x25
Jul 2 08:27:40.905871 kernel: ACPI: Core revision 20230628
Jul 2 08:27:40.905878 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 2 08:27:40.905885 kernel: pid_max: default: 32768 minimum: 301
Jul 2 08:27:40.905892 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 08:27:40.905900 kernel: SELinux: Initializing.
Jul 2 08:27:40.905907 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 08:27:40.905914 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 08:27:40.905921 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 08:27:40.905928 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 08:27:40.905935 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 08:27:40.905942 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 08:27:40.905988 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 2 08:27:40.905997 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 2 08:27:40.906006 kernel: Remapping and enabling EFI services.
Jul 2 08:27:40.906013 kernel: smp: Bringing up secondary CPUs ...
Jul 2 08:27:40.906020 kernel: Detected PIPT I-cache on CPU1
Jul 2 08:27:40.906027 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 2 08:27:40.906034 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 2 08:27:40.906040 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 08:27:40.906047 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 2 08:27:40.906054 kernel: Detected PIPT I-cache on CPU2
Jul 2 08:27:40.906061 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 2 08:27:40.906068 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 2 08:27:40.906077 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 08:27:40.906084 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 2 08:27:40.906096 kernel: Detected PIPT I-cache on CPU3
Jul 2 08:27:40.906105 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 2 08:27:40.906112 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 2 08:27:40.906120 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 08:27:40.906127 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 2 08:27:40.906134 kernel: smp: Brought up 1 node, 4 CPUs
Jul 2 08:27:40.906141 kernel: SMP: Total of 4 processors activated.
Jul 2 08:27:40.906150 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 08:27:40.906172 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 2 08:27:40.906181 kernel: CPU features: detected: Common not Private translations
Jul 2 08:27:40.906189 kernel: CPU features: detected: CRC32 instructions
Jul 2 08:27:40.906196 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 2 08:27:40.906204 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 2 08:27:40.906211 kernel: CPU features: detected: LSE atomic instructions
Jul 2 08:27:40.906219 kernel: CPU features: detected: Privileged Access Never
Jul 2 08:27:40.906229 kernel: CPU features: detected: RAS Extension Support
Jul 2 08:27:40.906237 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 2 08:27:40.906244 kernel: CPU: All CPU(s) started at EL1
Jul 2 08:27:40.906254 kernel: alternatives: applying system-wide alternatives
Jul 2 08:27:40.906261 kernel: devtmpfs: initialized
Jul 2 08:27:40.906269 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 08:27:40.906276 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 2 08:27:40.906283 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 08:27:40.906291 kernel: SMBIOS 3.0.0 present.
Jul 2 08:27:40.906300 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 2 08:27:40.906307 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 08:27:40.906314 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 08:27:40.906321 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 08:27:40.906329 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 08:27:40.906336 kernel: audit: initializing netlink subsys (disabled)
Jul 2 08:27:40.906344 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
Jul 2 08:27:40.906351 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 08:27:40.906358 kernel: cpuidle: using governor menu
Jul 2 08:27:40.906367 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 08:27:40.906375 kernel: ASID allocator initialised with 32768 entries
Jul 2 08:27:40.906382 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 08:27:40.906392 kernel: Serial: AMBA PL011 UART driver
Jul 2 08:27:40.906399 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 2 08:27:40.906407 kernel: Modules: 0 pages in range for non-PLT usage
Jul 2 08:27:40.906416 kernel: Modules: 509120 pages in range for PLT usage
Jul 2 08:27:40.906425 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 08:27:40.906435 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 08:27:40.906444 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 08:27:40.906451 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 2 08:27:40.906459 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 08:27:40.906466 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 08:27:40.906474 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 08:27:40.906481 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 2 08:27:40.906490 kernel: ACPI: Added _OSI(Module Device)
Jul 2 08:27:40.906497 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 08:27:40.906504 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 08:27:40.906513 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 08:27:40.906521 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 08:27:40.906529 kernel: ACPI: Interpreter enabled
Jul 2 08:27:40.906540 kernel: ACPI: Using GIC for interrupt routing
Jul 2 08:27:40.906548 kernel: ACPI: MCFG table detected, 1 entries
Jul 2 08:27:40.906560 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 2 08:27:40.906567 kernel: printk: console [ttyAMA0] enabled
Jul 2 08:27:40.906574 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 08:27:40.906723 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 08:27:40.906804 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 2 08:27:40.906873 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 2 08:27:40.906939 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 2 08:27:40.907005 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 2 08:27:40.907015 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 2 08:27:40.907023 kernel: PCI host bridge to bus 0000:00
Jul 2 08:27:40.907094 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 2 08:27:40.907274 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 2 08:27:40.907358 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 2 08:27:40.907420 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 08:27:40.907501 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 2 08:27:40.907579 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 08:27:40.907648 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 2 08:27:40.907722 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 2 08:27:40.907790 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 08:27:40.907859 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 08:27:40.907929 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 2 08:27:40.907996 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 2 08:27:40.908057 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 2 08:27:40.908117 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 2 08:27:40.908203 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 2 08:27:40.908215 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 2 08:27:40.908223 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 2 08:27:40.908230 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 2 08:27:40.908238 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 2 08:27:40.908246 kernel: iommu: Default domain type: Translated
Jul 2 08:27:40.908253 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 08:27:40.908261 kernel: efivars: Registered efivars operations
Jul 2 08:27:40.908268 kernel: vgaarb: loaded
Jul 2 08:27:40.908278 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 08:27:40.908286 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 08:27:40.908293 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 08:27:40.908301 kernel: pnp: PnP ACPI init
Jul 2 08:27:40.908379 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 2 08:27:40.908390 kernel: pnp: PnP ACPI: found 1 devices
Jul 2 08:27:40.908397 kernel: NET: Registered PF_INET protocol family
Jul 2 08:27:40.908405 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 08:27:40.908415 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 08:27:40.908423 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 08:27:40.908430 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 08:27:40.908438 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 2 08:27:40.908445 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 08:27:40.908486 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 08:27:40.908495 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 08:27:40.908502 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 08:27:40.908509 kernel: PCI: CLS 0 bytes, default 64
Jul 2 08:27:40.908519 kernel: kvm [1]: HYP mode not available
Jul 2 08:27:40.908526 kernel: Initialise system trusted keyrings
Jul 2 08:27:40.908534 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 08:27:40.908541 kernel: Key type asymmetric registered
Jul 2 08:27:40.908548 kernel: Asymmetric key parser 'x509' registered
Jul 2 08:27:40.908555 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 2 08:27:40.908562 kernel: io scheduler mq-deadline registered
Jul 2 08:27:40.908570 kernel: io scheduler kyber registered
Jul 2 08:27:40.908577 kernel: io scheduler bfq registered
Jul 2 08:27:40.908586 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 2 08:27:40.908593 kernel: ACPI: button: Power Button [PWRB]
Jul 2 08:27:40.908601 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 2 08:27:40.908681 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 2 08:27:40.908691 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 08:27:40.908699 kernel: thunder_xcv, ver 1.0
Jul 2 08:27:40.908706 kernel: thunder_bgx, ver 1.0
Jul 2 08:27:40.908713 kernel: nicpf, ver 1.0
Jul 2 08:27:40.908720 kernel: nicvf, ver 1.0
Jul 2 08:27:40.908801 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 08:27:40.908863 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T08:27:40 UTC (1719908860)
Jul 2 08:27:40.908873 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 08:27:40.908880 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 2 08:27:40.908888 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 2 08:27:40.908895 kernel: watchdog: Hard watchdog permanently disabled
Jul 2 08:27:40.908902 kernel: NET: Registered PF_INET6 protocol family
Jul 2 08:27:40.908909 kernel: Segment Routing with IPv6
Jul 2 08:27:40.908919 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 08:27:40.908926 kernel: NET: Registered PF_PACKET protocol family
Jul 2 08:27:40.908934 kernel: Key type dns_resolver registered
Jul 2 08:27:40.908941 kernel: registered taskstats version 1
Jul 2 08:27:40.908948 kernel: Loading compiled-in X.509 certificates
Jul 2 08:27:40.908955 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf'
Jul 2 08:27:40.908963 kernel: Key type .fscrypt registered
Jul 2 08:27:40.908970 kernel: Key type fscrypt-provisioning registered
Jul 2 08:27:40.908978 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 08:27:40.908986 kernel: ima: Allocated hash algorithm: sha1
Jul 2 08:27:40.908993 kernel: ima: No architecture policies found
Jul 2 08:27:40.909001 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 08:27:40.909008 kernel: clk: Disabling unused clocks
Jul 2 08:27:40.909015 kernel: Freeing unused kernel memory: 39040K
Jul 2 08:27:40.909022 kernel: Run /init as init process
Jul 2 08:27:40.909030 kernel: with arguments:
Jul 2 08:27:40.909037 kernel: /init
Jul 2 08:27:40.909044 kernel: with environment:
Jul 2 08:27:40.909052 kernel: HOME=/
Jul 2 08:27:40.909059 kernel: TERM=linux
Jul 2 08:27:40.909066 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 08:27:40.909075 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 08:27:40.909085 systemd[1]: Detected virtualization kvm.
Jul 2 08:27:40.909093 systemd[1]: Detected architecture arm64.
Jul 2 08:27:40.909100 systemd[1]: Running in initrd.
Jul 2 08:27:40.909109 systemd[1]: No hostname configured, using default hostname.
Jul 2 08:27:40.909117 systemd[1]: Hostname set to .
Jul 2 08:27:40.909125 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 08:27:40.909132 systemd[1]: Queued start job for default target initrd.target.
Jul 2 08:27:40.909140 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 08:27:40.909148 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 08:27:40.909171 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 08:27:40.909182 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 08:27:40.909193 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 08:27:40.909201 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 08:27:40.909210 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 08:27:40.909219 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 08:27:40.909227 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 08:27:40.909234 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 08:27:40.909243 systemd[1]: Reached target paths.target - Path Units.
Jul 2 08:27:40.909252 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 08:27:40.909259 systemd[1]: Reached target swap.target - Swaps.
Jul 2 08:27:40.909267 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 08:27:40.909275 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 08:27:40.909283 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 08:27:40.909291 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 08:27:40.909299 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 08:27:40.909306 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 08:27:40.909314 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 08:27:40.909324 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 08:27:40.909332 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 08:27:40.909340 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 08:27:40.909348 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 08:27:40.909356 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 08:27:40.909364 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 08:27:40.909371 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 08:27:40.909379 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 08:27:40.909389 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 08:27:40.909397 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 08:27:40.909405 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 08:27:40.909412 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 08:27:40.909421 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 08:27:40.909430 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 08:27:40.909439 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 08:27:40.909463 systemd-journald[237]: Collecting audit messages is disabled.
Jul 2 08:27:40.909484 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 08:27:40.909493 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 08:27:40.909501 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 08:27:40.909510 systemd-journald[237]: Journal started
Jul 2 08:27:40.909528 systemd-journald[237]: Runtime Journal (/run/log/journal/4cb372141474450a863589f09feb0383) is 5.9M, max 47.3M, 41.4M free.
Jul 2 08:27:40.891198 systemd-modules-load[238]: Inserted module 'overlay'
Jul 2 08:27:40.913242 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 08:27:40.913416 kernel: Bridge firewalling registered
Jul 2 08:27:40.913296 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jul 2 08:27:40.914357 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 08:27:40.923313 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 08:27:40.924561 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 08:27:40.925932 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 08:27:40.932190 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 08:27:40.934418 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 08:27:40.935288 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 08:27:40.938023 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 08:27:40.941480 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 08:27:40.952279 dracut-cmdline[275]: dracut-dracut-053
Jul 2 08:27:40.954808 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=19e11d11f09b621c4c7d739b39b57f4bac8caa3f9723d7ceb0e9d7c7445769b7
Jul 2 08:27:40.971577 systemd-resolved[277]: Positive Trust Anchors:
Jul 2 08:27:40.971594 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 08:27:40.971624 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 08:27:40.976144 systemd-resolved[277]: Defaulting to hostname 'linux'.
Jul 2 08:27:40.977576 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 08:27:40.980688 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 08:27:41.033209 kernel: SCSI subsystem initialized
Jul 2 08:27:41.037179 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 08:27:41.045192 kernel: iscsi: registered transport (tcp)
Jul 2 08:27:41.057403 kernel: iscsi: registered transport (qla4xxx)
Jul 2 08:27:41.057430 kernel: QLogic iSCSI HBA Driver
Jul 2 08:27:41.100628 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 08:27:41.110323 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 08:27:41.129556 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 08:27:41.129613 kernel: device-mapper: uevent: version 1.0.3 Jul 2 08:27:41.130366 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 2 08:27:41.178195 kernel: raid6: neonx8 gen() 14813 MB/s Jul 2 08:27:41.195209 kernel: raid6: neonx4 gen() 14085 MB/s Jul 2 08:27:41.212202 kernel: raid6: neonx2 gen() 12062 MB/s Jul 2 08:27:41.229201 kernel: raid6: neonx1 gen() 9912 MB/s Jul 2 08:27:41.246188 kernel: raid6: int64x8 gen() 6807 MB/s Jul 2 08:27:41.263193 kernel: raid6: int64x4 gen() 7354 MB/s Jul 2 08:27:41.280182 kernel: raid6: int64x2 gen() 6128 MB/s Jul 2 08:27:41.297187 kernel: raid6: int64x1 gen() 5058 MB/s Jul 2 08:27:41.297211 kernel: raid6: using algorithm neonx8 gen() 14813 MB/s Jul 2 08:27:41.314201 kernel: raid6: .... xor() 11939 MB/s, rmw enabled Jul 2 08:27:41.314227 kernel: raid6: using neon recovery algorithm Jul 2 08:27:41.322184 kernel: xor: measuring software checksum speed Jul 2 08:27:41.323453 kernel: 8regs : 19859 MB/sec Jul 2 08:27:41.323466 kernel: 32regs : 19706 MB/sec Jul 2 08:27:41.324632 kernel: arm64_neon : 27297 MB/sec Jul 2 08:27:41.324645 kernel: xor: using function: arm64_neon (27297 MB/sec) Jul 2 08:27:41.380209 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 2 08:27:41.394246 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 08:27:41.407356 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 08:27:41.418475 systemd-udevd[460]: Using default interface naming scheme 'v255'. Jul 2 08:27:41.421547 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 08:27:41.430354 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 08:27:41.441500 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation Jul 2 08:27:41.467017 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 2 08:27:41.476322 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 08:27:41.517871 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 08:27:41.525334 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 08:27:41.538346 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 08:27:41.539730 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 08:27:41.541626 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 08:27:41.543461 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 08:27:41.549352 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 08:27:41.559211 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 08:27:41.574601 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 2 08:27:41.581278 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 2 08:27:41.581387 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 08:27:41.581398 kernel: GPT:9289727 != 19775487
Jul 2 08:27:41.581408 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 08:27:41.581417 kernel: GPT:9289727 != 19775487
Jul 2 08:27:41.581426 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 08:27:41.581438 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 08:27:41.576463 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 08:27:41.576574 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 08:27:41.578633 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 08:27:41.580487 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 08:27:41.580631 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 08:27:41.582209 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 08:27:41.593387 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 08:27:41.603222 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (522)
Jul 2 08:27:41.606430 kernel: BTRFS: device fsid 9b0eb482-485a-4aff-8de4-e09ff146eadf devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (509)
Jul 2 08:27:41.606352 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 2 08:27:41.609995 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 08:27:41.614749 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 2 08:27:41.624078 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 08:27:41.627644 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 2 08:27:41.628681 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 2 08:27:41.642364 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 08:27:41.643851 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 08:27:41.652193 disk-uuid[552]: Primary Header is updated.
Jul 2 08:27:41.652193 disk-uuid[552]: Secondary Entries is updated.
Jul 2 08:27:41.652193 disk-uuid[552]: Secondary Header is updated.
Jul 2 08:27:41.659182 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 08:27:41.665221 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 08:27:42.669193 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 08:27:42.669250 disk-uuid[553]: The operation has completed successfully.
Jul 2 08:27:42.691311 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 08:27:42.691418 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 08:27:42.709308 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 08:27:42.712448 sh[575]: Success
Jul 2 08:27:42.734624 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 2 08:27:42.773615 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 08:27:42.775080 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 08:27:42.775892 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 08:27:42.785775 kernel: BTRFS info (device dm-0): first mount of filesystem 9b0eb482-485a-4aff-8de4-e09ff146eadf
Jul 2 08:27:42.785826 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 2 08:27:42.785847 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 08:27:42.787564 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 08:27:42.787584 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 08:27:42.790611 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 08:27:42.791675 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 08:27:42.801302 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 08:27:42.802584 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 08:27:42.808853 kernel: BTRFS info (device vda6): first mount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483
Jul 2 08:27:42.808889 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 08:27:42.809394 kernel: BTRFS info (device vda6): using free space tree
Jul 2 08:27:42.811183 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 08:27:42.817728 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 08:27:42.819287 kernel: BTRFS info (device vda6): last unmount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483
Jul 2 08:27:42.823754 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 08:27:42.831314 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 08:27:42.898275 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 08:27:42.906342 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 08:27:42.933762 systemd-networkd[768]: lo: Link UP
Jul 2 08:27:42.933773 systemd-networkd[768]: lo: Gained carrier
Jul 2 08:27:42.936027 systemd-networkd[768]: Enumeration completed
Jul 2 08:27:42.936151 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 08:27:42.936540 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 08:27:42.936544 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 08:27:42.940378 ignition[663]: Ignition 2.18.0
Jul 2 08:27:42.937286 systemd-networkd[768]: eth0: Link UP
Jul 2 08:27:42.940384 ignition[663]: Stage: fetch-offline
Jul 2 08:27:42.937290 systemd-networkd[768]: eth0: Gained carrier
Jul 2 08:27:42.940415 ignition[663]: no configs at "/usr/lib/ignition/base.d"
Jul 2 08:27:42.937296 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 08:27:42.940422 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 08:27:42.937674 systemd[1]: Reached target network.target - Network.
Jul 2 08:27:42.940502 ignition[663]: parsed url from cmdline: ""
Jul 2 08:27:42.952205 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.104/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 08:27:42.940505 ignition[663]: no config URL provided
Jul 2 08:27:42.940509 ignition[663]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 08:27:42.940516 ignition[663]: no config at "/usr/lib/ignition/user.ign"
Jul 2 08:27:42.940545 ignition[663]: op(1): [started] loading QEMU firmware config module
Jul 2 08:27:42.940549 ignition[663]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 2 08:27:42.954476 ignition[663]: op(1): [finished] loading QEMU firmware config module
Jul 2 08:27:42.954495 ignition[663]: QEMU firmware config was not found. Ignoring...
Jul 2 08:27:42.994071 ignition[663]: parsing config with SHA512: cb77f95c04f5e759fc90c076ce8de3873e8e0c7dce88564ce4f4c33004e46d9c18b205d881b9b470a15c334c7ca815b0782232f99e9a75665da09f376bde50a3
Jul 2 08:27:42.998404 unknown[663]: fetched base config from "system"
Jul 2 08:27:42.998416 unknown[663]: fetched user config from "qemu"
Jul 2 08:27:42.998857 ignition[663]: fetch-offline: fetch-offline passed
Jul 2 08:27:42.998913 ignition[663]: Ignition finished successfully
Jul 2 08:27:43.000750 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 08:27:43.002051 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 2 08:27:43.007340 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 08:27:43.018045 ignition[773]: Ignition 2.18.0
Jul 2 08:27:43.018054 ignition[773]: Stage: kargs
Jul 2 08:27:43.018249 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jul 2 08:27:43.018259 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 08:27:43.021655 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 08:27:43.019119 ignition[773]: kargs: kargs passed
Jul 2 08:27:43.019203 ignition[773]: Ignition finished successfully
Jul 2 08:27:43.029326 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 08:27:43.039886 ignition[782]: Ignition 2.18.0
Jul 2 08:27:43.039895 ignition[782]: Stage: disks
Jul 2 08:27:43.040038 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Jul 2 08:27:43.042840 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 08:27:43.040047 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 08:27:43.043962 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 08:27:43.040916 ignition[782]: disks: disks passed
Jul 2 08:27:43.045436 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 08:27:43.040959 ignition[782]: Ignition finished successfully
Jul 2 08:27:43.047081 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 08:27:43.048934 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 08:27:43.050218 systemd[1]: Reached target basic.target - Basic System.
Jul 2 08:27:43.057311 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 08:27:43.068573 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 08:27:43.073011 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 08:27:43.079285 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 08:27:43.124196 kernel: EXT4-fs (vda9): mounted filesystem 9aacfbff-cef8-4758-afb5-6310e7c6c5e6 r/w with ordered data mode. Quota mode: none.
Jul 2 08:27:43.124258 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 08:27:43.125400 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 08:27:43.135256 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 08:27:43.137612 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 08:27:43.138662 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 2 08:27:43.138707 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 08:27:43.138731 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 08:27:43.145680 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 08:27:43.147306 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 08:27:43.151955 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801)
Jul 2 08:27:43.151989 kernel: BTRFS info (device vda6): first mount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483
Jul 2 08:27:43.152719 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 08:27:43.153258 kernel: BTRFS info (device vda6): using free space tree
Jul 2 08:27:43.155230 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 08:27:43.157143 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 08:27:43.210985 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 08:27:43.215371 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Jul 2 08:27:43.219274 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 08:27:43.223339 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 08:27:43.303018 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 08:27:43.315313 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 08:27:43.316876 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 08:27:43.322193 kernel: BTRFS info (device vda6): last unmount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483
Jul 2 08:27:43.343495 ignition[914]: INFO : Ignition 2.18.0
Jul 2 08:27:43.343495 ignition[914]: INFO : Stage: mount
Jul 2 08:27:43.345072 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 08:27:43.345072 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 08:27:43.345072 ignition[914]: INFO : mount: mount passed
Jul 2 08:27:43.345072 ignition[914]: INFO : Ignition finished successfully
Jul 2 08:27:43.345343 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 08:27:43.347264 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 08:27:43.355278 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 08:27:43.785243 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 08:27:43.795370 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 08:27:43.801182 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (928)
Jul 2 08:27:43.802791 kernel: BTRFS info (device vda6): first mount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483
Jul 2 08:27:43.802812 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 08:27:43.802823 kernel: BTRFS info (device vda6): using free space tree
Jul 2 08:27:43.805191 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 08:27:43.806488 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 08:27:43.822747 ignition[945]: INFO : Ignition 2.18.0
Jul 2 08:27:43.822747 ignition[945]: INFO : Stage: files
Jul 2 08:27:43.823976 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 08:27:43.823976 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 08:27:43.823976 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 08:27:43.826651 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 08:27:43.826651 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 08:27:43.826651 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 08:27:43.830005 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 08:27:43.830005 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 08:27:43.830005 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 08:27:43.830005 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 2 08:27:43.827034 unknown[945]: wrote ssh authorized keys file for user: core
Jul 2 08:27:43.861529 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 08:27:43.899646 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 08:27:43.899646 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 08:27:43.902613 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 2 08:27:44.208363 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 08:27:44.265940 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 08:27:44.267507 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 08:27:44.267507 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 08:27:44.267507 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 08:27:44.267507 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 08:27:44.267507 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 08:27:44.267507 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 08:27:44.267507 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 08:27:44.267507 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 08:27:44.267507 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 08:27:44.267507 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 08:27:44.267507 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 08:27:44.267507 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 08:27:44.267507 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 08:27:44.267507 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1
Jul 2 08:27:44.494971 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 2 08:27:44.687104 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 08:27:44.687104 ignition[945]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 2 08:27:44.689805 ignition[945]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 08:27:44.689805 ignition[945]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 08:27:44.689805 ignition[945]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 2 08:27:44.689805 ignition[945]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 2 08:27:44.689805 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 08:27:44.689805 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 08:27:44.689805 ignition[945]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 2 08:27:44.689805 ignition[945]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 2 08:27:44.716253 ignition[945]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 08:27:44.720009 ignition[945]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 08:27:44.721133 ignition[945]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 2 08:27:44.721133 ignition[945]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 08:27:44.721133 ignition[945]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 08:27:44.721133 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 08:27:44.721133 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 08:27:44.721133 ignition[945]: INFO : files: files passed
Jul 2 08:27:44.721133 ignition[945]: INFO : Ignition finished successfully
Jul 2 08:27:44.722267 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 08:27:44.731345 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 08:27:44.732940 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 08:27:44.735086 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 08:27:44.735197 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 08:27:44.740862 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 2 08:27:44.744277 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 08:27:44.744277 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 08:27:44.747156 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 08:27:44.747084 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 08:27:44.748382 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 08:27:44.759309 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 08:27:44.777696 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 08:27:44.777800 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 08:27:44.779669 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 08:27:44.781334 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 08:27:44.782855 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 08:27:44.783653 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 08:27:44.798757 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 08:27:44.800859 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 08:27:44.812389 systemd[1]: Stopped target network.target - Network.
Jul 2 08:27:44.813130 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 08:27:44.814571 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 08:27:44.816259 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 08:27:44.817933 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 08:27:44.818049 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 08:27:44.820428 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 08:27:44.822238 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 08:27:44.823726 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 08:27:44.825098 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 08:27:44.826728 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 08:27:44.828453 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 08:27:44.830088 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 08:27:44.831694 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 08:27:44.833074 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 08:27:44.833693 systemd-networkd[768]: eth0: Gained IPv6LL
Jul 2 08:27:44.834589 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 08:27:44.835848 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 08:27:44.835958 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 08:27:44.837754 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 08:27:44.838610 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 08:27:44.840277 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 08:27:44.840359 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 08:27:44.841927 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 08:27:44.842034 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 08:27:44.844502 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 08:27:44.844614 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 08:27:44.846182 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 08:27:44.847921 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 08:27:44.852220 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 08:27:44.853159 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 08:27:44.854483 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 08:27:44.856238 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 08:27:44.856326 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 08:27:44.857467 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 08:27:44.857544 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 08:27:44.858939 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 08:27:44.859044 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 08:27:44.860489 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 08:27:44.860584 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 08:27:44.873346 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 08:27:44.874746 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 08:27:44.875614 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 08:27:44.877024 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 08:27:44.878650 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 08:27:44.878773 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 08:27:44.880581 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 08:27:44.880779 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 08:27:44.883970 systemd-networkd[768]: eth0: DHCPv6 lease lost
Jul 2 08:27:44.886727 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 08:27:44.888029 ignition[999]: INFO : Ignition 2.18.0
Jul 2 08:27:44.888029 ignition[999]: INFO : Stage: umount
Jul 2 08:27:44.890783 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 08:27:44.890783 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 08:27:44.890783 ignition[999]: INFO : umount: umount passed
Jul 2 08:27:44.890783 ignition[999]: INFO : Ignition finished successfully
Jul 2 08:27:44.888205 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 08:27:44.890840 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 08:27:44.891340 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 08:27:44.891440 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 08:27:44.893118 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 08:27:44.893218 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 08:27:44.895071 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 08:27:44.895178 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 08:27:44.897113 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 08:27:44.897222 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 08:27:44.900109 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 08:27:44.900177 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 08:27:44.901067 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 08:27:44.901110 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 08:27:44.902743 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 08:27:44.902788 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 08:27:44.904390 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 08:27:44.904430 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 08:27:44.905883 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 08:27:44.905924 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 08:27:44.907858 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 08:27:44.907904 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 08:27:44.924312 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 08:27:44.925098 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 08:27:44.925187 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 08:27:44.926978 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 08:27:44.927021 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 08:27:44.928649 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 08:27:44.928690 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 08:27:44.930663 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 08:27:44.930699 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 08:27:44.932380 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 08:27:44.948279 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 08:27:44.948437 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 08:27:44.949737 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 08:27:44.949776 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 08:27:44.951091 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 08:27:44.951123 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 08:27:44.953004 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 08:27:44.953053 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 08:27:44.955680 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 08:27:44.955724 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 08:27:44.958181 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 08:27:44.958227 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 08:27:44.974365 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 08:27:44.975251 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 08:27:44.975311 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 08:27:44.977088 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 2 08:27:44.977128 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 08:27:44.978964 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 08:27:44.979006 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 08:27:44.980860 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 08:27:44.980894 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 08:27:44.982861 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 08:27:44.982941 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 08:27:44.984533 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 08:27:44.984603 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 08:27:44.986762 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 08:27:44.988517 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 08:27:44.998520 systemd[1]: Switching root.
Jul 2 08:27:45.019240 systemd-journald[237]: Journal stopped
Jul 2 08:27:45.705282 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Jul 2 08:27:45.705336 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 08:27:45.705349 kernel: SELinux: policy capability open_perms=1
Jul 2 08:27:45.705363 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 08:27:45.705376 kernel: SELinux: policy capability always_check_network=0
Jul 2 08:27:45.705389 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 08:27:45.705402 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 08:27:45.705412 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 08:27:45.705421 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 08:27:45.705431 kernel: audit: type=1403 audit(1719908865.180:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 08:27:45.705442 systemd[1]: Successfully loaded SELinux policy in 33.304ms.
Jul 2 08:27:45.705454 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.017ms.
Jul 2 08:27:45.705470 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 08:27:45.705480 systemd[1]: Detected virtualization kvm.
Jul 2 08:27:45.705493 systemd[1]: Detected architecture arm64.
Jul 2 08:27:45.705505 systemd[1]: Detected first boot.
Jul 2 08:27:45.705516 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 08:27:45.705526 zram_generator::config[1042]: No configuration found.
Jul 2 08:27:45.705538 systemd[1]: Populated /etc with preset unit settings.
Jul 2 08:27:45.705548 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 08:27:45.705558 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 2 08:27:45.705569 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 08:27:45.705581 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 08:27:45.705592 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 08:27:45.705602 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 08:27:45.705612 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 08:27:45.705622 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 08:27:45.705633 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 08:27:45.705643 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 08:27:45.705653 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 08:27:45.705664 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 08:27:45.705676 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 08:27:45.705687 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 08:27:45.705698 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 08:27:45.705709 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 08:27:45.705720 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 08:27:45.705732 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 2 08:27:45.705743 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 08:27:45.705753 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 2 08:27:45.705764 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 2 08:27:45.705776 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 2 08:27:45.705786 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 08:27:45.705796 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 08:27:45.705807 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 08:27:45.705817 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 08:27:45.705827 systemd[1]: Reached target swap.target - Swaps.
Jul 2 08:27:45.705837 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 08:27:45.705847 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 08:27:45.705859 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 08:27:45.705870 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 08:27:45.705880 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 08:27:45.705890 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 08:27:45.705900 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 08:27:45.705910 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 08:27:45.705920 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 08:27:45.705931 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 08:27:45.705941 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 08:27:45.705955 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 08:27:45.705965 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 08:27:45.705976 systemd[1]: Reached target machines.target - Containers.
Jul 2 08:27:45.705986 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 08:27:45.705997 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 08:27:45.706007 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 08:27:45.706018 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 08:27:45.706028 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 08:27:45.706039 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 08:27:45.706050 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 08:27:45.706060 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 08:27:45.706070 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 08:27:45.706081 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 08:27:45.706091 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 08:27:45.706102 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 2 08:27:45.706112 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 08:27:45.706122 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 08:27:45.706133 kernel: fuse: init (API version 7.39)
Jul 2 08:27:45.706143 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 08:27:45.706160 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 08:27:45.706270 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 08:27:45.706287 kernel: loop: module loaded
Jul 2 08:27:45.706311 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 08:27:45.706323 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 08:27:45.706358 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 08:27:45.706370 systemd[1]: Stopped verity-setup.service.
Jul 2 08:27:45.706384 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 08:27:45.706395 kernel: ACPI: bus type drm_connector registered
Jul 2 08:27:45.706405 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 08:27:45.706415 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 08:27:45.706425 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 08:27:45.706437 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 08:27:45.706448 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 08:27:45.706482 systemd-journald[1105]: Collecting audit messages is disabled.
Jul 2 08:27:45.706505 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 08:27:45.706516 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 08:27:45.706539 systemd-journald[1105]: Journal started
Jul 2 08:27:45.706564 systemd-journald[1105]: Runtime Journal (/run/log/journal/4cb372141474450a863589f09feb0383) is 5.9M, max 47.3M, 41.4M free.
Jul 2 08:27:45.527232 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 08:27:45.547061 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 2 08:27:45.547420 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 08:27:45.708890 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 08:27:45.710514 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 08:27:45.711868 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 08:27:45.713319 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 08:27:45.713512 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 08:27:45.714809 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 08:27:45.714939 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 08:27:45.716182 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 08:27:45.716310 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 08:27:45.717729 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 08:27:45.719197 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 08:27:45.720212 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 08:27:45.720342 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 08:27:45.721683 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 08:27:45.722955 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 08:27:45.724543 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 08:27:45.736220 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 08:27:45.751316 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 08:27:45.753281 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 08:27:45.754295 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 08:27:45.754336 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 08:27:45.756123 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 08:27:45.758281 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 08:27:45.760143 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 08:27:45.761205 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 08:27:45.762509 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 08:27:45.764573 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 08:27:45.765441 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 08:27:45.768352 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 08:27:45.769568 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 08:27:45.772404 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 08:27:45.773519 systemd-journald[1105]: Time spent on flushing to /var/log/journal/4cb372141474450a863589f09feb0383 is 18.783ms for 858 entries.
Jul 2 08:27:45.773519 systemd-journald[1105]: System Journal (/var/log/journal/4cb372141474450a863589f09feb0383) is 8.0M, max 195.6M, 187.6M free.
Jul 2 08:27:45.800491 systemd-journald[1105]: Received client request to flush runtime journal.
Jul 2 08:27:45.800536 kernel: loop0: detected capacity change from 0 to 193208
Jul 2 08:27:45.800549 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 08:27:45.775422 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 08:27:45.777702 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 08:27:45.782208 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 08:27:45.783311 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 08:27:45.786906 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 08:27:45.787994 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 08:27:45.789266 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 08:27:45.795602 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 08:27:45.804339 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 08:27:45.810363 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 08:27:45.810907 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 08:27:45.812300 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 08:27:45.821454 udevadm[1165]: systemd-udev-settle.service is deprecated.
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 2 08:27:45.825692 systemd-tmpfiles[1153]: ACLs are not supported, ignoring.
Jul 2 08:27:45.825708 systemd-tmpfiles[1153]: ACLs are not supported, ignoring.
Jul 2 08:27:45.826983 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 08:27:45.832045 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 08:27:45.835369 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 08:27:45.835907 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 08:27:45.847349 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 08:27:45.851197 kernel: loop1: detected capacity change from 0 to 59672
Jul 2 08:27:45.875988 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 08:27:45.883372 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 08:27:45.887196 kernel: loop2: detected capacity change from 0 to 113672
Jul 2 08:27:45.898554 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Jul 2 08:27:45.898570 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Jul 2 08:27:45.904260 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 08:27:45.910201 kernel: loop3: detected capacity change from 0 to 193208
Jul 2 08:27:45.916382 kernel: loop4: detected capacity change from 0 to 59672
Jul 2 08:27:45.920183 kernel: loop5: detected capacity change from 0 to 113672
Jul 2 08:27:45.922744 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 2 08:27:45.923848 (sd-merge)[1180]: Merged extensions into '/usr'.
Jul 2 08:27:45.928012 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 08:27:45.928029 systemd[1]: Reloading...
Jul 2 08:27:45.977197 zram_generator::config[1204]: No configuration found.
Jul 2 08:27:46.053438 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 08:27:46.085482 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 08:27:46.123189 systemd[1]: Reloading finished in 194 ms.
Jul 2 08:27:46.159211 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 08:27:46.160374 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 08:27:46.175482 systemd[1]: Starting ensure-sysext.service...
Jul 2 08:27:46.177426 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 08:27:46.197261 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 08:27:46.197540 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 08:27:46.198232 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 08:27:46.198460 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Jul 2 08:27:46.198514 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Jul 2 08:27:46.200772 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 08:27:46.200787 systemd-tmpfiles[1240]: Skipping /boot
Jul 2 08:27:46.201785 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)...
Jul 2 08:27:46.201854 systemd[1]: Reloading...
Jul 2 08:27:46.207796 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 08:27:46.207817 systemd-tmpfiles[1240]: Skipping /boot
Jul 2 08:27:46.247198 zram_generator::config[1271]: No configuration found.
Jul 2 08:27:46.322574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 08:27:46.360127 systemd[1]: Reloading finished in 157 ms.
Jul 2 08:27:46.377392 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 08:27:46.389593 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 08:27:46.397210 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 08:27:46.399398 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 08:27:46.403447 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 08:27:46.406134 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 08:27:46.411446 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 08:27:46.416429 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 08:27:46.421093 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 08:27:46.422213 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 08:27:46.426430 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 08:27:46.430741 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 08:27:46.432338 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 08:27:46.436450 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 08:27:46.440023 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 08:27:46.441896 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 08:27:46.444097 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 08:27:46.445825 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 08:27:46.445964 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 08:27:46.447807 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 08:27:46.449249 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 08:27:46.454647 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 08:27:46.454904 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 08:27:46.456894 systemd-udevd[1307]: Using default interface naming scheme 'v255'.
Jul 2 08:27:46.473030 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 08:27:46.474665 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 08:27:46.477426 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 08:27:46.492834 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 08:27:46.511642 systemd[1]: Finished ensure-sysext.service.
Jul 2 08:27:46.513235 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 08:27:46.517478 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 08:27:46.517701 augenrules[1356]: No rules
Jul 2 08:27:46.518677 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 08:27:46.526561 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 2 08:27:46.530283 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1331)
Jul 2 08:27:46.533940 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 08:27:46.542195 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1353)
Jul 2 08:27:46.552417 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 08:27:46.558422 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 08:27:46.563352 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 08:27:46.568844 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 08:27:46.570450 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 08:27:46.572515 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 08:27:46.576002 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 2 08:27:46.577084 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 08:27:46.577651 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 08:27:46.579214 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 08:27:46.580804 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 08:27:46.580942 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 08:27:46.582857 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 08:27:46.583690 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 08:27:46.586343 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 08:27:46.586473 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 08:27:46.602675 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 08:27:46.612414 systemd-resolved[1306]: Positive Trust Anchors:
Jul 2 08:27:46.612430 systemd-resolved[1306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 08:27:46.612461 systemd-resolved[1306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 08:27:46.617689 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 08:27:46.619411 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 08:27:46.619459 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 08:27:46.633431 systemd-resolved[1306]: Defaulting to hostname 'linux'.
Jul 2 08:27:46.640771 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 08:27:46.641767 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 08:27:46.644931 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 08:27:46.647278 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 08:27:46.661492 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 08:27:46.661881 systemd-networkd[1375]: lo: Link UP
Jul 2 08:27:46.661887 systemd-networkd[1375]: lo: Gained carrier
Jul 2 08:27:46.662615 systemd-networkd[1375]: Enumeration completed
Jul 2 08:27:46.663152 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 08:27:46.663160 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 08:27:46.663319 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 08:27:46.664031 systemd-networkd[1375]: eth0: Link UP
Jul 2 08:27:46.664093 systemd-networkd[1375]: eth0: Gained carrier
Jul 2 08:27:46.664181 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 08:27:46.664422 systemd[1]: Reached target network.target - Network.
Jul 2 08:27:46.669356 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 08:27:46.671326 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 08:27:46.673520 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 2 08:27:46.674511 systemd[1]: Reached target time-set.target - System Time Set.
Jul 2 08:27:46.683248 systemd-networkd[1375]: eth0: DHCPv4 address 10.0.0.104/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 08:27:46.684223 systemd-timesyncd[1376]: Network configuration changed, trying to establish connection.
Jul 2 08:27:46.685025 lvm[1390]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 08:27:46.685383 systemd-timesyncd[1376]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 2 08:27:46.685619 systemd-timesyncd[1376]: Initial clock synchronization to Tue 2024-07-02 08:27:46.313083 UTC.
Jul 2 08:27:46.701332 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 08:27:46.722733 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 08:27:46.724084 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 08:27:46.725086 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 08:27:46.726121 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 08:27:46.727307 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 08:27:46.728625 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 08:27:46.729770 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 08:27:46.731075 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 08:27:46.732244 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 08:27:46.732285 systemd[1]: Reached target paths.target - Path Units.
Jul 2 08:27:46.733093 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 08:27:46.734667 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 08:27:46.736866 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 08:27:46.742052 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 08:27:46.743974 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 08:27:46.745237 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 08:27:46.746071 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 08:27:46.746774 systemd[1]: Reached target basic.target - Basic System.
Jul 2 08:27:46.747475 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 2 08:27:46.747501 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 2 08:27:46.748319 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 2 08:27:46.749945 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 2 08:27:46.752323 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 2 08:27:46.752438 lvm[1401]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 08:27:46.756351 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 2 08:27:46.757090 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 2 08:27:46.758962 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 2 08:27:46.762694 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 2 08:27:46.765769 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 2 08:27:46.768444 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 2 08:27:46.771001 jq[1404]: false
Jul 2 08:27:46.774118 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 2 08:27:46.783643 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 08:27:46.784064 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 08:27:46.784735 systemd[1]: Starting update-engine.service - Update Engine...
Jul 2 08:27:46.786434 extend-filesystems[1405]: Found loop3
Jul 2 08:27:46.791430 extend-filesystems[1405]: Found loop4
Jul 2 08:27:46.791430 extend-filesystems[1405]: Found loop5
Jul 2 08:27:46.791430 extend-filesystems[1405]: Found vda
Jul 2 08:27:46.791430 extend-filesystems[1405]: Found vda1
Jul 2 08:27:46.791430 extend-filesystems[1405]: Found vda2
Jul 2 08:27:46.791430 extend-filesystems[1405]: Found vda3
Jul 2 08:27:46.791430 extend-filesystems[1405]: Found usr
Jul 2 08:27:46.791430 extend-filesystems[1405]: Found vda4
Jul 2 08:27:46.791430 extend-filesystems[1405]: Found vda6
Jul 2 08:27:46.791430 extend-filesystems[1405]: Found vda7
Jul 2 08:27:46.791430 extend-filesystems[1405]: Found vda9
Jul 2 08:27:46.791430 extend-filesystems[1405]: Checking size of /dev/vda9
Jul 2 08:27:46.787638 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 2 08:27:46.806393 dbus-daemon[1403]: [system] SELinux support is enabled
Jul 2 08:27:46.812335 jq[1420]: true
Jul 2 08:27:46.793440 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 08:27:46.798541 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 08:27:46.798707 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 2 08:27:46.799110 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 08:27:46.799286 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 2 08:27:46.801685 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 08:27:46.801839 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 2 08:27:46.807086 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 2 08:27:46.819681 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 08:27:46.819718 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 2 08:27:46.824744 jq[1425]: true
Jul 2 08:27:46.823371 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 08:27:46.823395 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 2 08:27:46.825108 (ntainerd)[1428]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 2 08:27:46.836903 extend-filesystems[1405]: Resized partition /dev/vda9
Jul 2 08:27:46.844457 tar[1424]: linux-arm64/helm
Jul 2 08:27:46.847892 extend-filesystems[1441]: resize2fs 1.47.0 (5-Feb-2023)
Jul 2 08:27:46.849533 systemd-logind[1412]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 2 08:27:46.849718 systemd-logind[1412]: New seat seat0.
Jul 2 08:27:46.850998 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 2 08:27:46.856223 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 2 08:27:46.879280 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1331)
Jul 2 08:27:46.880216 update_engine[1418]: I0702 08:27:46.880002 1418 main.cc:92] Flatcar Update Engine starting
Jul 2 08:27:46.883346 systemd[1]: Started update-engine.service - Update Engine.
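The kernel EXT4-fs line above shows extend-filesystems growing the root filesystem online from 553472 to 1864699 blocks; with the 4 KiB block size later noted in the resize2fs output ("(4k) blocks"), that is roughly a 2.1 GiB to 7.1 GiB grow. A sketch of the size arithmetic, using only numbers from the log:

```python
# Block counts from the EXT4-fs resize message above; 4096-byte blocks
# per the "(4k) blocks" note in the resize2fs output.
BLOCK = 4096
old_blocks, new_blocks = 553_472, 1_864_699

old_gib = old_blocks * BLOCK / 2**30
new_gib = new_blocks * BLOCK / 2**30
print(f"{old_gib:.2f} GiB -> {new_gib:.2f} GiB")  # 2.11 GiB -> 7.11 GiB
```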
Jul 2 08:27:46.887491 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 2 08:27:46.887536 update_engine[1418]: I0702 08:27:46.885336 1418 update_check_scheduler.cc:74] Next update check in 6m12s
Jul 2 08:27:46.894445 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 2 08:27:46.907058 extend-filesystems[1441]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 2 08:27:46.907058 extend-filesystems[1441]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 2 08:27:46.907058 extend-filesystems[1441]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 2 08:27:46.906910 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 08:27:46.917535 extend-filesystems[1405]: Resized filesystem in /dev/vda9
Jul 2 08:27:46.908503 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 2 08:27:46.921762 bash[1456]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 08:27:46.924462 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 2 08:27:46.926470 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 2 08:27:46.970501 locksmithd[1457]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 08:27:47.044258 containerd[1428]: time="2024-07-02T08:27:47.044162860Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Jul 2 08:27:47.071480 containerd[1428]: time="2024-07-02T08:27:47.071426970Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 2 08:27:47.071480 containerd[1428]: time="2024-07-02T08:27:47.071476780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 08:27:47.073058 containerd[1428]: time="2024-07-02T08:27:47.072847340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 08:27:47.073058 containerd[1428]: time="2024-07-02T08:27:47.072900468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 08:27:47.073178 containerd[1428]: time="2024-07-02T08:27:47.073100697Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 08:27:47.073178 containerd[1428]: time="2024-07-02T08:27:47.073124076Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 08:27:47.073229 containerd[1428]: time="2024-07-02T08:27:47.073205541Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 2 08:27:47.073303 containerd[1428]: time="2024-07-02T08:27:47.073250812Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 08:27:47.073303 containerd[1428]: time="2024-07-02T08:27:47.073267670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 08:27:47.073346 containerd[1428]: time="2024-07-02T08:27:47.073320721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 08:27:47.073503 containerd[1428]: time="2024-07-02T08:27:47.073485595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 08:27:47.073538 containerd[1428]: time="2024-07-02T08:27:47.073506190Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 08:27:47.073538 containerd[1428]: time="2024-07-02T08:27:47.073515840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 08:27:47.073632 containerd[1428]: time="2024-07-02T08:27:47.073609699Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 08:27:47.073632 containerd[1428]: time="2024-07-02T08:27:47.073628159Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 08:27:47.073690 containerd[1428]: time="2024-07-02T08:27:47.073675947Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 08:27:47.073690 containerd[1428]: time="2024-07-02T08:27:47.073686855Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 08:27:47.076544 containerd[1428]: time="2024-07-02T08:27:47.076514704Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 08:27:47.076544 containerd[1428]: time="2024-07-02T08:27:47.076548075Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 08:27:47.076618 containerd[1428]: time="2024-07-02T08:27:47.076568327Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 08:27:47.076618 containerd[1428]: time="2024-07-02T08:27:47.076605551Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 2 08:27:47.076687 containerd[1428]: time="2024-07-02T08:27:47.076619586Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 2 08:27:47.076687 containerd[1428]: time="2024-07-02T08:27:47.076629387Z" level=info msg="NRI interface is disabled by configuration."
Jul 2 08:27:47.076687 containerd[1428]: time="2024-07-02T08:27:47.076640371Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 08:27:47.076815 containerd[1428]: time="2024-07-02T08:27:47.076794949Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 2 08:27:47.076843 containerd[1428]: time="2024-07-02T08:27:47.076815658Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 2 08:27:47.076843 containerd[1428]: time="2024-07-02T08:27:47.076828129Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 2 08:27:47.076886 containerd[1428]: time="2024-07-02T08:27:47.076840753Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 2 08:27:47.076886 containerd[1428]: time="2024-07-02T08:27:47.076856200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 08:27:47.076886 containerd[1428]: time="2024-07-02T08:27:47.076872371Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 08:27:47.076886 containerd[1428]: time="2024-07-02T08:27:47.076883774Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 08:27:47.076949 containerd[1428]: time="2024-07-02T08:27:47.076895330Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 08:27:47.076949 containerd[1428]: time="2024-07-02T08:27:47.076907764Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 08:27:47.076949 containerd[1428]: time="2024-07-02T08:27:47.076919663Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 08:27:47.076949 containerd[1428]: time="2024-07-02T08:27:47.076930723Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 08:27:47.076949 containerd[1428]: time="2024-07-02T08:27:47.076942508Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 08:27:47.077116 containerd[1428]: time="2024-07-02T08:27:47.077034766Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 08:27:47.077351 containerd[1428]: time="2024-07-02T08:27:47.077330076Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 08:27:47.077408 containerd[1428]: time="2024-07-02T08:27:47.077361235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 08:27:47.077408 containerd[1428]: time="2024-07-02T08:27:47.077373859Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 2 08:27:47.077408 containerd[1428]: time="2024-07-02T08:27:47.077402273Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 08:27:47.077526 containerd[1428]: time="2024-07-02T08:27:47.077513295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 08:27:47.077555 containerd[1428]: time="2024-07-02T08:27:47.077528284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 08:27:47.077555 containerd[1428]: time="2024-07-02T08:27:47.077539916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 08:27:47.077555 containerd[1428]: time="2024-07-02T08:27:47.077550328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 08:27:47.077613 containerd[1428]: time="2024-07-02T08:27:47.077562228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 08:27:47.077613 containerd[1428]: time="2024-07-02T08:27:47.077573669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 08:27:47.077613 containerd[1428]: time="2024-07-02T08:27:47.077584386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 08:27:47.077613 containerd[1428]: time="2024-07-02T08:27:47.077595447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 08:27:47.077613 containerd[1428]: time="2024-07-02T08:27:47.077607498Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 08:27:47.077773 containerd[1428]: time="2024-07-02T08:27:47.077747240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 2 08:27:47.077773 containerd[1428]: time="2024-07-02T08:27:47.077769475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 2 08:27:47.077819 containerd[1428]: time="2024-07-02T08:27:47.077781183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 08:27:47.077819 containerd[1428]: time="2024-07-02T08:27:47.077800405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 2 08:27:47.077819 containerd[1428]: time="2024-07-02T08:27:47.077813792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 08:27:47.077867 containerd[1428]: time="2024-07-02T08:27:47.077827675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 2 08:27:47.077867 containerd[1428]: time="2024-07-02T08:27:47.077841061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 08:27:47.077867 containerd[1428]: time="2024-07-02T08:27:47.077853037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 08:27:47.078279 containerd[1428]: time="2024-07-02T08:27:47.078219705Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 08:27:47.078279 containerd[1428]: time="2024-07-02T08:27:47.078278629Z" level=info msg="Connect containerd service"
Jul 2 08:27:47.078428 containerd[1428]: time="2024-07-02T08:27:47.078302962Z" level=info msg="using legacy CRI server"
Jul 2 08:27:47.078428 containerd[1428]: time="2024-07-02T08:27:47.078310475Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 2 08:27:47.079225 containerd[1428]: time="2024-07-02T08:27:47.078745335Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 08:27:47.080245 containerd[1428]: time="2024-07-02T08:27:47.080147933Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 08:27:47.080364 containerd[1428]: time="2024-07-02T08:27:47.080342899Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 08:27:47.080395 containerd[1428]: time="2024-07-02T08:27:47.080380618Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 2 08:27:47.080415 containerd[1428]: time="2024-07-02T08:27:47.080397895Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 08:27:47.080448 containerd[1428]: time="2024-07-02T08:27:47.080413303Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 2 08:27:47.080884 containerd[1428]: time="2024-07-02T08:27:47.080861550Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 08:27:47.080937 containerd[1428]: time="2024-07-02T08:27:47.080913152Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 2 08:27:47.082180 containerd[1428]: time="2024-07-02T08:27:47.081936343Z" level=info msg="Start subscribing containerd event"
Jul 2 08:27:47.082180 containerd[1428]: time="2024-07-02T08:27:47.081986877Z" level=info msg="Start recovering state"
Jul 2 08:27:47.082180 containerd[1428]: time="2024-07-02T08:27:47.082048701Z" level=info msg="Start event monitor"
Jul 2 08:27:47.082180 containerd[1428]: time="2024-07-02T08:27:47.082059227Z" level=info msg="Start snapshots syncer"
Jul 2 08:27:47.082180 containerd[1428]: time="2024-07-02T08:27:47.082067846Z" level=info msg="Start cni network conf syncer for default"
Jul 2 08:27:47.082180 containerd[1428]: time="2024-07-02T08:27:47.082079250Z" level=info msg="Start streaming server"
Jul 2 08:27:47.082331 containerd[1428]: time="2024-07-02T08:27:47.082230242Z" level=info msg="containerd successfully booted in 0.039091s"
Jul 2 08:27:47.082398 systemd[1]: Started containerd.service - containerd container runtime.
Jul 2 08:27:47.207335 tar[1424]: linux-arm64/LICENSE
Jul 2 08:27:47.207335 tar[1424]: linux-arm64/README.md
Jul 2 08:27:47.220697 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 2 08:27:47.969267 systemd-networkd[1375]: eth0: Gained IPv6LL
Jul 2 08:27:47.971194 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 2 08:27:47.973685 systemd[1]: Reached target network-online.target - Network is Online.
Jul 2 08:27:47.985744 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 2 08:27:47.989718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:27:47.992009 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 2 08:27:48.019218 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 2 08:27:48.020671 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 2 08:27:48.020832 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 2 08:27:48.024629 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 2 08:27:48.185127 sshd_keygen[1422]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 08:27:48.205232 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 2 08:27:48.217432 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 2 08:27:48.223037 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 08:27:48.223267 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 2 08:27:48.225725 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 2 08:27:48.242636 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 2 08:27:48.255795 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 2 08:27:48.258510 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 2 08:27:48.259750 systemd[1]: Reached target getty.target - Login Prompts.
Jul 2 08:27:48.460653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:27:48.461846 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 2 08:27:48.464140 (kubelet)[1516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 08:27:48.464605 systemd[1]: Startup finished in 542ms (kernel) + 4.473s (initrd) + 3.319s (userspace) = 8.334s.
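systemd's "Startup finished" summary above is simply the sum of its per-phase timings, and the kernel, initrd, and userspace figures do add up to the reported 8.334 s. Checking the arithmetic:

```python
# Phase timings from the "Startup finished" line above, in seconds.
kernel, initrd, userspace = 0.542, 4.473, 3.319

total = kernel + initrd + userspace
print(f"{total:.3f}s")  # 8.334s
```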
Jul 2 08:27:48.956052 kubelet[1516]: E0702 08:27:48.955921 1516 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:27:48.958780 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:27:48.958926 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:27:53.069860 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 2 08:27:53.070935 systemd[1]: Started sshd@0-10.0.0.104:22-10.0.0.1:54814.service - OpenSSH per-connection server daemon (10.0.0.1:54814).
Jul 2 08:27:53.128424 sshd[1530]: Accepted publickey for core from 10.0.0.1 port 54814 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:27:53.130208 sshd[1530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:27:53.142386 systemd-logind[1412]: New session 1 of user core.
Jul 2 08:27:53.143384 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 2 08:27:53.151401 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 2 08:27:53.167200 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 2 08:27:53.169445 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 2 08:27:53.176382 (systemd)[1534]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:27:53.261755 systemd[1534]: Queued start job for default target default.target.
Jul 2 08:27:53.280221 systemd[1534]: Created slice app.slice - User Application Slice.
Jul 2 08:27:53.280249 systemd[1534]: Reached target paths.target - Paths.
Jul 2 08:27:53.280261 systemd[1534]: Reached target timers.target - Timers.
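The kubelet failure above is the usual first-boot sequence on a fresh node: the unit starts before /var/lib/kubelet/config.yaml exists (that file is written later during node bootstrap), so the first start exits with status 1 and systemd records the failure. Entries throughout this log follow the console-journal shape "Mon D HH:MM:SS.micros unit[pid]: message"; a minimal sketch of a parser for that shape (the field names are my own, not a systemd API, and kernel lines without a [pid] are not handled):

```python
import re

# Console-journal shape seen throughout this log, e.g.
#   "Jul 2 08:27:48.958780 systemd[1]: kubelet.service: Main process exited..."
LINE = re.compile(
    r"(?P<month>\w{3}) (?P<day>\d{1,2}) (?P<time>\d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<unit>[^\s\[]+)\[(?P<pid>\d+)\]: (?P<msg>.*)"
)

def parse(line: str) -> dict:
    """Split one journal console line into its fields; {} if it doesn't match."""
    m = LINE.match(line)
    return m.groupdict() if m else {}

entry = parse("Jul 2 08:27:48.958780 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE")
print(entry["unit"], entry["pid"])  # systemd 1
```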
Jul 2 08:27:53.281465 systemd[1534]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 2 08:27:53.291684 systemd[1534]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 2 08:27:53.291747 systemd[1534]: Reached target sockets.target - Sockets.
Jul 2 08:27:53.291760 systemd[1534]: Reached target basic.target - Basic System.
Jul 2 08:27:53.291795 systemd[1534]: Reached target default.target - Main User Target.
Jul 2 08:27:53.291822 systemd[1534]: Startup finished in 109ms.
Jul 2 08:27:53.292104 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 2 08:27:53.303385 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 2 08:27:53.360479 systemd[1]: Started sshd@1-10.0.0.104:22-10.0.0.1:54818.service - OpenSSH per-connection server daemon (10.0.0.1:54818).
Jul 2 08:27:53.391569 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 54818 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:27:53.392691 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:27:53.396444 systemd-logind[1412]: New session 2 of user core.
Jul 2 08:27:53.405339 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 2 08:27:53.455479 sshd[1545]: pam_unix(sshd:session): session closed for user core
Jul 2 08:27:53.468429 systemd[1]: sshd@1-10.0.0.104:22-10.0.0.1:54818.service: Deactivated successfully.
Jul 2 08:27:53.470376 systemd[1]: session-2.scope: Deactivated successfully.
Jul 2 08:27:53.471596 systemd-logind[1412]: Session 2 logged out. Waiting for processes to exit.
Jul 2 08:27:53.473073 systemd[1]: Started sshd@2-10.0.0.104:22-10.0.0.1:54822.service - OpenSSH per-connection server daemon (10.0.0.1:54822).
Jul 2 08:27:53.475230 systemd-logind[1412]: Removed session 2.
Jul 2 08:27:53.505661 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 54822 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:27:53.506897 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:27:53.511900 systemd-logind[1412]: New session 3 of user core.
Jul 2 08:27:53.521314 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 2 08:27:53.567178 sshd[1552]: pam_unix(sshd:session): session closed for user core
Jul 2 08:27:53.575260 systemd[1]: sshd@2-10.0.0.104:22-10.0.0.1:54822.service: Deactivated successfully.
Jul 2 08:27:53.576757 systemd[1]: session-3.scope: Deactivated successfully.
Jul 2 08:27:53.577945 systemd-logind[1412]: Session 3 logged out. Waiting for processes to exit.
Jul 2 08:27:53.586423 systemd[1]: Started sshd@3-10.0.0.104:22-10.0.0.1:54838.service - OpenSSH per-connection server daemon (10.0.0.1:54838).
Jul 2 08:27:53.587057 systemd-logind[1412]: Removed session 3.
Jul 2 08:27:53.614446 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 54838 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:27:53.615482 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:27:53.619364 systemd-logind[1412]: New session 4 of user core.
Jul 2 08:27:53.631353 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 2 08:27:53.681760 sshd[1559]: pam_unix(sshd:session): session closed for user core
Jul 2 08:27:53.691360 systemd[1]: sshd@3-10.0.0.104:22-10.0.0.1:54838.service: Deactivated successfully.
Jul 2 08:27:53.692846 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 08:27:53.693998 systemd-logind[1412]: Session 4 logged out. Waiting for processes to exit.
Jul 2 08:27:53.695103 systemd[1]: Started sshd@4-10.0.0.104:22-10.0.0.1:54848.service - OpenSSH per-connection server daemon (10.0.0.1:54848).
Jul 2 08:27:53.695788 systemd-logind[1412]: Removed session 4.
Jul 2 08:27:53.727455 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 54848 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:27:53.728560 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:27:53.731892 systemd-logind[1412]: New session 5 of user core.
Jul 2 08:27:53.746373 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 2 08:27:53.811020 sudo[1571]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 2 08:27:53.811272 sudo[1571]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 08:27:53.828832 sudo[1571]: pam_unix(sudo:session): session closed for user root
Jul 2 08:27:53.830419 sshd[1568]: pam_unix(sshd:session): session closed for user core
Jul 2 08:27:53.842442 systemd[1]: sshd@4-10.0.0.104:22-10.0.0.1:54848.service: Deactivated successfully.
Jul 2 08:27:53.844466 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 08:27:53.845716 systemd-logind[1412]: Session 5 logged out. Waiting for processes to exit.
Jul 2 08:27:53.847367 systemd[1]: Started sshd@5-10.0.0.104:22-10.0.0.1:54864.service - OpenSSH per-connection server daemon (10.0.0.1:54864).
Jul 2 08:27:53.848213 systemd-logind[1412]: Removed session 5.
Jul 2 08:27:53.880101 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 54864 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:27:53.881569 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:27:53.885176 systemd-logind[1412]: New session 6 of user core.
Jul 2 08:27:53.894314 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 2 08:27:53.942894 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 2 08:27:53.943123 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 08:27:53.945919 sudo[1580]: pam_unix(sudo:session): session closed for user root
Jul 2 08:27:53.950003 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 2 08:27:53.950486 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 08:27:53.969397 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 2 08:27:53.970504 auditctl[1583]: No rules
Jul 2 08:27:53.971357 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 2 08:27:53.972248 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 2 08:27:53.974322 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 08:27:53.996953 augenrules[1601]: No rules
Jul 2 08:27:53.998180 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 08:27:53.999249 sudo[1579]: pam_unix(sudo:session): session closed for user root
Jul 2 08:27:54.001374 sshd[1576]: pam_unix(sshd:session): session closed for user core
Jul 2 08:27:54.012431 systemd[1]: sshd@5-10.0.0.104:22-10.0.0.1:54864.service: Deactivated successfully.
Jul 2 08:27:54.013717 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 08:27:54.016307 systemd-logind[1412]: Session 6 logged out. Waiting for processes to exit.
Jul 2 08:27:54.022483 systemd[1]: Started sshd@6-10.0.0.104:22-10.0.0.1:54880.service - OpenSSH per-connection server daemon (10.0.0.1:54880).
Jul 2 08:27:54.023418 systemd-logind[1412]: Removed session 6.
Jul 2 08:27:54.052157 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 54880 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:27:54.053276 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:27:54.057068 systemd-logind[1412]: New session 7 of user core.
Jul 2 08:27:54.070309 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 2 08:27:54.119482 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 08:27:54.119728 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 08:27:54.234402 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 2 08:27:54.234571 (dockerd)[1623]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 2 08:27:54.464482 dockerd[1623]: time="2024-07-02T08:27:54.464418431Z" level=info msg="Starting up"
Jul 2 08:27:54.560508 dockerd[1623]: time="2024-07-02T08:27:54.560412928Z" level=info msg="Loading containers: start."
Jul 2 08:27:54.636184 kernel: Initializing XFRM netlink socket
Jul 2 08:27:54.706738 systemd-networkd[1375]: docker0: Link UP
Jul 2 08:27:54.714746 dockerd[1623]: time="2024-07-02T08:27:54.714685379Z" level=info msg="Loading containers: done."
Jul 2 08:27:54.780427 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2281681093-merged.mount: Deactivated successfully.
Jul 2 08:27:54.781338 dockerd[1623]: time="2024-07-02T08:27:54.781272140Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 08:27:54.781546 dockerd[1623]: time="2024-07-02T08:27:54.781524759Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jul 2 08:27:54.781658 dockerd[1623]: time="2024-07-02T08:27:54.781641074Z" level=info msg="Daemon has completed initialization"
Jul 2 08:27:54.807245 dockerd[1623]: time="2024-07-02T08:27:54.807185902Z" level=info msg="API listen on /run/docker.sock"
Jul 2 08:27:54.807397 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 2 08:27:55.395102 containerd[1428]: time="2024-07-02T08:27:55.394991661Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\""
Jul 2 08:27:56.023555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount592408091.mount: Deactivated successfully.
Jul 2 08:27:57.134478 containerd[1428]: time="2024-07-02T08:27:57.134425752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:27:57.135364 containerd[1428]: time="2024-07-02T08:27:57.135324495Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=31671540"
Jul 2 08:27:57.136584 containerd[1428]: time="2024-07-02T08:27:57.136293923Z" level=info msg="ImageCreate event name:\"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:27:57.140337 containerd[1428]: time="2024-07-02T08:27:57.140303517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:27:57.142785 containerd[1428]: time="2024-07-02T08:27:57.142750001Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"31668338\" in 1.74771548s"
Jul 2 08:27:57.142886 containerd[1428]: time="2024-07-02T08:27:57.142869283Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\""
Jul 2 08:27:57.163399 containerd[1428]: time="2024-07-02T08:27:57.163365973Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\""
Jul 2 08:27:58.496797 containerd[1428]: time="2024-07-02T08:27:58.496747313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:27:58.497223 containerd[1428]: time="2024-07-02T08:27:58.497202268Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=28893120"
Jul 2 08:27:58.497963 containerd[1428]: time="2024-07-02T08:27:58.497936835Z" level=info msg="ImageCreate event name:\"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:27:58.501463 containerd[1428]: time="2024-07-02T08:27:58.501401487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:27:58.502493 containerd[1428]: time="2024-07-02T08:27:58.502360228Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"30445463\" in 1.338956271s"
Jul 2 08:27:58.502493 containerd[1428]: time="2024-07-02T08:27:58.502394220Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\""
Jul 2 08:27:58.521425 containerd[1428]: time="2024-07-02T08:27:58.521343554Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\""
Jul 2 08:27:59.209243 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 08:27:59.220336 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:27:59.304733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:27:59.307862 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 08:27:59.352074 kubelet[1847]: E0702 08:27:59.351922 1847 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:27:59.355823 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:27:59.355952 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:27:59.480356 containerd[1428]: time="2024-07-02T08:27:59.479997998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:27:59.480544 containerd[1428]: time="2024-07-02T08:27:59.480505834Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=15358440"
Jul 2 08:27:59.481491 containerd[1428]: time="2024-07-02T08:27:59.481466110Z" level=info msg="ImageCreate event name:\"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:27:59.484181 containerd[1428]: time="2024-07-02T08:27:59.484139064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:27:59.485396 containerd[1428]: time="2024-07-02T08:27:59.485358845Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"16910801\" in 963.976572ms"
Jul 2 08:27:59.485396 containerd[1428]: time="2024-07-02T08:27:59.485396172Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\""
Jul 2 08:27:59.504972 containerd[1428]: time="2024-07-02T08:27:59.504933114Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\""
Jul 2 08:28:01.697996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4046163830.mount: Deactivated successfully.
Jul 2 08:28:02.073907 containerd[1428]: time="2024-07-02T08:28:02.073786116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:28:02.075258 containerd[1428]: time="2024-07-02T08:28:02.075219063Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=24772463"
Jul 2 08:28:02.077495 containerd[1428]: time="2024-07-02T08:28:02.077437448Z" level=info msg="ImageCreate event name:\"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:28:02.079716 containerd[1428]: time="2024-07-02T08:28:02.079660881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:28:02.080275 containerd[1428]: time="2024-07-02T08:28:02.080239227Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"24771480\" in 2.575255762s"
Jul 2 08:28:02.080323 containerd[1428]: time="2024-07-02T08:28:02.080273968Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\""
Jul 2 08:28:02.098275 containerd[1428]: time="2024-07-02T08:28:02.098246741Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 08:28:02.555339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3325790072.mount: Deactivated successfully.
Jul 2 08:28:02.561057 containerd[1428]: time="2024-07-02T08:28:02.560999214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:28:02.562368 containerd[1428]: time="2024-07-02T08:28:02.562329847Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Jul 2 08:28:02.563992 containerd[1428]: time="2024-07-02T08:28:02.563934708Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:28:02.566183 containerd[1428]: time="2024-07-02T08:28:02.565854977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:28:02.566926 containerd[1428]: time="2024-07-02T08:28:02.566808870Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 468.428811ms"
Jul 2 08:28:02.566926 containerd[1428]: time="2024-07-02T08:28:02.566840948Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jul 2 08:28:02.592813 containerd[1428]: time="2024-07-02T08:28:02.592634353Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jul 2 08:28:03.113742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1957177792.mount: Deactivated successfully.
Jul 2 08:28:04.540026 containerd[1428]: time="2024-07-02T08:28:04.539971922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:28:04.540560 containerd[1428]: time="2024-07-02T08:28:04.540517606Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788"
Jul 2 08:28:04.541426 containerd[1428]: time="2024-07-02T08:28:04.541372159Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:28:04.544308 containerd[1428]: time="2024-07-02T08:28:04.544276770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:28:04.546530 containerd[1428]: time="2024-07-02T08:28:04.546485343Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 1.953811265s"
Jul 2 08:28:04.546530 containerd[1428]: time="2024-07-02T08:28:04.546523678Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Jul 2 08:28:04.564603 containerd[1428]: time="2024-07-02T08:28:04.564574257Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Jul 2 08:28:05.138336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2449085704.mount: Deactivated successfully.
Jul 2 08:28:06.417744 containerd[1428]: time="2024-07-02T08:28:06.417700621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:28:06.418762 containerd[1428]: time="2024-07-02T08:28:06.418710810Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558464"
Jul 2 08:28:06.419688 containerd[1428]: time="2024-07-02T08:28:06.419588648Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:28:06.422265 containerd[1428]: time="2024-07-02T08:28:06.422227780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:28:06.423247 containerd[1428]: time="2024-07-02T08:28:06.423220951Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 1.858499392s"
Jul 2 08:28:06.423437 containerd[1428]: time="2024-07-02T08:28:06.423314007Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\""
Jul 2 08:28:09.606321 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 08:28:09.615482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:28:09.743133 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:28:09.746695 (kubelet)[2029]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 08:28:09.788784 kubelet[2029]: E0702 08:28:09.788731 2029 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:28:09.791053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:28:09.791195 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:28:12.771269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:28:12.783336 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:28:12.793506 systemd[1]: Reloading requested from client PID 2044 ('systemctl') (unit session-7.scope)...
Jul 2 08:28:12.793520 systemd[1]: Reloading...
Jul 2 08:28:12.863199 zram_generator::config[2081]: No configuration found.
Jul 2 08:28:12.966994 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 08:28:13.020872 systemd[1]: Reloading finished in 227 ms.
Jul 2 08:28:13.064862 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:28:13.066337 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:28:13.068582 systemd[1]: kubelet.service: Deactivated successfully.
Jul 2 08:28:13.070197 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:28:13.071594 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:28:13.166252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:28:13.170459 (kubelet)[2128]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 08:28:13.211698 kubelet[2128]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:28:13.211698 kubelet[2128]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 08:28:13.211698 kubelet[2128]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:28:13.212094 kubelet[2128]: I0702 08:28:13.211737 2128 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 08:28:14.769005 kubelet[2128]: I0702 08:28:14.768946 2128 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jul 2 08:28:14.769005 kubelet[2128]: I0702 08:28:14.768974 2128 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 08:28:14.769360 kubelet[2128]: I0702 08:28:14.769223 2128 server.go:895] "Client rotation is on, will bootstrap in background"
Jul 2 08:28:14.793458 kubelet[2128]: I0702 08:28:14.793221 2128 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 08:28:14.795610 kubelet[2128]: E0702 08:28:14.795581 2128 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.104:6443: connect: connection refused
Jul 2 08:28:14.805204 kubelet[2128]: W0702 08:28:14.805172 2128 machine.go:65] Cannot read vendor id correctly, set empty.
Jul 2 08:28:14.805882 kubelet[2128]: I0702 08:28:14.805861 2128 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 08:28:14.806073 kubelet[2128]: I0702 08:28:14.806064 2128 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 08:28:14.806327 kubelet[2128]: I0702 08:28:14.806300 2128 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 08:28:14.806327 kubelet[2128]: I0702 08:28:14.806330 2128 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 08:28:14.806443 kubelet[2128]: I0702 08:28:14.806339 2128 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 08:28:14.806580 kubelet[2128]: I0702 08:28:14.806566 2128 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:28:14.808750 kubelet[2128]: I0702 08:28:14.808721 2128 kubelet.go:393] "Attempting to sync node with API server"
Jul 2 08:28:14.808750 kubelet[2128]: I0702 08:28:14.808752 2128 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 08:28:14.808881 kubelet[2128]: I0702 08:28:14.808858 2128 kubelet.go:309] "Adding apiserver pod source"
Jul 2 08:28:14.808881 kubelet[2128]: I0702 08:28:14.808875 2128 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 08:28:14.809271 kubelet[2128]: W0702 08:28:14.809217 2128 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jul 2 08:28:14.809302 kubelet[2128]: E0702 08:28:14.809278 2128 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jul 2 08:28:14.809615 kubelet[2128]: W0702 08:28:14.809570 2128 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jul 2 08:28:14.809615 kubelet[2128]: E0702 08:28:14.809613 2128 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jul 2 08:28:14.816834 kubelet[2128]: I0702 08:28:14.816801 2128 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 08:28:14.820168 kubelet[2128]: W0702 08:28:14.820134 2128 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 08:28:14.820809 kubelet[2128]: I0702 08:28:14.820790 2128 server.go:1232] "Started kubelet"
Jul 2 08:28:14.821782 kubelet[2128]: I0702 08:28:14.821755 2128 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jul 2 08:28:14.821877 kubelet[2128]: I0702 08:28:14.821759 2128 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 08:28:14.822047 kubelet[2128]: I0702 08:28:14.822026 2128 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 08:28:14.822672 kubelet[2128]: I0702 08:28:14.822637 2128 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 08:28:14.823005 kubelet[2128]: I0702 08:28:14.822991 2128 server.go:462] "Adding debug handlers to kubelet server"
Jul 2 08:28:14.823326 kubelet[2128]: I0702 08:28:14.823312 2128 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 08:28:14.824236 kubelet[2128]: E0702 08:28:14.824211 2128 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="200ms"
Jul 2 08:28:14.824484 kubelet[2128]: I0702 08:28:14.824362 2128 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 08:28:14.824484 kubelet[2128]: E0702 08:28:14.824290 2128 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17de580701d39995", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.July, 2, 8, 28, 14, 820768149, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 8, 28, 14, 820768149, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.104:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.104:6443: connect: connection refused'(may retry after sleeping)
Jul 2 08:28:14.824484 kubelet[2128]: I0702 08:28:14.824422 2128 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 08:28:14.824788 kubelet[2128]: W0702 08:28:14.824653 2128 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jul 2 08:28:14.824788 kubelet[2128]: E0702 08:28:14.824700 2128 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jul 2 08:28:14.825468 kubelet[2128]: E0702 08:28:14.823023 2128 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jul 2 08:28:14.825642 kubelet[2128]: E0702 08:28:14.825625 2128 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 08:28:14.837932 kubelet[2128]: I0702 08:28:14.837897 2128 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 08:28:14.839074 kubelet[2128]: I0702 08:28:14.839038 2128 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 08:28:14.839074 kubelet[2128]: I0702 08:28:14.839061 2128 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 08:28:14.839074 kubelet[2128]: I0702 08:28:14.839079 2128 kubelet.go:2303] "Starting kubelet main sync loop"
Jul 2 08:28:14.839209 kubelet[2128]: E0702 08:28:14.839132 2128 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 08:28:14.840010 kubelet[2128]: W0702 08:28:14.839951 2128 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jul 2 08:28:14.840010 kubelet[2128]: E0702 08:28:14.840010 2128 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jul 2 08:28:14.846618 kubelet[2128]: I0702 08:28:14.846560 2128 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 08:28:14.846618 kubelet[2128]: I0702 08:28:14.846605 2128 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 08:28:14.846618 kubelet[2128]: I0702 08:28:14.846622 2128 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:28:14.907314 kubelet[2128]: I0702 08:28:14.907273 2128 policy_none.go:49] "None policy: Start"
Jul 2 08:28:14.908053 kubelet[2128]: I0702 08:28:14.907990 2128 memory_manager.go:169] "Starting memorymanager" policy="None"
Jul 2 08:28:14.908093 kubelet[2128]: I0702 08:28:14.908063 2128 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 08:28:14.913628 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 2 08:28:14.914139 kubelet[2128]: E0702 08:28:14.914038 2128 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17de580701d39995", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.July, 2, 8, 28, 14, 820768149, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 8, 28, 14, 820768149, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.104:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.104:6443: connect: connection refused'(may retry after sleeping)
Jul 2 08:28:14.924858 kubelet[2128]: I0702 08:28:14.924383 2128 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Jul 2 08:28:14.924858 kubelet[2128]: E0702 08:28:14.924820 2128 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost"
Jul 2 08:28:14.927523 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 2 08:28:14.939362 kubelet[2128]: E0702 08:28:14.939325 2128 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 2 08:28:14.939588 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 2 08:28:14.942610 kubelet[2128]: I0702 08:28:14.942573 2128 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 08:28:14.942895 kubelet[2128]: I0702 08:28:14.942859 2128 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 08:28:14.943547 kubelet[2128]: E0702 08:28:14.943529 2128 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 2 08:28:15.025824 kubelet[2128]: E0702 08:28:15.025706 2128 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="400ms"
Jul 2 08:28:15.128589 kubelet[2128]: I0702 08:28:15.128552 2128 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Jul 2 08:28:15.128927 kubelet[2128]: E0702 08:28:15.128910 2128 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443:
connect: connection refused" node="localhost" Jul 2 08:28:15.140129 kubelet[2128]: I0702 08:28:15.140052 2128 topology_manager.go:215] "Topology Admit Handler" podUID="22d81e7f0edf4ed42deb73c4bcf3ae68" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 08:28:15.141073 kubelet[2128]: I0702 08:28:15.141010 2128 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 08:28:15.141863 kubelet[2128]: I0702 08:28:15.141787 2128 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 08:28:15.147312 systemd[1]: Created slice kubepods-burstable-pod22d81e7f0edf4ed42deb73c4bcf3ae68.slice - libcontainer container kubepods-burstable-pod22d81e7f0edf4ed42deb73c4bcf3ae68.slice. Jul 2 08:28:15.166770 systemd[1]: Created slice kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice - libcontainer container kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice. Jul 2 08:28:15.179507 systemd[1]: Created slice kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice - libcontainer container kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice. 
Jul 2 08:28:15.226381 kubelet[2128]: I0702 08:28:15.226342 2128 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22d81e7f0edf4ed42deb73c4bcf3ae68-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"22d81e7f0edf4ed42deb73c4bcf3ae68\") " pod="kube-system/kube-apiserver-localhost" Jul 2 08:28:15.226547 kubelet[2128]: I0702 08:28:15.226395 2128 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:28:15.226547 kubelet[2128]: I0702 08:28:15.226417 2128 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 08:28:15.226547 kubelet[2128]: I0702 08:28:15.226436 2128 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:28:15.226547 kubelet[2128]: I0702 08:28:15.226520 2128 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:28:15.226636 
kubelet[2128]: I0702 08:28:15.226558 2128 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:28:15.226636 kubelet[2128]: I0702 08:28:15.226583 2128 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22d81e7f0edf4ed42deb73c4bcf3ae68-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"22d81e7f0edf4ed42deb73c4bcf3ae68\") " pod="kube-system/kube-apiserver-localhost" Jul 2 08:28:15.226636 kubelet[2128]: I0702 08:28:15.226603 2128 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22d81e7f0edf4ed42deb73c4bcf3ae68-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"22d81e7f0edf4ed42deb73c4bcf3ae68\") " pod="kube-system/kube-apiserver-localhost" Jul 2 08:28:15.226636 kubelet[2128]: I0702 08:28:15.226620 2128 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:28:15.426279 kubelet[2128]: E0702 08:28:15.426143 2128 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="800ms" Jul 2 08:28:15.467069 kubelet[2128]: E0702 08:28:15.467030 2128 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:15.469580 containerd[1428]: time="2024-07-02T08:28:15.469506889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:22d81e7f0edf4ed42deb73c4bcf3ae68,Namespace:kube-system,Attempt:0,}" Jul 2 08:28:15.477872 kubelet[2128]: E0702 08:28:15.477854 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:15.479030 containerd[1428]: time="2024-07-02T08:28:15.478706819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jul 2 08:28:15.481451 kubelet[2128]: E0702 08:28:15.481408 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:15.481878 containerd[1428]: time="2024-07-02T08:28:15.481764032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jul 2 08:28:15.530272 kubelet[2128]: I0702 08:28:15.530252 2128 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 08:28:15.530744 kubelet[2128]: E0702 08:28:15.530721 2128 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost" Jul 2 08:28:15.757110 kubelet[2128]: W0702 08:28:15.756955 2128 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: 
connect: connection refused Jul 2 08:28:15.757110 kubelet[2128]: E0702 08:28:15.757019 2128 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Jul 2 08:28:15.794456 kubelet[2128]: W0702 08:28:15.794381 2128 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Jul 2 08:28:15.794456 kubelet[2128]: E0702 08:28:15.794443 2128 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Jul 2 08:28:15.996482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1763582272.mount: Deactivated successfully. 
Jul 2 08:28:15.999371 containerd[1428]: time="2024-07-02T08:28:15.999328544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:28:16.000467 containerd[1428]: time="2024-07-02T08:28:16.000431961Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 2 08:28:16.001992 containerd[1428]: time="2024-07-02T08:28:16.001950388Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:28:16.003761 containerd[1428]: time="2024-07-02T08:28:16.003722231Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:28:16.004646 containerd[1428]: time="2024-07-02T08:28:16.004615765Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:28:16.004935 containerd[1428]: time="2024-07-02T08:28:16.004898571Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 08:28:16.005777 containerd[1428]: time="2024-07-02T08:28:16.005737878Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 08:28:16.006805 containerd[1428]: time="2024-07-02T08:28:16.006764842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:28:16.009368 
containerd[1428]: time="2024-07-02T08:28:16.009189932Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 539.584871ms" Jul 2 08:28:16.010916 containerd[1428]: time="2024-07-02T08:28:16.010774437Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 528.937924ms" Jul 2 08:28:16.014144 containerd[1428]: time="2024-07-02T08:28:16.014106088Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 535.311646ms" Jul 2 08:28:16.176421 containerd[1428]: time="2024-07-02T08:28:16.175960639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:28:16.176421 containerd[1428]: time="2024-07-02T08:28:16.176122443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:28:16.176421 containerd[1428]: time="2024-07-02T08:28:16.176136549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:28:16.176421 containerd[1428]: time="2024-07-02T08:28:16.176175072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:28:16.176421 containerd[1428]: time="2024-07-02T08:28:16.176248440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:28:16.176421 containerd[1428]: time="2024-07-02T08:28:16.176303627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:28:16.176421 containerd[1428]: time="2024-07-02T08:28:16.176286763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:28:16.176421 containerd[1428]: time="2024-07-02T08:28:16.176319252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:28:16.176421 containerd[1428]: time="2024-07-02T08:28:16.176329522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:28:16.176421 containerd[1428]: time="2024-07-02T08:28:16.176325206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:28:16.176421 containerd[1428]: time="2024-07-02T08:28:16.176342230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:28:16.176421 containerd[1428]: time="2024-07-02T08:28:16.176354697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:28:16.203346 systemd[1]: Started cri-containerd-27c56f40fcbbce58d52b2d787f92feda404ad96cc3b0136cea6ad3857a11e689.scope - libcontainer container 27c56f40fcbbce58d52b2d787f92feda404ad96cc3b0136cea6ad3857a11e689. 
Jul 2 08:28:16.207144 systemd[1]: Started cri-containerd-5f5300d79656beff0d2f8f69cd5a206b0e80d1d681bf83d06d8e5fac6f8f0cc8.scope - libcontainer container 5f5300d79656beff0d2f8f69cd5a206b0e80d1d681bf83d06d8e5fac6f8f0cc8. Jul 2 08:28:16.208115 systemd[1]: Started cri-containerd-d6bb5cfb2c61836f5caadc5e8d43e5430c2b33fd865ca785a3f4d84f0464719e.scope - libcontainer container d6bb5cfb2c61836f5caadc5e8d43e5430c2b33fd865ca785a3f4d84f0464719e. Jul 2 08:28:16.227526 kubelet[2128]: E0702 08:28:16.227499 2128 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="1.6s" Jul 2 08:28:16.236601 containerd[1428]: time="2024-07-02T08:28:16.236508004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"27c56f40fcbbce58d52b2d787f92feda404ad96cc3b0136cea6ad3857a11e689\"" Jul 2 08:28:16.237945 kubelet[2128]: E0702 08:28:16.237861 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:16.240979 containerd[1428]: time="2024-07-02T08:28:16.240711611Z" level=info msg="CreateContainer within sandbox \"27c56f40fcbbce58d52b2d787f92feda404ad96cc3b0136cea6ad3857a11e689\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 08:28:16.244031 containerd[1428]: time="2024-07-02T08:28:16.243912829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f5300d79656beff0d2f8f69cd5a206b0e80d1d681bf83d06d8e5fac6f8f0cc8\"" Jul 2 08:28:16.244525 kubelet[2128]: E0702 08:28:16.244503 2128 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:16.245991 containerd[1428]: time="2024-07-02T08:28:16.245962482Z" level=info msg="CreateContainer within sandbox \"5f5300d79656beff0d2f8f69cd5a206b0e80d1d681bf83d06d8e5fac6f8f0cc8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 08:28:16.247281 containerd[1428]: time="2024-07-02T08:28:16.247235489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:22d81e7f0edf4ed42deb73c4bcf3ae68,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6bb5cfb2c61836f5caadc5e8d43e5430c2b33fd865ca785a3f4d84f0464719e\"" Jul 2 08:28:16.248502 kubelet[2128]: E0702 08:28:16.248467 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:16.250555 containerd[1428]: time="2024-07-02T08:28:16.250513672Z" level=info msg="CreateContainer within sandbox \"d6bb5cfb2c61836f5caadc5e8d43e5430c2b33fd865ca785a3f4d84f0464719e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 08:28:16.258798 containerd[1428]: time="2024-07-02T08:28:16.258765116Z" level=info msg="CreateContainer within sandbox \"27c56f40fcbbce58d52b2d787f92feda404ad96cc3b0136cea6ad3857a11e689\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a12b37cef7ae2bd9c785e3d8f7162432dcad01c6846a5c9cbca0549ecf44b0d5\"" Jul 2 08:28:16.259980 containerd[1428]: time="2024-07-02T08:28:16.259801551Z" level=info msg="StartContainer for \"a12b37cef7ae2bd9c785e3d8f7162432dcad01c6846a5c9cbca0549ecf44b0d5\"" Jul 2 08:28:16.261416 containerd[1428]: time="2024-07-02T08:28:16.261384777Z" level=info msg="CreateContainer within sandbox \"5f5300d79656beff0d2f8f69cd5a206b0e80d1d681bf83d06d8e5fac6f8f0cc8\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4a880474c40f075aea996afb917bdb1a0756c806b522395067b2f8605c439ecf\"" Jul 2 08:28:16.262025 containerd[1428]: time="2024-07-02T08:28:16.261974965Z" level=info msg="StartContainer for \"4a880474c40f075aea996afb917bdb1a0756c806b522395067b2f8605c439ecf\"" Jul 2 08:28:16.264242 containerd[1428]: time="2024-07-02T08:28:16.264090515Z" level=info msg="CreateContainer within sandbox \"d6bb5cfb2c61836f5caadc5e8d43e5430c2b33fd865ca785a3f4d84f0464719e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f62096e910dc686a6f6c2107bb4a66b6bfd16073d46425385ae18f430a7fc11b\"" Jul 2 08:28:16.264869 containerd[1428]: time="2024-07-02T08:28:16.264844584Z" level=info msg="StartContainer for \"f62096e910dc686a6f6c2107bb4a66b6bfd16073d46425385ae18f430a7fc11b\"" Jul 2 08:28:16.295331 systemd[1]: Started cri-containerd-4a880474c40f075aea996afb917bdb1a0756c806b522395067b2f8605c439ecf.scope - libcontainer container 4a880474c40f075aea996afb917bdb1a0756c806b522395067b2f8605c439ecf. Jul 2 08:28:16.296381 systemd[1]: Started cri-containerd-a12b37cef7ae2bd9c785e3d8f7162432dcad01c6846a5c9cbca0549ecf44b0d5.scope - libcontainer container a12b37cef7ae2bd9c785e3d8f7162432dcad01c6846a5c9cbca0549ecf44b0d5. Jul 2 08:28:16.299249 systemd[1]: Started cri-containerd-f62096e910dc686a6f6c2107bb4a66b6bfd16073d46425385ae18f430a7fc11b.scope - libcontainer container f62096e910dc686a6f6c2107bb4a66b6bfd16073d46425385ae18f430a7fc11b. 
Jul 2 08:28:16.329477 containerd[1428]: time="2024-07-02T08:28:16.329391993Z" level=info msg="StartContainer for \"4a880474c40f075aea996afb917bdb1a0756c806b522395067b2f8605c439ecf\" returns successfully" Jul 2 08:28:16.331912 kubelet[2128]: I0702 08:28:16.331813 2128 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 08:28:16.332586 kubelet[2128]: E0702 08:28:16.332431 2128 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost" Jul 2 08:28:16.334569 containerd[1428]: time="2024-07-02T08:28:16.334522341Z" level=info msg="StartContainer for \"f62096e910dc686a6f6c2107bb4a66b6bfd16073d46425385ae18f430a7fc11b\" returns successfully" Jul 2 08:28:16.350430 containerd[1428]: time="2024-07-02T08:28:16.350388366Z" level=info msg="StartContainer for \"a12b37cef7ae2bd9c785e3d8f7162432dcad01c6846a5c9cbca0549ecf44b0d5\" returns successfully" Jul 2 08:28:16.354922 kubelet[2128]: W0702 08:28:16.354878 2128 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Jul 2 08:28:16.354986 kubelet[2128]: E0702 08:28:16.354931 2128 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Jul 2 08:28:16.389333 kubelet[2128]: W0702 08:28:16.389288 2128 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Jul 2 
08:28:16.389333 kubelet[2128]: E0702 08:28:16.389335 2128 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Jul 2 08:28:16.850494 kubelet[2128]: E0702 08:28:16.850463 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:16.856590 kubelet[2128]: E0702 08:28:16.856561 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:16.858644 kubelet[2128]: E0702 08:28:16.858620 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:17.861445 kubelet[2128]: E0702 08:28:17.861416 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:17.934302 kubelet[2128]: I0702 08:28:17.934274 2128 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 08:28:18.837907 kubelet[2128]: E0702 08:28:18.837870 2128 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 2 08:28:18.897207 kubelet[2128]: I0702 08:28:18.897063 2128 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 08:28:18.904141 kubelet[2128]: E0702 08:28:18.904086 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 08:28:19.004819 kubelet[2128]: E0702 08:28:19.004782 2128 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 08:28:19.105572 kubelet[2128]: E0702 08:28:19.105300 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 08:28:19.206096 kubelet[2128]: E0702 08:28:19.206062 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 08:28:19.306975 kubelet[2128]: E0702 08:28:19.306929 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 08:28:19.407664 kubelet[2128]: E0702 08:28:19.407418 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 08:28:19.777858 kubelet[2128]: E0702 08:28:19.777743 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:19.812730 kubelet[2128]: I0702 08:28:19.812679 2128 apiserver.go:52] "Watching apiserver" Jul 2 08:28:19.825249 kubelet[2128]: I0702 08:28:19.825223 2128 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 08:28:19.864280 kubelet[2128]: E0702 08:28:19.864155 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:21.276256 systemd[1]: Reloading requested from client PID 2404 ('systemctl') (unit session-7.scope)... Jul 2 08:28:21.276271 systemd[1]: Reloading... Jul 2 08:28:21.326224 zram_generator::config[2441]: No configuration found. Jul 2 08:28:21.406348 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 2 08:28:21.470911 systemd[1]: Reloading finished in 194 ms.
Jul 2 08:28:21.503338 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:28:21.511075 systemd[1]: kubelet.service: Deactivated successfully.
Jul 2 08:28:21.512242 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:28:21.512372 systemd[1]: kubelet.service: Consumed 2.005s CPU time, 114.8M memory peak, 0B memory swap peak.
Jul 2 08:28:21.519511 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:28:21.603601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:28:21.607928 (kubelet)[2483]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 08:28:21.654974 kubelet[2483]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:28:21.654974 kubelet[2483]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 08:28:21.654974 kubelet[2483]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:28:21.655437 kubelet[2483]: I0702 08:28:21.655006 2483 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 08:28:21.659201 kubelet[2483]: I0702 08:28:21.659146 2483 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jul 2 08:28:21.659201 kubelet[2483]: I0702 08:28:21.659190 2483 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 08:28:21.659706 kubelet[2483]: I0702 08:28:21.659350 2483 server.go:895] "Client rotation is on, will bootstrap in background"
Jul 2 08:28:21.660894 kubelet[2483]: I0702 08:28:21.660858 2483 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 2 08:28:21.662880 kubelet[2483]: I0702 08:28:21.662841 2483 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 08:28:21.670704 kubelet[2483]: W0702 08:28:21.670672 2483 machine.go:65] Cannot read vendor id correctly, set empty.
Jul 2 08:28:21.671418 kubelet[2483]: I0702 08:28:21.671390 2483 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 08:28:21.671584 kubelet[2483]: I0702 08:28:21.671570 2483 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 08:28:21.671757 kubelet[2483]: I0702 08:28:21.671739 2483 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 08:28:21.671835 kubelet[2483]: I0702 08:28:21.671772 2483 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 08:28:21.671835 kubelet[2483]: I0702 08:28:21.671780 2483 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 08:28:21.671835 kubelet[2483]: I0702 08:28:21.671817 2483 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:28:21.671914 kubelet[2483]: I0702 08:28:21.671903 2483 kubelet.go:393] "Attempting to sync node with API server"
Jul 2 08:28:21.671936 kubelet[2483]: I0702 08:28:21.671923 2483 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 08:28:21.671959 kubelet[2483]: I0702 08:28:21.671944 2483 kubelet.go:309] "Adding apiserver pod source"
Jul 2 08:28:21.671959 kubelet[2483]: I0702 08:28:21.671954 2483 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 08:28:21.672743 kubelet[2483]: I0702 08:28:21.672628 2483 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 08:28:21.673624 kubelet[2483]: I0702 08:28:21.673536 2483 server.go:1232] "Started kubelet"
Jul 2 08:28:21.673621 sudo[2498]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 2 08:28:21.673861 sudo[2498]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jul 2 08:28:21.673962 kubelet[2483]: I0702 08:28:21.673806 2483 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jul 2 08:28:21.676189 kubelet[2483]: I0702 08:28:21.674095 2483 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 08:28:21.676189 kubelet[2483]: I0702 08:28:21.674869 2483 server.go:462] "Adding debug handlers to kubelet server"
Jul 2 08:28:21.676310 kubelet[2483]: I0702 08:28:21.676285 2483 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 08:28:21.676660 kubelet[2483]: I0702 08:28:21.676641 2483 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 08:28:21.677277 kubelet[2483]: E0702 08:28:21.677215 2483 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jul 2 08:28:21.677277 kubelet[2483]: E0702 08:28:21.677246 2483 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 08:28:21.678559 kubelet[2483]: E0702 08:28:21.678524 2483 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 2 08:28:21.678559 kubelet[2483]: I0702 08:28:21.678553 2483 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 08:28:21.678648 kubelet[2483]: I0702 08:28:21.678634 2483 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 08:28:21.678777 kubelet[2483]: I0702 08:28:21.678756 2483 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 08:28:21.715014 kubelet[2483]: I0702 08:28:21.714988 2483 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 08:28:21.716573 kubelet[2483]: I0702 08:28:21.716554 2483 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 08:28:21.716686 kubelet[2483]: I0702 08:28:21.716675 2483 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 08:28:21.716759 kubelet[2483]: I0702 08:28:21.716749 2483 kubelet.go:2303] "Starting kubelet main sync loop"
Jul 2 08:28:21.716919 kubelet[2483]: E0702 08:28:21.716904 2483 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 08:28:21.744540 kubelet[2483]: I0702 08:28:21.744506 2483 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 08:28:21.744540 kubelet[2483]: I0702 08:28:21.744534 2483 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 08:28:21.744698 kubelet[2483]: I0702 08:28:21.744553 2483 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:28:21.744763 kubelet[2483]: I0702 08:28:21.744721 2483 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 2 08:28:21.744854 kubelet[2483]: I0702 08:28:21.744836 2483 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 2 08:28:21.744854 kubelet[2483]: I0702 08:28:21.744853 2483 policy_none.go:49] "None policy: Start"
Jul 2 08:28:21.745642 kubelet[2483]: I0702 08:28:21.745617 2483 memory_manager.go:169] "Starting memorymanager" policy="None"
Jul 2 08:28:21.745711 kubelet[2483]: I0702 08:28:21.745648 2483 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 08:28:21.745859 kubelet[2483]: I0702 08:28:21.745840 2483 state_mem.go:75] "Updated machine memory state"
Jul 2 08:28:21.749438 kubelet[2483]: I0702 08:28:21.749415 2483 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 08:28:21.749977 kubelet[2483]: I0702 08:28:21.749609 2483 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 08:28:21.782456 kubelet[2483]: I0702 08:28:21.782432 2483 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Jul 2 08:28:21.788671 kubelet[2483]: I0702 08:28:21.788157 2483 kubelet_node_status.go:108] "Node was previously registered" node="localhost"
Jul 2 08:28:21.788671 kubelet[2483]: I0702 08:28:21.788253 2483 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Jul 2 08:28:21.818719 kubelet[2483]: I0702 08:28:21.818690 2483 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jul 2 08:28:21.818839 kubelet[2483]: I0702 08:28:21.818808 2483 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jul 2 08:28:21.818867 kubelet[2483]: I0702 08:28:21.818850 2483 topology_manager.go:215] "Topology Admit Handler" podUID="22d81e7f0edf4ed42deb73c4bcf3ae68" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jul 2 08:28:21.830005 kubelet[2483]: E0702 08:28:21.829958 2483 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jul 2 08:28:21.880879 kubelet[2483]: I0702 08:28:21.880073 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost"
Jul 2 08:28:21.880879 kubelet[2483]: I0702 08:28:21.880115 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 08:28:21.880879 kubelet[2483]: I0702 08:28:21.880138 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 08:28:21.880879 kubelet[2483]: I0702 08:28:21.880194 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 08:28:21.880879 kubelet[2483]: I0702 08:28:21.880239 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22d81e7f0edf4ed42deb73c4bcf3ae68-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"22d81e7f0edf4ed42deb73c4bcf3ae68\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 08:28:21.881195 kubelet[2483]: I0702 08:28:21.880263 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22d81e7f0edf4ed42deb73c4bcf3ae68-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"22d81e7f0edf4ed42deb73c4bcf3ae68\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 08:28:21.881195 kubelet[2483]: I0702 08:28:21.880295 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22d81e7f0edf4ed42deb73c4bcf3ae68-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"22d81e7f0edf4ed42deb73c4bcf3ae68\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 08:28:21.881195 kubelet[2483]: I0702 08:28:21.880315 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 08:28:21.881195 kubelet[2483]: I0702 08:28:21.880332 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 08:28:22.124829 sudo[2498]: pam_unix(sudo:session): session closed for user root
Jul 2 08:28:22.128249 kubelet[2483]: E0702 08:28:22.128223 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:22.131349 kubelet[2483]: E0702 08:28:22.131137 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:22.131349 kubelet[2483]: E0702 08:28:22.131205 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:22.672645 kubelet[2483]: I0702 08:28:22.672585 2483 apiserver.go:52] "Watching apiserver"
Jul 2 08:28:22.679203 kubelet[2483]: I0702 08:28:22.679156 2483 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jul 2 08:28:22.729139 kubelet[2483]: E0702 08:28:22.727680 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:22.729139 kubelet[2483]: E0702 08:28:22.728604 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:22.731569 kubelet[2483]: E0702 08:28:22.731530 2483 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 2 08:28:22.733525 kubelet[2483]: E0702 08:28:22.733503 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:22.754040 kubelet[2483]: I0702 08:28:22.753991 2483 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.753942548 podCreationTimestamp="2024-07-02 08:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:28:22.747623927 +0000 UTC m=+1.135770531" watchObservedRunningTime="2024-07-02 08:28:22.753942548 +0000 UTC m=+1.142089152"
Jul 2 08:28:22.754182 kubelet[2483]: I0702 08:28:22.754088 2483 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.754069664 podCreationTimestamp="2024-07-02 08:28:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:28:22.753802588 +0000 UTC m=+1.141949192" watchObservedRunningTime="2024-07-02 08:28:22.754069664 +0000 UTC m=+1.142216308"
Jul 2 08:28:23.730291 kubelet[2483]: E0702 08:28:23.728929 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:23.730291 kubelet[2483]: E0702 08:28:23.729122 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:23.820065 sudo[1612]: pam_unix(sudo:session): session closed for user root
Jul 2 08:28:23.821444 sshd[1609]: pam_unix(sshd:session): session closed for user core
Jul 2 08:28:23.824832 systemd[1]: sshd@6-10.0.0.104:22-10.0.0.1:54880.service: Deactivated successfully.
Jul 2 08:28:23.826386 systemd[1]: session-7.scope: Deactivated successfully.
Jul 2 08:28:23.826567 systemd[1]: session-7.scope: Consumed 8.785s CPU time, 134.1M memory peak, 0B memory swap peak.
Jul 2 08:28:23.828473 systemd-logind[1412]: Session 7 logged out. Waiting for processes to exit.
Jul 2 08:28:23.829596 systemd-logind[1412]: Removed session 7.
Jul 2 08:28:24.731327 kubelet[2483]: E0702 08:28:24.731299 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:29.684577 kubelet[2483]: E0702 08:28:29.684535 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:29.699177 kubelet[2483]: I0702 08:28:29.699115 2483 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=8.699010692 podCreationTimestamp="2024-07-02 08:28:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:28:22.760229919 +0000 UTC m=+1.148376523" watchObservedRunningTime="2024-07-02 08:28:29.699010692 +0000 UTC m=+8.087157296"
Jul 2 08:28:29.740178 kubelet[2483]: E0702 08:28:29.740124 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:32.131225 update_engine[1418]: I0702 08:28:32.130852 1418 update_attempter.cc:509] Updating boot flags...
Jul 2 08:28:32.151953 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2567)
Jul 2 08:28:33.175036 kubelet[2483]: E0702 08:28:33.174997 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:33.385449 kubelet[2483]: E0702 08:28:33.384372 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:37.284320 kubelet[2483]: I0702 08:28:37.284280 2483 topology_manager.go:215] "Topology Admit Handler" podUID="0cf4a10e-0618-4e58-acbb-819378a1bad3" podNamespace="kube-system" podName="kube-proxy-cbx6l"
Jul 2 08:28:37.286418 kubelet[2483]: I0702 08:28:37.284966 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0cf4a10e-0618-4e58-acbb-819378a1bad3-lib-modules\") pod \"kube-proxy-cbx6l\" (UID: \"0cf4a10e-0618-4e58-acbb-819378a1bad3\") " pod="kube-system/kube-proxy-cbx6l"
Jul 2 08:28:37.286418 kubelet[2483]: I0702 08:28:37.285006 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0cf4a10e-0618-4e58-acbb-819378a1bad3-kube-proxy\") pod \"kube-proxy-cbx6l\" (UID: \"0cf4a10e-0618-4e58-acbb-819378a1bad3\") " pod="kube-system/kube-proxy-cbx6l"
Jul 2 08:28:37.286418 kubelet[2483]: I0702 08:28:37.285026 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0cf4a10e-0618-4e58-acbb-819378a1bad3-xtables-lock\") pod \"kube-proxy-cbx6l\" (UID: \"0cf4a10e-0618-4e58-acbb-819378a1bad3\") " pod="kube-system/kube-proxy-cbx6l"
Jul 2 08:28:37.286418 kubelet[2483]: I0702 08:28:37.285046 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8wgj\" (UniqueName: \"kubernetes.io/projected/0cf4a10e-0618-4e58-acbb-819378a1bad3-kube-api-access-f8wgj\") pod \"kube-proxy-cbx6l\" (UID: \"0cf4a10e-0618-4e58-acbb-819378a1bad3\") " pod="kube-system/kube-proxy-cbx6l"
Jul 2 08:28:37.291181 kubelet[2483]: I0702 08:28:37.291138 2483 topology_manager.go:215] "Topology Admit Handler" podUID="4468e7a9-c994-4c49-80f6-a439ff82a97a" podNamespace="kube-system" podName="cilium-f7lnl"
Jul 2 08:28:37.300589 systemd[1]: Created slice kubepods-besteffort-pod0cf4a10e_0618_4e58_acbb_819378a1bad3.slice - libcontainer container kubepods-besteffort-pod0cf4a10e_0618_4e58_acbb_819378a1bad3.slice.
Jul 2 08:28:37.314742 systemd[1]: Created slice kubepods-burstable-pod4468e7a9_c994_4c49_80f6_a439ff82a97a.slice - libcontainer container kubepods-burstable-pod4468e7a9_c994_4c49_80f6_a439ff82a97a.slice.
Jul 2 08:28:37.351858 kubelet[2483]: I0702 08:28:37.351829 2483 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 2 08:28:37.352435 containerd[1428]: time="2024-07-02T08:28:37.352331945Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 2 08:28:37.352832 kubelet[2483]: I0702 08:28:37.352591 2483 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 2 08:28:37.358267 kubelet[2483]: I0702 08:28:37.357005 2483 topology_manager.go:215] "Topology Admit Handler" podUID="9d6eec17-7108-4d26-b2a4-07574d4ed9c0" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-qbf5q"
Jul 2 08:28:37.364059 systemd[1]: Created slice kubepods-besteffort-pod9d6eec17_7108_4d26_b2a4_07574d4ed9c0.slice - libcontainer container kubepods-besteffort-pod9d6eec17_7108_4d26_b2a4_07574d4ed9c0.slice.
Jul 2 08:28:37.486604 kubelet[2483]: I0702 08:28:37.486533 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-cilium-cgroup\") pod \"cilium-f7lnl\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") " pod="kube-system/cilium-f7lnl"
Jul 2 08:28:37.486784 kubelet[2483]: I0702 08:28:37.486682 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-lib-modules\") pod \"cilium-f7lnl\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") " pod="kube-system/cilium-f7lnl"
Jul 2 08:28:37.486948 kubelet[2483]: I0702 08:28:37.486838 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-xtables-lock\") pod \"cilium-f7lnl\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") " pod="kube-system/cilium-f7lnl"
Jul 2 08:28:37.486948 kubelet[2483]: I0702 08:28:37.486867 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-host-proc-sys-net\") pod \"cilium-f7lnl\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") " pod="kube-system/cilium-f7lnl"
Jul 2 08:28:37.486948 kubelet[2483]: I0702 08:28:37.486929 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-cilium-run\") pod \"cilium-f7lnl\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") " pod="kube-system/cilium-f7lnl"
Jul 2 08:28:37.487410 kubelet[2483]: I0702 08:28:37.486992 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww8pn\" (UniqueName: \"kubernetes.io/projected/4468e7a9-c994-4c49-80f6-a439ff82a97a-kube-api-access-ww8pn\") pod \"cilium-f7lnl\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") " pod="kube-system/cilium-f7lnl"
Jul 2 08:28:37.487410 kubelet[2483]: I0702 08:28:37.487031 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d6eec17-7108-4d26-b2a4-07574d4ed9c0-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-qbf5q\" (UID: \"9d6eec17-7108-4d26-b2a4-07574d4ed9c0\") " pod="kube-system/cilium-operator-6bc8ccdb58-qbf5q"
Jul 2 08:28:37.487410 kubelet[2483]: I0702 08:28:37.487067 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-bpf-maps\") pod \"cilium-f7lnl\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") " pod="kube-system/cilium-f7lnl"
Jul 2 08:28:37.487410 kubelet[2483]: I0702 08:28:37.487101 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4468e7a9-c994-4c49-80f6-a439ff82a97a-clustermesh-secrets\") pod \"cilium-f7lnl\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") " pod="kube-system/cilium-f7lnl"
Jul 2 08:28:37.487410 kubelet[2483]: I0702 08:28:37.487121 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4468e7a9-c994-4c49-80f6-a439ff82a97a-cilium-config-path\") pod \"cilium-f7lnl\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") " pod="kube-system/cilium-f7lnl"
Jul 2 08:28:37.487520 kubelet[2483]: I0702 08:28:37.487145 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4468e7a9-c994-4c49-80f6-a439ff82a97a-hubble-tls\") pod \"cilium-f7lnl\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") " pod="kube-system/cilium-f7lnl"
Jul 2 08:28:37.487520 kubelet[2483]: I0702 08:28:37.487187 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-hostproc\") pod \"cilium-f7lnl\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") " pod="kube-system/cilium-f7lnl"
Jul 2 08:28:37.487520 kubelet[2483]: I0702 08:28:37.487215 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-host-proc-sys-kernel\") pod \"cilium-f7lnl\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") " pod="kube-system/cilium-f7lnl"
Jul 2 08:28:37.487520 kubelet[2483]: I0702 08:28:37.487235 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q254\" (UniqueName: \"kubernetes.io/projected/9d6eec17-7108-4d26-b2a4-07574d4ed9c0-kube-api-access-4q254\") pod \"cilium-operator-6bc8ccdb58-qbf5q\" (UID: \"9d6eec17-7108-4d26-b2a4-07574d4ed9c0\") " pod="kube-system/cilium-operator-6bc8ccdb58-qbf5q"
Jul 2 08:28:37.487520 kubelet[2483]: I0702 08:28:37.487254 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-cni-path\") pod \"cilium-f7lnl\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") " pod="kube-system/cilium-f7lnl"
Jul 2 08:28:37.487618 kubelet[2483]: I0702 08:28:37.487273 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-etc-cni-netd\") pod \"cilium-f7lnl\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") " pod="kube-system/cilium-f7lnl"
Jul 2 08:28:37.610323 kubelet[2483]: E0702 08:28:37.609992 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:37.611270 containerd[1428]: time="2024-07-02T08:28:37.610884450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cbx6l,Uid:0cf4a10e-0618-4e58-acbb-819378a1bad3,Namespace:kube-system,Attempt:0,}"
Jul 2 08:28:37.617884 kubelet[2483]: E0702 08:28:37.617859 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:37.618282 containerd[1428]: time="2024-07-02T08:28:37.618241708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f7lnl,Uid:4468e7a9-c994-4c49-80f6-a439ff82a97a,Namespace:kube-system,Attempt:0,}"
Jul 2 08:28:37.635984 containerd[1428]: time="2024-07-02T08:28:37.635775600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:28:37.635984 containerd[1428]: time="2024-07-02T08:28:37.635849130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:28:37.635984 containerd[1428]: time="2024-07-02T08:28:37.635867772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:28:37.635984 containerd[1428]: time="2024-07-02T08:28:37.635885134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:28:37.642380 containerd[1428]: time="2024-07-02T08:28:37.642288586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:28:37.642674 containerd[1428]: time="2024-07-02T08:28:37.642498094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:28:37.642674 containerd[1428]: time="2024-07-02T08:28:37.642556102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:28:37.642674 containerd[1428]: time="2024-07-02T08:28:37.642579665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:28:37.653323 systemd[1]: Started cri-containerd-b273e612de6e94d7a149eaa399a864106ab20ad71e961e8da1dbe799105b0ac4.scope - libcontainer container b273e612de6e94d7a149eaa399a864106ab20ad71e961e8da1dbe799105b0ac4.
Jul 2 08:28:37.656535 systemd[1]: Started cri-containerd-3e97d07149ae53562f594cf9ee2958f06407af85853b109b11944af44dc9e67b.scope - libcontainer container 3e97d07149ae53562f594cf9ee2958f06407af85853b109b11944af44dc9e67b.
Jul 2 08:28:37.667834 kubelet[2483]: E0702 08:28:37.667783 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:37.668289 containerd[1428]: time="2024-07-02T08:28:37.668209073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-qbf5q,Uid:9d6eec17-7108-4d26-b2a4-07574d4ed9c0,Namespace:kube-system,Attempt:0,}"
Jul 2 08:28:37.684989 containerd[1428]: time="2024-07-02T08:28:37.684941698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cbx6l,Uid:0cf4a10e-0618-4e58-acbb-819378a1bad3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b273e612de6e94d7a149eaa399a864106ab20ad71e961e8da1dbe799105b0ac4\""
Jul 2 08:28:37.685930 kubelet[2483]: E0702 08:28:37.685807 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:37.687621 containerd[1428]: time="2024-07-02T08:28:37.687592571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f7lnl,Uid:4468e7a9-c994-4c49-80f6-a439ff82a97a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e97d07149ae53562f594cf9ee2958f06407af85853b109b11944af44dc9e67b\""
Jul 2 08:28:37.688823 kubelet[2483]: E0702 08:28:37.688791 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:37.689444 containerd[1428]: time="2024-07-02T08:28:37.689362406Z" level=info msg="CreateContainer within sandbox \"b273e612de6e94d7a149eaa399a864106ab20ad71e961e8da1dbe799105b0ac4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 2 08:28:37.691034 containerd[1428]: time="2024-07-02T08:28:37.690996824Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 2 08:28:37.697533 containerd[1428]: time="2024-07-02T08:28:37.697423558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:28:37.697533 containerd[1428]: time="2024-07-02T08:28:37.697519451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:28:37.697714 containerd[1428]: time="2024-07-02T08:28:37.697558616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:28:37.697714 containerd[1428]: time="2024-07-02T08:28:37.697589140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:28:37.705064 containerd[1428]: time="2024-07-02T08:28:37.705022449Z" level=info msg="CreateContainer within sandbox \"b273e612de6e94d7a149eaa399a864106ab20ad71e961e8da1dbe799105b0ac4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e0e133a9bd467afbc5b61577cb68fa34ee7dcff7be5b58647d3a0157cba6c099\""
Jul 2 08:28:37.705995 containerd[1428]: time="2024-07-02T08:28:37.705967815Z" level=info msg="StartContainer for \"e0e133a9bd467afbc5b61577cb68fa34ee7dcff7be5b58647d3a0157cba6c099\""
Jul 2 08:28:37.720329 systemd[1]: Started cri-containerd-93ecbb2b6f165b2130f2121c2fc16203fb077aeb78975a3e9dc9be90044b1817.scope - libcontainer container 93ecbb2b6f165b2130f2121c2fc16203fb077aeb78975a3e9dc9be90044b1817.
Jul 2 08:28:37.742320 systemd[1]: Started cri-containerd-e0e133a9bd467afbc5b61577cb68fa34ee7dcff7be5b58647d3a0157cba6c099.scope - libcontainer container e0e133a9bd467afbc5b61577cb68fa34ee7dcff7be5b58647d3a0157cba6c099.
Jul 2 08:28:37.751045 containerd[1428]: time="2024-07-02T08:28:37.750997483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-qbf5q,Uid:9d6eec17-7108-4d26-b2a4-07574d4ed9c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"93ecbb2b6f165b2130f2121c2fc16203fb077aeb78975a3e9dc9be90044b1817\""
Jul 2 08:28:37.751733 kubelet[2483]: E0702 08:28:37.751709 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:37.773156 containerd[1428]: time="2024-07-02T08:28:37.773113504Z" level=info msg="StartContainer for \"e0e133a9bd467afbc5b61577cb68fa34ee7dcff7be5b58647d3a0157cba6c099\" returns successfully"
Jul 2 08:28:38.768220 kubelet[2483]: E0702 08:28:38.768043 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:39.773207 kubelet[2483]: E0702 08:28:39.773148 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:40.133968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2071680411.mount: Deactivated successfully.
Jul 2 08:28:41.758858 kubelet[2483]: I0702 08:28:41.758014 2483 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-cbx6l" podStartSLOduration=4.757976229 podCreationTimestamp="2024-07-02 08:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:28:38.778888403 +0000 UTC m=+17.167035007" watchObservedRunningTime="2024-07-02 08:28:41.757976229 +0000 UTC m=+20.146122833" Jul 2 08:28:43.161677 containerd[1428]: time="2024-07-02T08:28:43.161620460Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:28:43.162142 containerd[1428]: time="2024-07-02T08:28:43.162099709Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651458" Jul 2 08:28:43.163060 containerd[1428]: time="2024-07-02T08:28:43.163024964Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:28:43.164736 containerd[1428]: time="2024-07-02T08:28:43.164695735Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.473653066s" Jul 2 08:28:43.164790 containerd[1428]: time="2024-07-02T08:28:43.164747781Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 2 08:28:43.173824 containerd[1428]: time="2024-07-02T08:28:43.170144494Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 08:28:43.173824 containerd[1428]: time="2024-07-02T08:28:43.171077309Z" level=info msg="CreateContainer within sandbox \"3e97d07149ae53562f594cf9ee2958f06407af85853b109b11944af44dc9e67b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:28:43.196757 containerd[1428]: time="2024-07-02T08:28:43.196700655Z" level=info msg="CreateContainer within sandbox \"3e97d07149ae53562f594cf9ee2958f06407af85853b109b11944af44dc9e67b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"438d01be81ada6e4cad52b8ad2571ec60a8183f20106b9b21c7791b9c62af427\"" Jul 2 08:28:43.197229 containerd[1428]: time="2024-07-02T08:28:43.197207387Z" level=info msg="StartContainer for \"438d01be81ada6e4cad52b8ad2571ec60a8183f20106b9b21c7791b9c62af427\"" Jul 2 08:28:43.228385 systemd[1]: Started cri-containerd-438d01be81ada6e4cad52b8ad2571ec60a8183f20106b9b21c7791b9c62af427.scope - libcontainer container 438d01be81ada6e4cad52b8ad2571ec60a8183f20106b9b21c7791b9c62af427. Jul 2 08:28:43.296914 systemd[1]: cri-containerd-438d01be81ada6e4cad52b8ad2571ec60a8183f20106b9b21c7791b9c62af427.scope: Deactivated successfully. 
Jul 2 08:28:43.311725 containerd[1428]: time="2024-07-02T08:28:43.311466616Z" level=info msg="StartContainer for \"438d01be81ada6e4cad52b8ad2571ec60a8183f20106b9b21c7791b9c62af427\" returns successfully" Jul 2 08:28:43.367487 containerd[1428]: time="2024-07-02T08:28:43.367427791Z" level=info msg="shim disconnected" id=438d01be81ada6e4cad52b8ad2571ec60a8183f20106b9b21c7791b9c62af427 namespace=k8s.io Jul 2 08:28:43.367924 containerd[1428]: time="2024-07-02T08:28:43.367748904Z" level=warning msg="cleaning up after shim disconnected" id=438d01be81ada6e4cad52b8ad2571ec60a8183f20106b9b21c7791b9c62af427 namespace=k8s.io Jul 2 08:28:43.367924 containerd[1428]: time="2024-07-02T08:28:43.367767946Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:28:43.806344 kubelet[2483]: E0702 08:28:43.806304 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:43.809288 containerd[1428]: time="2024-07-02T08:28:43.809202905Z" level=info msg="CreateContainer within sandbox \"3e97d07149ae53562f594cf9ee2958f06407af85853b109b11944af44dc9e67b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 08:28:43.824890 containerd[1428]: time="2024-07-02T08:28:43.824842428Z" level=info msg="CreateContainer within sandbox \"3e97d07149ae53562f594cf9ee2958f06407af85853b109b11944af44dc9e67b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9380a4db2301630d2c31cb2174e3d0a6ca3d66da2b8929dcb070b16a476c031f\"" Jul 2 08:28:43.825345 containerd[1428]: time="2024-07-02T08:28:43.825317877Z" level=info msg="StartContainer for \"9380a4db2301630d2c31cb2174e3d0a6ca3d66da2b8929dcb070b16a476c031f\"" Jul 2 08:28:43.854345 systemd[1]: Started cri-containerd-9380a4db2301630d2c31cb2174e3d0a6ca3d66da2b8929dcb070b16a476c031f.scope - libcontainer container 
9380a4db2301630d2c31cb2174e3d0a6ca3d66da2b8929dcb070b16a476c031f. Jul 2 08:28:43.877794 containerd[1428]: time="2024-07-02T08:28:43.877693164Z" level=info msg="StartContainer for \"9380a4db2301630d2c31cb2174e3d0a6ca3d66da2b8929dcb070b16a476c031f\" returns successfully" Jul 2 08:28:43.907981 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 08:28:43.908230 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:28:43.908297 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 2 08:28:43.918607 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 08:28:43.920474 systemd[1]: cri-containerd-9380a4db2301630d2c31cb2174e3d0a6ca3d66da2b8929dcb070b16a476c031f.scope: Deactivated successfully. Jul 2 08:28:43.960387 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:28:43.966734 containerd[1428]: time="2024-07-02T08:28:43.966607396Z" level=info msg="shim disconnected" id=9380a4db2301630d2c31cb2174e3d0a6ca3d66da2b8929dcb070b16a476c031f namespace=k8s.io Jul 2 08:28:43.966734 containerd[1428]: time="2024-07-02T08:28:43.966727448Z" level=warning msg="cleaning up after shim disconnected" id=9380a4db2301630d2c31cb2174e3d0a6ca3d66da2b8929dcb070b16a476c031f namespace=k8s.io Jul 2 08:28:43.966734 containerd[1428]: time="2024-07-02T08:28:43.966741290Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:28:44.192288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-438d01be81ada6e4cad52b8ad2571ec60a8183f20106b9b21c7791b9c62af427-rootfs.mount: Deactivated successfully. 
Jul 2 08:28:44.250621 containerd[1428]: time="2024-07-02T08:28:44.250577254Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:28:44.251079 containerd[1428]: time="2024-07-02T08:28:44.250992015Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138326" Jul 2 08:28:44.251772 containerd[1428]: time="2024-07-02T08:28:44.251715326Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:28:44.253199 containerd[1428]: time="2024-07-02T08:28:44.253131305Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.082936126s" Jul 2 08:28:44.253364 containerd[1428]: time="2024-07-02T08:28:44.253280360Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 2 08:28:44.256380 containerd[1428]: time="2024-07-02T08:28:44.256251772Z" level=info msg="CreateContainer within sandbox \"93ecbb2b6f165b2130f2121c2fc16203fb077aeb78975a3e9dc9be90044b1817\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 08:28:44.265195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3323080943.mount: Deactivated successfully. 
Jul 2 08:28:44.267609 containerd[1428]: time="2024-07-02T08:28:44.267564406Z" level=info msg="CreateContainer within sandbox \"93ecbb2b6f165b2130f2121c2fc16203fb077aeb78975a3e9dc9be90044b1817\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb\"" Jul 2 08:28:44.269459 containerd[1428]: time="2024-07-02T08:28:44.268013650Z" level=info msg="StartContainer for \"228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb\"" Jul 2 08:28:44.294339 systemd[1]: Started cri-containerd-228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb.scope - libcontainer container 228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb. Jul 2 08:28:44.312689 containerd[1428]: time="2024-07-02T08:28:44.312645244Z" level=info msg="StartContainer for \"228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb\" returns successfully" Jul 2 08:28:44.806573 kubelet[2483]: E0702 08:28:44.806537 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:44.811242 kubelet[2483]: E0702 08:28:44.811218 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:44.812633 containerd[1428]: time="2024-07-02T08:28:44.812596064Z" level=info msg="CreateContainer within sandbox \"3e97d07149ae53562f594cf9ee2958f06407af85853b109b11944af44dc9e67b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 08:28:44.833252 kubelet[2483]: I0702 08:28:44.833209 2483 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-qbf5q" podStartSLOduration=1.332057415 podCreationTimestamp="2024-07-02 08:28:37 +0000 UTC" firstStartedPulling="2024-07-02 08:28:37.752389948 
+0000 UTC m=+16.140536552" lastFinishedPulling="2024-07-02 08:28:44.253489501 +0000 UTC m=+22.641636105" observedRunningTime="2024-07-02 08:28:44.833016034 +0000 UTC m=+23.221162638" watchObservedRunningTime="2024-07-02 08:28:44.833156968 +0000 UTC m=+23.221303572" Jul 2 08:28:44.836687 containerd[1428]: time="2024-07-02T08:28:44.836621349Z" level=info msg="CreateContainer within sandbox \"3e97d07149ae53562f594cf9ee2958f06407af85853b109b11944af44dc9e67b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b4dd65d941fff5f32ca789f80cd0d6c1b2e07b5b5042ec79b21f10c4ce5e3b66\"" Jul 2 08:28:44.838824 containerd[1428]: time="2024-07-02T08:28:44.837306297Z" level=info msg="StartContainer for \"b4dd65d941fff5f32ca789f80cd0d6c1b2e07b5b5042ec79b21f10c4ce5e3b66\"" Jul 2 08:28:44.873341 systemd[1]: Started cri-containerd-b4dd65d941fff5f32ca789f80cd0d6c1b2e07b5b5042ec79b21f10c4ce5e3b66.scope - libcontainer container b4dd65d941fff5f32ca789f80cd0d6c1b2e07b5b5042ec79b21f10c4ce5e3b66. Jul 2 08:28:44.939675 systemd[1]: cri-containerd-b4dd65d941fff5f32ca789f80cd0d6c1b2e07b5b5042ec79b21f10c4ce5e3b66.scope: Deactivated successfully. 
Jul 2 08:28:44.956970 containerd[1428]: time="2024-07-02T08:28:44.956907671Z" level=info msg="StartContainer for \"b4dd65d941fff5f32ca789f80cd0d6c1b2e07b5b5042ec79b21f10c4ce5e3b66\" returns successfully" Jul 2 08:28:45.004954 containerd[1428]: time="2024-07-02T08:28:45.004882340Z" level=info msg="shim disconnected" id=b4dd65d941fff5f32ca789f80cd0d6c1b2e07b5b5042ec79b21f10c4ce5e3b66 namespace=k8s.io Jul 2 08:28:45.005250 containerd[1428]: time="2024-07-02T08:28:45.005139605Z" level=warning msg="cleaning up after shim disconnected" id=b4dd65d941fff5f32ca789f80cd0d6c1b2e07b5b5042ec79b21f10c4ce5e3b66 namespace=k8s.io Jul 2 08:28:45.005250 containerd[1428]: time="2024-07-02T08:28:45.005156726Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:28:45.815405 kubelet[2483]: E0702 08:28:45.815365 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:45.823767 kubelet[2483]: E0702 08:28:45.823537 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:45.824980 containerd[1428]: time="2024-07-02T08:28:45.824941934Z" level=info msg="CreateContainer within sandbox \"3e97d07149ae53562f594cf9ee2958f06407af85853b109b11944af44dc9e67b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 08:28:45.839180 containerd[1428]: time="2024-07-02T08:28:45.839127117Z" level=info msg="CreateContainer within sandbox \"3e97d07149ae53562f594cf9ee2958f06407af85853b109b11944af44dc9e67b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4e46fbc51731bdb149b6136a20e9fbdd40c38a042a9c853b0bd5945e71c67b72\"" Jul 2 08:28:45.840521 containerd[1428]: time="2024-07-02T08:28:45.840495567Z" level=info msg="StartContainer for 
\"4e46fbc51731bdb149b6136a20e9fbdd40c38a042a9c853b0bd5945e71c67b72\"" Jul 2 08:28:45.870317 systemd[1]: Started cri-containerd-4e46fbc51731bdb149b6136a20e9fbdd40c38a042a9c853b0bd5945e71c67b72.scope - libcontainer container 4e46fbc51731bdb149b6136a20e9fbdd40c38a042a9c853b0bd5945e71c67b72. Jul 2 08:28:45.887724 systemd[1]: cri-containerd-4e46fbc51731bdb149b6136a20e9fbdd40c38a042a9c853b0bd5945e71c67b72.scope: Deactivated successfully. Jul 2 08:28:45.889612 containerd[1428]: time="2024-07-02T08:28:45.889577333Z" level=info msg="StartContainer for \"4e46fbc51731bdb149b6136a20e9fbdd40c38a042a9c853b0bd5945e71c67b72\" returns successfully" Jul 2 08:28:45.905892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e46fbc51731bdb149b6136a20e9fbdd40c38a042a9c853b0bd5945e71c67b72-rootfs.mount: Deactivated successfully. Jul 2 08:28:45.909970 containerd[1428]: time="2024-07-02T08:28:45.909919579Z" level=info msg="shim disconnected" id=4e46fbc51731bdb149b6136a20e9fbdd40c38a042a9c853b0bd5945e71c67b72 namespace=k8s.io Jul 2 08:28:45.909970 containerd[1428]: time="2024-07-02T08:28:45.909967504Z" level=warning msg="cleaning up after shim disconnected" id=4e46fbc51731bdb149b6136a20e9fbdd40c38a042a9c853b0bd5945e71c67b72 namespace=k8s.io Jul 2 08:28:45.909970 containerd[1428]: time="2024-07-02T08:28:45.909976704Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:28:46.819085 kubelet[2483]: E0702 08:28:46.818625 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:46.825783 containerd[1428]: time="2024-07-02T08:28:46.825717436Z" level=info msg="CreateContainer within sandbox \"3e97d07149ae53562f594cf9ee2958f06407af85853b109b11944af44dc9e67b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 08:28:46.843565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210621805.mount: Deactivated 
successfully. Jul 2 08:28:46.843731 containerd[1428]: time="2024-07-02T08:28:46.843628308Z" level=info msg="CreateContainer within sandbox \"3e97d07149ae53562f594cf9ee2958f06407af85853b109b11944af44dc9e67b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c\"" Jul 2 08:28:46.844331 containerd[1428]: time="2024-07-02T08:28:46.844305690Z" level=info msg="StartContainer for \"85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c\"" Jul 2 08:28:46.863330 systemd[1]: run-containerd-runc-k8s.io-85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c-runc.6kWzt9.mount: Deactivated successfully. Jul 2 08:28:46.873334 systemd[1]: Started cri-containerd-85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c.scope - libcontainer container 85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c. Jul 2 08:28:46.903141 containerd[1428]: time="2024-07-02T08:28:46.902995678Z" level=info msg="StartContainer for \"85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c\" returns successfully" Jul 2 08:28:47.025216 kubelet[2483]: I0702 08:28:47.023486 2483 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 08:28:47.042882 kubelet[2483]: I0702 08:28:47.042841 2483 topology_manager.go:215] "Topology Admit Handler" podUID="ffd4c529-4d21-4454-be03-064098e4c2bb" podNamespace="kube-system" podName="coredns-5dd5756b68-62848" Jul 2 08:28:47.044442 kubelet[2483]: I0702 08:28:47.044394 2483 topology_manager.go:215] "Topology Admit Handler" podUID="c1b56c40-db59-43d8-ab04-7408ff2a77f6" podNamespace="kube-system" podName="coredns-5dd5756b68-tpn45" Jul 2 08:28:47.058412 systemd[1]: Created slice kubepods-burstable-podc1b56c40_db59_43d8_ab04_7408ff2a77f6.slice - libcontainer container kubepods-burstable-podc1b56c40_db59_43d8_ab04_7408ff2a77f6.slice. 
Jul 2 08:28:47.067617 systemd[1]: Created slice kubepods-burstable-podffd4c529_4d21_4454_be03_064098e4c2bb.slice - libcontainer container kubepods-burstable-podffd4c529_4d21_4454_be03_064098e4c2bb.slice. Jul 2 08:28:47.151335 kubelet[2483]: I0702 08:28:47.151053 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w2vf\" (UniqueName: \"kubernetes.io/projected/c1b56c40-db59-43d8-ab04-7408ff2a77f6-kube-api-access-4w2vf\") pod \"coredns-5dd5756b68-tpn45\" (UID: \"c1b56c40-db59-43d8-ab04-7408ff2a77f6\") " pod="kube-system/coredns-5dd5756b68-tpn45" Jul 2 08:28:47.151335 kubelet[2483]: I0702 08:28:47.151104 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffd4c529-4d21-4454-be03-064098e4c2bb-config-volume\") pod \"coredns-5dd5756b68-62848\" (UID: \"ffd4c529-4d21-4454-be03-064098e4c2bb\") " pod="kube-system/coredns-5dd5756b68-62848" Jul 2 08:28:47.151335 kubelet[2483]: I0702 08:28:47.151162 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwwsx\" (UniqueName: \"kubernetes.io/projected/ffd4c529-4d21-4454-be03-064098e4c2bb-kube-api-access-mwwsx\") pod \"coredns-5dd5756b68-62848\" (UID: \"ffd4c529-4d21-4454-be03-064098e4c2bb\") " pod="kube-system/coredns-5dd5756b68-62848" Jul 2 08:28:47.151335 kubelet[2483]: I0702 08:28:47.151254 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1b56c40-db59-43d8-ab04-7408ff2a77f6-config-volume\") pod \"coredns-5dd5756b68-tpn45\" (UID: \"c1b56c40-db59-43d8-ab04-7408ff2a77f6\") " pod="kube-system/coredns-5dd5756b68-tpn45" Jul 2 08:28:47.364314 kubelet[2483]: E0702 08:28:47.364277 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:47.366427 containerd[1428]: time="2024-07-02T08:28:47.366378890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-tpn45,Uid:c1b56c40-db59-43d8-ab04-7408ff2a77f6,Namespace:kube-system,Attempt:0,}" Jul 2 08:28:47.370964 kubelet[2483]: E0702 08:28:47.370690 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:47.373011 containerd[1428]: time="2024-07-02T08:28:47.371300762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-62848,Uid:ffd4c529-4d21-4454-be03-064098e4c2bb,Namespace:kube-system,Attempt:0,}" Jul 2 08:28:47.823358 kubelet[2483]: E0702 08:28:47.823314 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:47.838389 kubelet[2483]: I0702 08:28:47.838348 2483 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-f7lnl" podStartSLOduration=5.363718388 podCreationTimestamp="2024-07-02 08:28:37 +0000 UTC" firstStartedPulling="2024-07-02 08:28:37.690478275 +0000 UTC m=+16.078624879" lastFinishedPulling="2024-07-02 08:28:43.165064613 +0000 UTC m=+21.553211217" observedRunningTime="2024-07-02 08:28:47.836744749 +0000 UTC m=+26.224891353" watchObservedRunningTime="2024-07-02 08:28:47.838304726 +0000 UTC m=+26.226451330" Jul 2 08:28:48.825399 kubelet[2483]: E0702 08:28:48.825356 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:49.124407 systemd-networkd[1375]: cilium_host: Link UP Jul 2 08:28:49.125885 systemd-networkd[1375]: cilium_net: Link UP Jul 2 08:28:49.125891 systemd-networkd[1375]: cilium_net: 
Gained carrier Jul 2 08:28:49.126120 systemd-networkd[1375]: cilium_host: Gained carrier Jul 2 08:28:49.126306 systemd-networkd[1375]: cilium_net: Gained IPv6LL Jul 2 08:28:49.126507 systemd-networkd[1375]: cilium_host: Gained IPv6LL Jul 2 08:28:49.213907 systemd-networkd[1375]: cilium_vxlan: Link UP Jul 2 08:28:49.213919 systemd-networkd[1375]: cilium_vxlan: Gained carrier Jul 2 08:28:49.510198 kernel: NET: Registered PF_ALG protocol family Jul 2 08:28:49.827100 kubelet[2483]: E0702 08:28:49.827065 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:50.081423 systemd-networkd[1375]: lxc_health: Link UP Jul 2 08:28:50.089983 systemd-networkd[1375]: lxc_health: Gained carrier Jul 2 08:28:50.524661 systemd-networkd[1375]: lxc5808c688ae0f: Link UP Jul 2 08:28:50.531151 systemd-networkd[1375]: lxc92b808c124d6: Link UP Jul 2 08:28:50.540198 kernel: eth0: renamed from tmp8cd17 Jul 2 08:28:50.547193 kernel: eth0: renamed from tmpe677f Jul 2 08:28:50.548759 systemd-networkd[1375]: lxc5808c688ae0f: Gained carrier Jul 2 08:28:50.550817 systemd-networkd[1375]: lxc92b808c124d6: Gained carrier Jul 2 08:28:51.201302 systemd-networkd[1375]: cilium_vxlan: Gained IPv6LL Jul 2 08:28:51.585292 systemd-networkd[1375]: lxc_health: Gained IPv6LL Jul 2 08:28:51.623179 kubelet[2483]: E0702 08:28:51.620458 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:52.097314 systemd-networkd[1375]: lxc5808c688ae0f: Gained IPv6LL Jul 2 08:28:52.290082 systemd-networkd[1375]: lxc92b808c124d6: Gained IPv6LL Jul 2 08:28:53.281422 kubelet[2483]: I0702 08:28:53.281374 2483 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 08:28:53.282190 kubelet[2483]: E0702 08:28:53.282151 2483 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:53.836414 kubelet[2483]: E0702 08:28:53.836368 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:28:54.188488 systemd[1]: Started sshd@7-10.0.0.104:22-10.0.0.1:60256.service - OpenSSH per-connection server daemon (10.0.0.1:60256). Jul 2 08:28:54.217697 containerd[1428]: time="2024-07-02T08:28:54.217547062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:28:54.217697 containerd[1428]: time="2024-07-02T08:28:54.217615187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:28:54.218224 containerd[1428]: time="2024-07-02T08:28:54.217725754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:28:54.218224 containerd[1428]: time="2024-07-02T08:28:54.218061578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:28:54.230642 sshd[3716]: Accepted publickey for core from 10.0.0.1 port 60256 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:28:54.230089 sshd[3716]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:28:54.240485 systemd[1]: Started cri-containerd-e677f7baf35ddb8334118817f251b5de6638e6d7de3b2899f370c203468118d1.scope - libcontainer container e677f7baf35ddb8334118817f251b5de6638e6d7de3b2899f370c203468118d1. Jul 2 08:28:54.244799 systemd-logind[1412]: New session 8 of user core. Jul 2 08:28:54.245190 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 2 08:28:54.255272 containerd[1428]: time="2024-07-02T08:28:54.254958789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:28:54.255272 containerd[1428]: time="2024-07-02T08:28:54.255025194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:28:54.255272 containerd[1428]: time="2024-07-02T08:28:54.255057316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:28:54.255272 containerd[1428]: time="2024-07-02T08:28:54.255072997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:28:54.256330 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 08:28:54.274331 systemd[1]: Started cri-containerd-8cd173a9bfba639e8e985863bf6c9faaef0d446cb86d5c2da96a23ee6a40fb9b.scope - libcontainer container 8cd173a9bfba639e8e985863bf6c9faaef0d446cb86d5c2da96a23ee6a40fb9b.
Jul 2 08:28:54.281063 containerd[1428]: time="2024-07-02T08:28:54.281027806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-62848,Uid:ffd4c529-4d21-4454-be03-064098e4c2bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"e677f7baf35ddb8334118817f251b5de6638e6d7de3b2899f370c203468118d1\""
Jul 2 08:28:54.281856 kubelet[2483]: E0702 08:28:54.281838 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:54.283759 containerd[1428]: time="2024-07-02T08:28:54.283722713Z" level=info msg="CreateContainer within sandbox \"e677f7baf35ddb8334118817f251b5de6638e6d7de3b2899f370c203468118d1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 08:28:54.287878 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 08:28:54.309597 containerd[1428]: time="2024-07-02T08:28:54.309558514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-tpn45,Uid:c1b56c40-db59-43d8-ab04-7408ff2a77f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cd173a9bfba639e8e985863bf6c9faaef0d446cb86d5c2da96a23ee6a40fb9b\""
Jul 2 08:28:54.310729 kubelet[2483]: E0702 08:28:54.310676 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:54.313231 containerd[1428]: time="2024-07-02T08:28:54.313080159Z" level=info msg="CreateContainer within sandbox \"8cd173a9bfba639e8e985863bf6c9faaef0d446cb86d5c2da96a23ee6a40fb9b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 08:28:54.405921 sshd[3716]: pam_unix(sshd:session): session closed for user core
Jul 2 08:28:54.408615 systemd[1]: sshd@7-10.0.0.104:22-10.0.0.1:60256.service: Deactivated successfully.
Jul 2 08:28:54.410249 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 08:28:54.411749 systemd-logind[1412]: Session 8 logged out. Waiting for processes to exit.
Jul 2 08:28:54.412968 systemd-logind[1412]: Removed session 8.
Jul 2 08:28:54.599579 containerd[1428]: time="2024-07-02T08:28:54.599461636Z" level=info msg="CreateContainer within sandbox \"e677f7baf35ddb8334118817f251b5de6638e6d7de3b2899f370c203468118d1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9bd1c44f43d8aa232599cf72a14168064b54f54cf18ff8f3a1eee8ced11e93a2\""
Jul 2 08:28:54.600340 containerd[1428]: time="2024-07-02T08:28:54.600086879Z" level=info msg="StartContainer for \"9bd1c44f43d8aa232599cf72a14168064b54f54cf18ff8f3a1eee8ced11e93a2\""
Jul 2 08:28:54.604896 containerd[1428]: time="2024-07-02T08:28:54.604824090Z" level=info msg="CreateContainer within sandbox \"8cd173a9bfba639e8e985863bf6c9faaef0d446cb86d5c2da96a23ee6a40fb9b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c910e1e2d09cfafada3eccf07e36b4fe1a9fa1d9a7e3328695a253831612bb4f\""
Jul 2 08:28:54.605321 containerd[1428]: time="2024-07-02T08:28:54.605269841Z" level=info msg="StartContainer for \"c910e1e2d09cfafada3eccf07e36b4fe1a9fa1d9a7e3328695a253831612bb4f\""
Jul 2 08:28:54.632349 systemd[1]: Started cri-containerd-9bd1c44f43d8aa232599cf72a14168064b54f54cf18ff8f3a1eee8ced11e93a2.scope - libcontainer container 9bd1c44f43d8aa232599cf72a14168064b54f54cf18ff8f3a1eee8ced11e93a2.
Jul 2 08:28:54.635036 systemd[1]: Started cri-containerd-c910e1e2d09cfafada3eccf07e36b4fe1a9fa1d9a7e3328695a253831612bb4f.scope - libcontainer container c910e1e2d09cfafada3eccf07e36b4fe1a9fa1d9a7e3328695a253831612bb4f.
Jul 2 08:28:54.657951 containerd[1428]: time="2024-07-02T08:28:54.657895308Z" level=info msg="StartContainer for \"9bd1c44f43d8aa232599cf72a14168064b54f54cf18ff8f3a1eee8ced11e93a2\" returns successfully"
Jul 2 08:28:54.664870 containerd[1428]: time="2024-07-02T08:28:54.664734664Z" level=info msg="StartContainer for \"c910e1e2d09cfafada3eccf07e36b4fe1a9fa1d9a7e3328695a253831612bb4f\" returns successfully"
Jul 2 08:28:54.839721 kubelet[2483]: E0702 08:28:54.839677 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:54.846051 kubelet[2483]: E0702 08:28:54.845670 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:54.853726 kubelet[2483]: I0702 08:28:54.853470 2483 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-62848" podStartSLOduration=17.853432654 podCreationTimestamp="2024-07-02 08:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:28:54.852776688 +0000 UTC m=+33.240923292" watchObservedRunningTime="2024-07-02 08:28:54.853432654 +0000 UTC m=+33.241579218"
Jul 2 08:28:55.845065 kubelet[2483]: E0702 08:28:55.845023 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:55.845430 kubelet[2483]: E0702 08:28:55.845204 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:55.855911 kubelet[2483]: I0702 08:28:55.855716 2483 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-tpn45" podStartSLOduration=18.855676601 podCreationTimestamp="2024-07-02 08:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:28:54.863480474 +0000 UTC m=+33.251627118" watchObservedRunningTime="2024-07-02 08:28:55.855676601 +0000 UTC m=+34.243823205"
Jul 2 08:28:56.846941 kubelet[2483]: E0702 08:28:56.846916 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:56.847514 kubelet[2483]: E0702 08:28:56.846954 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:28:59.430422 systemd[1]: Started sshd@8-10.0.0.104:22-10.0.0.1:60272.service - OpenSSH per-connection server daemon (10.0.0.1:60272).
Jul 2 08:28:59.474361 sshd[3903]: Accepted publickey for core from 10.0.0.1 port 60272 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:28:59.475007 sshd[3903]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:28:59.478940 systemd-logind[1412]: New session 9 of user core.
Jul 2 08:28:59.489335 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 2 08:28:59.648061 sshd[3903]: pam_unix(sshd:session): session closed for user core
Jul 2 08:28:59.650750 systemd[1]: sshd@8-10.0.0.104:22-10.0.0.1:60272.service: Deactivated successfully.
Jul 2 08:28:59.654055 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 08:28:59.655645 systemd-logind[1412]: Session 9 logged out. Waiting for processes to exit.
Jul 2 08:28:59.656826 systemd-logind[1412]: Removed session 9.
Jul 2 08:29:04.661739 systemd[1]: Started sshd@9-10.0.0.104:22-10.0.0.1:34476.service - OpenSSH per-connection server daemon (10.0.0.1:34476).
Jul 2 08:29:04.695931 sshd[3922]: Accepted publickey for core from 10.0.0.1 port 34476 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:29:04.697133 sshd[3922]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:29:04.700456 systemd-logind[1412]: New session 10 of user core.
Jul 2 08:29:04.710313 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 2 08:29:04.833374 sshd[3922]: pam_unix(sshd:session): session closed for user core
Jul 2 08:29:04.841747 systemd[1]: sshd@9-10.0.0.104:22-10.0.0.1:34476.service: Deactivated successfully.
Jul 2 08:29:04.844489 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 08:29:04.845929 systemd-logind[1412]: Session 10 logged out. Waiting for processes to exit.
Jul 2 08:29:04.852471 systemd[1]: Started sshd@10-10.0.0.104:22-10.0.0.1:34478.service - OpenSSH per-connection server daemon (10.0.0.1:34478).
Jul 2 08:29:04.854232 systemd-logind[1412]: Removed session 10.
Jul 2 08:29:04.883484 sshd[3938]: Accepted publickey for core from 10.0.0.1 port 34478 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:29:04.884562 sshd[3938]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:29:04.888266 systemd-logind[1412]: New session 11 of user core.
Jul 2 08:29:04.895297 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 2 08:29:05.580936 sshd[3938]: pam_unix(sshd:session): session closed for user core
Jul 2 08:29:05.587717 systemd[1]: sshd@10-10.0.0.104:22-10.0.0.1:34478.service: Deactivated successfully.
Jul 2 08:29:05.590029 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 08:29:05.592762 systemd-logind[1412]: Session 11 logged out. Waiting for processes to exit.
Jul 2 08:29:05.606496 systemd[1]: Started sshd@11-10.0.0.104:22-10.0.0.1:34492.service - OpenSSH per-connection server daemon (10.0.0.1:34492).
Jul 2 08:29:05.608142 systemd-logind[1412]: Removed session 11.
Jul 2 08:29:05.643934 sshd[3951]: Accepted publickey for core from 10.0.0.1 port 34492 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:29:05.645503 sshd[3951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:29:05.649287 systemd-logind[1412]: New session 12 of user core.
Jul 2 08:29:05.660316 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 2 08:29:05.768470 sshd[3951]: pam_unix(sshd:session): session closed for user core
Jul 2 08:29:05.771898 systemd[1]: sshd@11-10.0.0.104:22-10.0.0.1:34492.service: Deactivated successfully.
Jul 2 08:29:05.773937 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 08:29:05.775791 systemd-logind[1412]: Session 12 logged out. Waiting for processes to exit.
Jul 2 08:29:05.776771 systemd-logind[1412]: Removed session 12.
Jul 2 08:29:10.780827 systemd[1]: Started sshd@12-10.0.0.104:22-10.0.0.1:44714.service - OpenSSH per-connection server daemon (10.0.0.1:44714).
Jul 2 08:29:10.813081 sshd[3968]: Accepted publickey for core from 10.0.0.1 port 44714 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:29:10.814258 sshd[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:29:10.818352 systemd-logind[1412]: New session 13 of user core.
Jul 2 08:29:10.826358 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 2 08:29:10.937464 sshd[3968]: pam_unix(sshd:session): session closed for user core
Jul 2 08:29:10.940553 systemd[1]: sshd@12-10.0.0.104:22-10.0.0.1:44714.service: Deactivated successfully.
Jul 2 08:29:10.942836 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 08:29:10.943983 systemd-logind[1412]: Session 13 logged out. Waiting for processes to exit.
Jul 2 08:29:10.944965 systemd-logind[1412]: Removed session 13.
Jul 2 08:29:15.947704 systemd[1]: Started sshd@13-10.0.0.104:22-10.0.0.1:44722.service - OpenSSH per-connection server daemon (10.0.0.1:44722).
Jul 2 08:29:15.979878 sshd[3982]: Accepted publickey for core from 10.0.0.1 port 44722 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:29:15.981129 sshd[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:29:15.984394 systemd-logind[1412]: New session 14 of user core.
Jul 2 08:29:15.994384 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 2 08:29:16.102253 sshd[3982]: pam_unix(sshd:session): session closed for user core
Jul 2 08:29:16.109625 systemd[1]: sshd@13-10.0.0.104:22-10.0.0.1:44722.service: Deactivated successfully.
Jul 2 08:29:16.112576 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 08:29:16.115397 systemd-logind[1412]: Session 14 logged out. Waiting for processes to exit.
Jul 2 08:29:16.130564 systemd[1]: Started sshd@14-10.0.0.104:22-10.0.0.1:44732.service - OpenSSH per-connection server daemon (10.0.0.1:44732).
Jul 2 08:29:16.131900 systemd-logind[1412]: Removed session 14.
Jul 2 08:29:16.158323 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 44732 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:29:16.159445 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:29:16.162923 systemd-logind[1412]: New session 15 of user core.
Jul 2 08:29:16.172313 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 2 08:29:16.359059 sshd[3996]: pam_unix(sshd:session): session closed for user core
Jul 2 08:29:16.370571 systemd[1]: sshd@14-10.0.0.104:22-10.0.0.1:44732.service: Deactivated successfully.
Jul 2 08:29:16.372339 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 08:29:16.373719 systemd-logind[1412]: Session 15 logged out. Waiting for processes to exit.
Jul 2 08:29:16.386106 systemd[1]: Started sshd@15-10.0.0.104:22-10.0.0.1:44736.service - OpenSSH per-connection server daemon (10.0.0.1:44736).
Jul 2 08:29:16.387424 systemd-logind[1412]: Removed session 15.
Jul 2 08:29:16.418047 sshd[4010]: Accepted publickey for core from 10.0.0.1 port 44736 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:29:16.419144 sshd[4010]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:29:16.423147 systemd-logind[1412]: New session 16 of user core.
Jul 2 08:29:16.434311 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 2 08:29:17.166802 sshd[4010]: pam_unix(sshd:session): session closed for user core
Jul 2 08:29:17.173845 systemd[1]: sshd@15-10.0.0.104:22-10.0.0.1:44736.service: Deactivated successfully.
Jul 2 08:29:17.177512 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 08:29:17.180255 systemd-logind[1412]: Session 16 logged out. Waiting for processes to exit.
Jul 2 08:29:17.193471 systemd[1]: Started sshd@16-10.0.0.104:22-10.0.0.1:44746.service - OpenSSH per-connection server daemon (10.0.0.1:44746).
Jul 2 08:29:17.194617 systemd-logind[1412]: Removed session 16.
Jul 2 08:29:17.222425 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 44746 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:29:17.223651 sshd[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:29:17.227694 systemd-logind[1412]: New session 17 of user core.
Jul 2 08:29:17.237303 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 2 08:29:17.531345 sshd[4031]: pam_unix(sshd:session): session closed for user core
Jul 2 08:29:17.546977 systemd[1]: sshd@16-10.0.0.104:22-10.0.0.1:44746.service: Deactivated successfully.
Jul 2 08:29:17.549473 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 08:29:17.551338 systemd-logind[1412]: Session 17 logged out. Waiting for processes to exit.
Jul 2 08:29:17.558633 systemd[1]: Started sshd@17-10.0.0.104:22-10.0.0.1:44754.service - OpenSSH per-connection server daemon (10.0.0.1:44754).
Jul 2 08:29:17.559521 systemd-logind[1412]: Removed session 17.
Jul 2 08:29:17.588585 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 44754 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:29:17.589819 sshd[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:29:17.593587 systemd-logind[1412]: New session 18 of user core.
Jul 2 08:29:17.603332 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 2 08:29:17.721162 sshd[4043]: pam_unix(sshd:session): session closed for user core
Jul 2 08:29:17.724766 systemd[1]: sshd@17-10.0.0.104:22-10.0.0.1:44754.service: Deactivated successfully.
Jul 2 08:29:17.727689 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 08:29:17.728877 systemd-logind[1412]: Session 18 logged out. Waiting for processes to exit.
Jul 2 08:29:17.729741 systemd-logind[1412]: Removed session 18.
Jul 2 08:29:22.731812 systemd[1]: Started sshd@18-10.0.0.104:22-10.0.0.1:58420.service - OpenSSH per-connection server daemon (10.0.0.1:58420).
Jul 2 08:29:22.763863 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 58420 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:29:22.764984 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:29:22.768171 systemd-logind[1412]: New session 19 of user core.
Jul 2 08:29:22.783331 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 2 08:29:22.890015 sshd[4063]: pam_unix(sshd:session): session closed for user core
Jul 2 08:29:22.893059 systemd[1]: sshd@18-10.0.0.104:22-10.0.0.1:58420.service: Deactivated successfully.
Jul 2 08:29:22.895464 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 08:29:22.896550 systemd-logind[1412]: Session 19 logged out. Waiting for processes to exit.
Jul 2 08:29:22.897741 systemd-logind[1412]: Removed session 19.
Jul 2 08:29:27.903749 systemd[1]: Started sshd@19-10.0.0.104:22-10.0.0.1:58422.service - OpenSSH per-connection server daemon (10.0.0.1:58422).
Jul 2 08:29:27.935649 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 58422 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:29:27.936789 sshd[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:29:27.940226 systemd-logind[1412]: New session 20 of user core.
Jul 2 08:29:27.953398 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 08:29:28.059402 sshd[4077]: pam_unix(sshd:session): session closed for user core
Jul 2 08:29:28.062805 systemd[1]: sshd@19-10.0.0.104:22-10.0.0.1:58422.service: Deactivated successfully.
Jul 2 08:29:28.064507 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 08:29:28.065110 systemd-logind[1412]: Session 20 logged out. Waiting for processes to exit.
Jul 2 08:29:28.066036 systemd-logind[1412]: Removed session 20.
Jul 2 08:29:33.085918 systemd[1]: Started sshd@20-10.0.0.104:22-10.0.0.1:54940.service - OpenSSH per-connection server daemon (10.0.0.1:54940).
Jul 2 08:29:33.117036 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 54940 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:29:33.118190 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:29:33.121662 systemd-logind[1412]: New session 21 of user core.
Jul 2 08:29:33.132315 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 2 08:29:33.247340 sshd[4091]: pam_unix(sshd:session): session closed for user core
Jul 2 08:29:33.250521 systemd[1]: sshd@20-10.0.0.104:22-10.0.0.1:54940.service: Deactivated successfully.
Jul 2 08:29:33.252708 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 08:29:33.257702 systemd-logind[1412]: Session 21 logged out. Waiting for processes to exit.
Jul 2 08:29:33.258522 systemd-logind[1412]: Removed session 21.
Jul 2 08:29:37.718214 kubelet[2483]: E0702 08:29:37.718181 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 08:29:38.258016 systemd[1]: Started sshd@21-10.0.0.104:22-10.0.0.1:54942.service - OpenSSH per-connection server daemon (10.0.0.1:54942).
Jul 2 08:29:38.293672 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 54942 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:29:38.294088 sshd[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:29:38.299221 systemd-logind[1412]: New session 22 of user core.
Jul 2 08:29:38.308349 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 2 08:29:38.414382 sshd[4108]: pam_unix(sshd:session): session closed for user core
Jul 2 08:29:38.421535 systemd[1]: sshd@21-10.0.0.104:22-10.0.0.1:54942.service: Deactivated successfully.
Jul 2 08:29:38.422940 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 08:29:38.425785 systemd-logind[1412]: Session 22 logged out. Waiting for processes to exit.
Jul 2 08:29:38.426891 systemd[1]: Started sshd@22-10.0.0.104:22-10.0.0.1:54948.service - OpenSSH per-connection server daemon (10.0.0.1:54948).
Jul 2 08:29:38.427996 systemd-logind[1412]: Removed session 22.
Jul 2 08:29:38.467197 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 54948 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:29:38.468337 sshd[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:29:38.473108 systemd-logind[1412]: New session 23 of user core.
Jul 2 08:29:38.482315 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 2 08:29:40.387195 containerd[1428]: time="2024-07-02T08:29:40.387134217Z" level=info msg="StopContainer for \"228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb\" with timeout 30 (s)"
Jul 2 08:29:40.388571 containerd[1428]: time="2024-07-02T08:29:40.388021170Z" level=info msg="Stop container \"228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb\" with signal terminated"
Jul 2 08:29:40.399345 systemd[1]: cri-containerd-228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb.scope: Deactivated successfully.
Jul 2 08:29:40.415517 containerd[1428]: time="2024-07-02T08:29:40.415467497Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 08:29:40.418405 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb-rootfs.mount: Deactivated successfully.
Jul 2 08:29:40.423423 containerd[1428]: time="2024-07-02T08:29:40.423371390Z" level=info msg="StopContainer for \"85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c\" with timeout 2 (s)"
Jul 2 08:29:40.423813 containerd[1428]: time="2024-07-02T08:29:40.423767427Z" level=info msg="Stop container \"85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c\" with signal terminated"
Jul 2 08:29:40.425507 containerd[1428]: time="2024-07-02T08:29:40.425328614Z" level=info msg="shim disconnected" id=228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb namespace=k8s.io
Jul 2 08:29:40.425507 containerd[1428]: time="2024-07-02T08:29:40.425375293Z" level=warning msg="cleaning up after shim disconnected" id=228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb namespace=k8s.io
Jul 2 08:29:40.425507 containerd[1428]: time="2024-07-02T08:29:40.425383733Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 08:29:40.430316 systemd-networkd[1375]: lxc_health: Link DOWN
Jul 2 08:29:40.430323 systemd-networkd[1375]: lxc_health: Lost carrier
Jul 2 08:29:40.445962 systemd[1]: cri-containerd-85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c.scope: Deactivated successfully.
Jul 2 08:29:40.446336 systemd[1]: cri-containerd-85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c.scope: Consumed 6.545s CPU time.
Jul 2 08:29:40.447303 containerd[1428]: time="2024-07-02T08:29:40.447266868Z" level=info msg="StopContainer for \"228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb\" returns successfully"
Jul 2 08:29:40.450295 containerd[1428]: time="2024-07-02T08:29:40.450251363Z" level=info msg="StopPodSandbox for \"93ecbb2b6f165b2130f2121c2fc16203fb077aeb78975a3e9dc9be90044b1817\""
Jul 2 08:29:40.450543 containerd[1428]: time="2024-07-02T08:29:40.450457921Z" level=info msg="Container to stop \"228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:29:40.453797 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-93ecbb2b6f165b2130f2121c2fc16203fb077aeb78975a3e9dc9be90044b1817-shm.mount: Deactivated successfully.
Jul 2 08:29:40.465506 systemd[1]: cri-containerd-93ecbb2b6f165b2130f2121c2fc16203fb077aeb78975a3e9dc9be90044b1817.scope: Deactivated successfully.
Jul 2 08:29:40.467628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c-rootfs.mount: Deactivated successfully.
Jul 2 08:29:40.476771 containerd[1428]: time="2024-07-02T08:29:40.476559180Z" level=info msg="shim disconnected" id=85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c namespace=k8s.io
Jul 2 08:29:40.476771 containerd[1428]: time="2024-07-02T08:29:40.476625419Z" level=warning msg="cleaning up after shim disconnected" id=85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c namespace=k8s.io
Jul 2 08:29:40.476771 containerd[1428]: time="2024-07-02T08:29:40.476634019Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 08:29:40.488033 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93ecbb2b6f165b2130f2121c2fc16203fb077aeb78975a3e9dc9be90044b1817-rootfs.mount: Deactivated successfully.
Jul 2 08:29:40.489707 containerd[1428]: time="2024-07-02T08:29:40.489636149Z" level=info msg="shim disconnected" id=93ecbb2b6f165b2130f2121c2fc16203fb077aeb78975a3e9dc9be90044b1817 namespace=k8s.io
Jul 2 08:29:40.489707 containerd[1428]: time="2024-07-02T08:29:40.489698909Z" level=warning msg="cleaning up after shim disconnected" id=93ecbb2b6f165b2130f2121c2fc16203fb077aeb78975a3e9dc9be90044b1817 namespace=k8s.io
Jul 2 08:29:40.489707 containerd[1428]: time="2024-07-02T08:29:40.489708829Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 08:29:40.497465 containerd[1428]: time="2024-07-02T08:29:40.497383084Z" level=info msg="StopContainer for \"85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c\" returns successfully"
Jul 2 08:29:40.501674 containerd[1428]: time="2024-07-02T08:29:40.501635008Z" level=info msg="TearDown network for sandbox \"93ecbb2b6f165b2130f2121c2fc16203fb077aeb78975a3e9dc9be90044b1817\" successfully"
Jul 2 08:29:40.501674 containerd[1428]: time="2024-07-02T08:29:40.501669687Z" level=info msg="StopPodSandbox for \"93ecbb2b6f165b2130f2121c2fc16203fb077aeb78975a3e9dc9be90044b1817\" returns successfully"
Jul 2 08:29:40.506543 containerd[1428]: time="2024-07-02T08:29:40.506502326Z" level=info msg="StopPodSandbox for \"3e97d07149ae53562f594cf9ee2958f06407af85853b109b11944af44dc9e67b\""
Jul 2 08:29:40.506597 containerd[1428]: time="2024-07-02T08:29:40.506555126Z" level=info msg="Container to stop \"438d01be81ada6e4cad52b8ad2571ec60a8183f20106b9b21c7791b9c62af427\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:29:40.506625 containerd[1428]: time="2024-07-02T08:29:40.506594526Z" level=info msg="Container to stop \"85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:29:40.506625 containerd[1428]: time="2024-07-02T08:29:40.506605166Z" level=info msg="Container to stop \"9380a4db2301630d2c31cb2174e3d0a6ca3d66da2b8929dcb070b16a476c031f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:29:40.506625 containerd[1428]: time="2024-07-02T08:29:40.506614965Z" level=info msg="Container to stop \"b4dd65d941fff5f32ca789f80cd0d6c1b2e07b5b5042ec79b21f10c4ce5e3b66\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:29:40.506683 containerd[1428]: time="2024-07-02T08:29:40.506624405Z" level=info msg="Container to stop \"4e46fbc51731bdb149b6136a20e9fbdd40c38a042a9c853b0bd5945e71c67b72\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:29:40.515790 systemd[1]: cri-containerd-3e97d07149ae53562f594cf9ee2958f06407af85853b109b11944af44dc9e67b.scope: Deactivated successfully.
Jul 2 08:29:40.536910 containerd[1428]: time="2024-07-02T08:29:40.536567792Z" level=info msg="shim disconnected" id=3e97d07149ae53562f594cf9ee2958f06407af85853b109b11944af44dc9e67b namespace=k8s.io
Jul 2 08:29:40.536910 containerd[1428]: time="2024-07-02T08:29:40.536638871Z" level=warning msg="cleaning up after shim disconnected" id=3e97d07149ae53562f594cf9ee2958f06407af85853b109b11944af44dc9e67b namespace=k8s.io
Jul 2 08:29:40.536910 containerd[1428]: time="2024-07-02T08:29:40.536648191Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 08:29:40.555396 containerd[1428]: time="2024-07-02T08:29:40.555353033Z" level=info msg="TearDown network for sandbox \"3e97d07149ae53562f594cf9ee2958f06407af85853b109b11944af44dc9e67b\" successfully"
Jul 2 08:29:40.555396 containerd[1428]: time="2024-07-02T08:29:40.555388233Z" level=info msg="StopPodSandbox for \"3e97d07149ae53562f594cf9ee2958f06407af85853b109b11944af44dc9e67b\" returns successfully"
Jul 2 08:29:40.642497 kubelet[2483]: I0702 08:29:40.641037 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d6eec17-7108-4d26-b2a4-07574d4ed9c0-cilium-config-path\") pod \"9d6eec17-7108-4d26-b2a4-07574d4ed9c0\" (UID: \"9d6eec17-7108-4d26-b2a4-07574d4ed9c0\") "
Jul 2 08:29:40.642497 kubelet[2483]: I0702 08:29:40.641079 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-hostproc\") pod \"4468e7a9-c994-4c49-80f6-a439ff82a97a\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") "
Jul 2 08:29:40.642497 kubelet[2483]: I0702 08:29:40.641103 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4468e7a9-c994-4c49-80f6-a439ff82a97a-hubble-tls\") pod \"4468e7a9-c994-4c49-80f6-a439ff82a97a\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") "
Jul 2 08:29:40.642497 kubelet[2483]: I0702 08:29:40.641128 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q254\" (UniqueName: \"kubernetes.io/projected/9d6eec17-7108-4d26-b2a4-07574d4ed9c0-kube-api-access-4q254\") pod \"9d6eec17-7108-4d26-b2a4-07574d4ed9c0\" (UID: \"9d6eec17-7108-4d26-b2a4-07574d4ed9c0\") "
Jul 2 08:29:40.642497 kubelet[2483]: I0702 08:29:40.641150 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-xtables-lock\") pod \"4468e7a9-c994-4c49-80f6-a439ff82a97a\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") "
Jul 2 08:29:40.642497 kubelet[2483]: I0702 08:29:40.641197 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4468e7a9-c994-4c49-80f6-a439ff82a97a-clustermesh-secrets\") pod \"4468e7a9-c994-4c49-80f6-a439ff82a97a\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") "
Jul 2 08:29:40.642948 kubelet[2483]: I0702 08:29:40.641215 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-bpf-maps\") pod \"4468e7a9-c994-4c49-80f6-a439ff82a97a\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") "
Jul 2 08:29:40.642948 kubelet[2483]: I0702 08:29:40.641256 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-lib-modules\") pod \"4468e7a9-c994-4c49-80f6-a439ff82a97a\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") "
Jul 2 08:29:40.642948 kubelet[2483]: I0702 08:29:40.641279 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4468e7a9-c994-4c49-80f6-a439ff82a97a-cilium-config-path\") pod \"4468e7a9-c994-4c49-80f6-a439ff82a97a\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") "
Jul 2 08:29:40.642948 kubelet[2483]: I0702 08:29:40.641297 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-cni-path\") pod \"4468e7a9-c994-4c49-80f6-a439ff82a97a\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") "
Jul 2 08:29:40.642948 kubelet[2483]: I0702 08:29:40.641313 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-cilium-run\") pod \"4468e7a9-c994-4c49-80f6-a439ff82a97a\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") "
Jul 2 08:29:40.642948 kubelet[2483]: I0702 08:29:40.641331 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-host-proc-sys-kernel\") pod \"4468e7a9-c994-4c49-80f6-a439ff82a97a\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") "
Jul 2 08:29:40.643077 kubelet[2483]: I0702 08:29:40.641373 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4468e7a9-c994-4c49-80f6-a439ff82a97a" (UID: "4468e7a9-c994-4c49-80f6-a439ff82a97a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:29:40.643077 kubelet[2483]: I0702 08:29:40.641408 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4468e7a9-c994-4c49-80f6-a439ff82a97a" (UID: "4468e7a9-c994-4c49-80f6-a439ff82a97a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:29:40.643077 kubelet[2483]: I0702 08:29:40.641423 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4468e7a9-c994-4c49-80f6-a439ff82a97a" (UID: "4468e7a9-c994-4c49-80f6-a439ff82a97a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:29:40.643077 kubelet[2483]: I0702 08:29:40.642995 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d6eec17-7108-4d26-b2a4-07574d4ed9c0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9d6eec17-7108-4d26-b2a4-07574d4ed9c0" (UID: "9d6eec17-7108-4d26-b2a4-07574d4ed9c0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 08:29:40.643077 kubelet[2483]: I0702 08:29:40.643048 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4468e7a9-c994-4c49-80f6-a439ff82a97a" (UID: "4468e7a9-c994-4c49-80f6-a439ff82a97a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:29:40.643221 kubelet[2483]: I0702 08:29:40.643069 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-cni-path" (OuterVolumeSpecName: "cni-path") pod "4468e7a9-c994-4c49-80f6-a439ff82a97a" (UID: "4468e7a9-c994-4c49-80f6-a439ff82a97a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:29:40.643221 kubelet[2483]: I0702 08:29:40.641269 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-hostproc" (OuterVolumeSpecName: "hostproc") pod "4468e7a9-c994-4c49-80f6-a439ff82a97a" (UID: "4468e7a9-c994-4c49-80f6-a439ff82a97a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:29:40.643221 kubelet[2483]: I0702 08:29:40.643090 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4468e7a9-c994-4c49-80f6-a439ff82a97a" (UID: "4468e7a9-c994-4c49-80f6-a439ff82a97a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:29:40.644323 kubelet[2483]: I0702 08:29:40.644281 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4468e7a9-c994-4c49-80f6-a439ff82a97a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4468e7a9-c994-4c49-80f6-a439ff82a97a" (UID: "4468e7a9-c994-4c49-80f6-a439ff82a97a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 08:29:40.645318 kubelet[2483]: I0702 08:29:40.645283 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4468e7a9-c994-4c49-80f6-a439ff82a97a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4468e7a9-c994-4c49-80f6-a439ff82a97a" (UID: "4468e7a9-c994-4c49-80f6-a439ff82a97a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 08:29:40.645364 kubelet[2483]: I0702 08:29:40.645328 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4468e7a9-c994-4c49-80f6-a439ff82a97a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4468e7a9-c994-4c49-80f6-a439ff82a97a" (UID: "4468e7a9-c994-4c49-80f6-a439ff82a97a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 08:29:40.645893 kubelet[2483]: I0702 08:29:40.645844 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d6eec17-7108-4d26-b2a4-07574d4ed9c0-kube-api-access-4q254" (OuterVolumeSpecName: "kube-api-access-4q254") pod "9d6eec17-7108-4d26-b2a4-07574d4ed9c0" (UID: "9d6eec17-7108-4d26-b2a4-07574d4ed9c0"). InnerVolumeSpecName "kube-api-access-4q254".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:29:40.717877 kubelet[2483]: E0702 08:29:40.717774 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:29:40.742200 kubelet[2483]: I0702 08:29:40.742171 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-host-proc-sys-net\") pod \"4468e7a9-c994-4c49-80f6-a439ff82a97a\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") " Jul 2 08:29:40.742200 kubelet[2483]: I0702 08:29:40.742209 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-etc-cni-netd\") pod \"4468e7a9-c994-4c49-80f6-a439ff82a97a\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") " Jul 2 08:29:40.742331 kubelet[2483]: I0702 08:29:40.742229 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-cilium-cgroup\") pod \"4468e7a9-c994-4c49-80f6-a439ff82a97a\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") " Jul 2 08:29:40.742331 kubelet[2483]: I0702 08:29:40.742217 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4468e7a9-c994-4c49-80f6-a439ff82a97a" (UID: "4468e7a9-c994-4c49-80f6-a439ff82a97a"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:29:40.742331 kubelet[2483]: I0702 08:29:40.742252 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww8pn\" (UniqueName: \"kubernetes.io/projected/4468e7a9-c994-4c49-80f6-a439ff82a97a-kube-api-access-ww8pn\") pod \"4468e7a9-c994-4c49-80f6-a439ff82a97a\" (UID: \"4468e7a9-c994-4c49-80f6-a439ff82a97a\") " Jul 2 08:29:40.742331 kubelet[2483]: I0702 08:29:40.742261 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4468e7a9-c994-4c49-80f6-a439ff82a97a" (UID: "4468e7a9-c994-4c49-80f6-a439ff82a97a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:29:40.742331 kubelet[2483]: I0702 08:29:40.742277 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4468e7a9-c994-4c49-80f6-a439ff82a97a" (UID: "4468e7a9-c994-4c49-80f6-a439ff82a97a"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:29:40.742443 kubelet[2483]: I0702 08:29:40.742292 2483 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 2 08:29:40.742443 kubelet[2483]: I0702 08:29:40.742305 2483 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 08:29:40.742443 kubelet[2483]: I0702 08:29:40.742314 2483 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4468e7a9-c994-4c49-80f6-a439ff82a97a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 08:29:40.742443 kubelet[2483]: I0702 08:29:40.742325 2483 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 08:29:40.742443 kubelet[2483]: I0702 08:29:40.742334 2483 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 2 08:29:40.742443 kubelet[2483]: I0702 08:29:40.742346 2483 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4468e7a9-c994-4c49-80f6-a439ff82a97a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 08:29:40.742443 kubelet[2483]: I0702 08:29:40.742358 2483 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 2 08:29:40.742443 kubelet[2483]: I0702 08:29:40.742368 2483 
reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 2 08:29:40.742621 kubelet[2483]: I0702 08:29:40.742378 2483 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 2 08:29:40.742621 kubelet[2483]: I0702 08:29:40.742388 2483 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d6eec17-7108-4d26-b2a4-07574d4ed9c0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 08:29:40.742621 kubelet[2483]: I0702 08:29:40.742397 2483 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 2 08:29:40.742621 kubelet[2483]: I0702 08:29:40.742406 2483 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4468e7a9-c994-4c49-80f6-a439ff82a97a-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 2 08:29:40.742621 kubelet[2483]: I0702 08:29:40.742416 2483 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4q254\" (UniqueName: \"kubernetes.io/projected/9d6eec17-7108-4d26-b2a4-07574d4ed9c0-kube-api-access-4q254\") on node \"localhost\" DevicePath \"\"" Jul 2 08:29:40.744536 kubelet[2483]: I0702 08:29:40.744489 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4468e7a9-c994-4c49-80f6-a439ff82a97a-kube-api-access-ww8pn" (OuterVolumeSpecName: "kube-api-access-ww8pn") pod "4468e7a9-c994-4c49-80f6-a439ff82a97a" (UID: "4468e7a9-c994-4c49-80f6-a439ff82a97a"). InnerVolumeSpecName "kube-api-access-ww8pn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:29:40.842949 kubelet[2483]: I0702 08:29:40.842904 2483 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 2 08:29:40.842949 kubelet[2483]: I0702 08:29:40.842940 2483 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4468e7a9-c994-4c49-80f6-a439ff82a97a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 2 08:29:40.842949 kubelet[2483]: I0702 08:29:40.842953 2483 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ww8pn\" (UniqueName: \"kubernetes.io/projected/4468e7a9-c994-4c49-80f6-a439ff82a97a-kube-api-access-ww8pn\") on node \"localhost\" DevicePath \"\"" Jul 2 08:29:40.929607 kubelet[2483]: I0702 08:29:40.929418 2483 scope.go:117] "RemoveContainer" containerID="228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb" Jul 2 08:29:40.931800 containerd[1428]: time="2024-07-02T08:29:40.931757006Z" level=info msg="RemoveContainer for \"228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb\"" Jul 2 08:29:40.937296 containerd[1428]: time="2024-07-02T08:29:40.937254520Z" level=info msg="RemoveContainer for \"228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb\" returns successfully" Jul 2 08:29:40.939002 kubelet[2483]: I0702 08:29:40.938970 2483 scope.go:117] "RemoveContainer" containerID="228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb" Jul 2 08:29:40.940018 systemd[1]: Removed slice kubepods-besteffort-pod9d6eec17_7108_4d26_b2a4_07574d4ed9c0.slice - libcontainer container kubepods-besteffort-pod9d6eec17_7108_4d26_b2a4_07574d4ed9c0.slice. 
Jul 2 08:29:40.941929 systemd[1]: Removed slice kubepods-burstable-pod4468e7a9_c994_4c49_80f6_a439ff82a97a.slice - libcontainer container kubepods-burstable-pod4468e7a9_c994_4c49_80f6_a439ff82a97a.slice.
Jul 2 08:29:40.942015 systemd[1]: kubepods-burstable-pod4468e7a9_c994_4c49_80f6_a439ff82a97a.slice: Consumed 6.682s CPU time.
Jul 2 08:29:40.942760 containerd[1428]: time="2024-07-02T08:29:40.939191903Z" level=error msg="ContainerStatus for \"228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb\": not found"
Jul 2 08:29:40.952354 kubelet[2483]: E0702 08:29:40.952306 2483 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb\": not found" containerID="228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb"
Jul 2 08:29:40.952442 kubelet[2483]: I0702 08:29:40.952401 2483 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb"} err="failed to get container status \"228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb\": rpc error: code = NotFound desc = an error occurred when try to find container \"228bb495733278513f843bf29a9d90488c49196e463b5afff9800bd9fe4968eb\": not found"
Jul 2 08:29:40.952442 kubelet[2483]: I0702 08:29:40.952417 2483 scope.go:117] "RemoveContainer" containerID="85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c"
Jul 2 08:29:40.956386 containerd[1428]: time="2024-07-02T08:29:40.956350558Z" level=info msg="RemoveContainer for \"85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c\""
Jul 2 08:29:40.959426 containerd[1428]: time="2024-07-02T08:29:40.959383372Z" level=info msg="RemoveContainer for \"85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c\" returns successfully"
Jul 2 08:29:40.959877 kubelet[2483]: I0702 08:29:40.959567 2483 scope.go:117] "RemoveContainer" containerID="4e46fbc51731bdb149b6136a20e9fbdd40c38a042a9c853b0bd5945e71c67b72"
Jul 2 08:29:40.961008 containerd[1428]: time="2024-07-02T08:29:40.960971439Z" level=info msg="RemoveContainer for \"4e46fbc51731bdb149b6136a20e9fbdd40c38a042a9c853b0bd5945e71c67b72\""
Jul 2 08:29:40.963053 containerd[1428]: time="2024-07-02T08:29:40.963015342Z" level=info msg="RemoveContainer for \"4e46fbc51731bdb149b6136a20e9fbdd40c38a042a9c853b0bd5945e71c67b72\" returns successfully"
Jul 2 08:29:40.963200 kubelet[2483]: I0702 08:29:40.963162 2483 scope.go:117] "RemoveContainer" containerID="b4dd65d941fff5f32ca789f80cd0d6c1b2e07b5b5042ec79b21f10c4ce5e3b66"
Jul 2 08:29:40.964211 containerd[1428]: time="2024-07-02T08:29:40.964181932Z" level=info msg="RemoveContainer for \"b4dd65d941fff5f32ca789f80cd0d6c1b2e07b5b5042ec79b21f10c4ce5e3b66\""
Jul 2 08:29:40.966253 containerd[1428]: time="2024-07-02T08:29:40.966217394Z" level=info msg="RemoveContainer for \"b4dd65d941fff5f32ca789f80cd0d6c1b2e07b5b5042ec79b21f10c4ce5e3b66\" returns successfully"
Jul 2 08:29:40.966370 kubelet[2483]: I0702 08:29:40.966345 2483 scope.go:117] "RemoveContainer" containerID="9380a4db2301630d2c31cb2174e3d0a6ca3d66da2b8929dcb070b16a476c031f"
Jul 2 08:29:40.967444 containerd[1428]: time="2024-07-02T08:29:40.967418264Z" level=info msg="RemoveContainer for \"9380a4db2301630d2c31cb2174e3d0a6ca3d66da2b8929dcb070b16a476c031f\""
Jul 2 08:29:40.969564 containerd[1428]: time="2024-07-02T08:29:40.969526806Z" level=info msg="RemoveContainer for \"9380a4db2301630d2c31cb2174e3d0a6ca3d66da2b8929dcb070b16a476c031f\" returns successfully"
Jul 2 08:29:40.969704 kubelet[2483]: I0702 08:29:40.969674 2483 scope.go:117] "RemoveContainer" containerID="438d01be81ada6e4cad52b8ad2571ec60a8183f20106b9b21c7791b9c62af427"
Jul 2 08:29:40.970687 containerd[1428]: time="2024-07-02T08:29:40.970463718Z" level=info msg="RemoveContainer for \"438d01be81ada6e4cad52b8ad2571ec60a8183f20106b9b21c7791b9c62af427\""
Jul 2 08:29:40.972414 containerd[1428]: time="2024-07-02T08:29:40.972384782Z" level=info msg="RemoveContainer for \"438d01be81ada6e4cad52b8ad2571ec60a8183f20106b9b21c7791b9c62af427\" returns successfully"
Jul 2 08:29:40.972651 kubelet[2483]: I0702 08:29:40.972629 2483 scope.go:117] "RemoveContainer" containerID="85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c"
Jul 2 08:29:40.972870 containerd[1428]: time="2024-07-02T08:29:40.972801499Z" level=error msg="ContainerStatus for \"85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c\": not found"
Jul 2 08:29:40.972987 kubelet[2483]: E0702 08:29:40.972969 2483 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c\": not found" containerID="85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c"
Jul 2 08:29:40.973019 kubelet[2483]: I0702 08:29:40.973008 2483 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c"} err="failed to get container status \"85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c\": rpc error: code = NotFound desc = an error occurred when try to find container \"85b7bb9b412d3a3e88d65f1c0284d5fe20eea877a0ecdebf01e0485e8d02381c\": not found"
Jul 2 08:29:40.973047 kubelet[2483]: I0702 08:29:40.973019 2483 scope.go:117] "RemoveContainer" containerID="4e46fbc51731bdb149b6136a20e9fbdd40c38a042a9c853b0bd5945e71c67b72"
Jul 2 08:29:40.973216 containerd[1428]: time="2024-07-02T08:29:40.973187095Z" level=error msg="ContainerStatus for \"4e46fbc51731bdb149b6136a20e9fbdd40c38a042a9c853b0bd5945e71c67b72\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e46fbc51731bdb149b6136a20e9fbdd40c38a042a9c853b0bd5945e71c67b72\": not found"
Jul 2 08:29:40.973404 kubelet[2483]: E0702 08:29:40.973382 2483 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e46fbc51731bdb149b6136a20e9fbdd40c38a042a9c853b0bd5945e71c67b72\": not found" containerID="4e46fbc51731bdb149b6136a20e9fbdd40c38a042a9c853b0bd5945e71c67b72"
Jul 2 08:29:40.973440 kubelet[2483]: I0702 08:29:40.973419 2483 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4e46fbc51731bdb149b6136a20e9fbdd40c38a042a9c853b0bd5945e71c67b72"} err="failed to get container status \"4e46fbc51731bdb149b6136a20e9fbdd40c38a042a9c853b0bd5945e71c67b72\": rpc error: code = NotFound desc = an error occurred when try to find container \"4e46fbc51731bdb149b6136a20e9fbdd40c38a042a9c853b0bd5945e71c67b72\": not found"
Jul 2 08:29:40.973440 kubelet[2483]: I0702 08:29:40.973431 2483 scope.go:117] "RemoveContainer" containerID="b4dd65d941fff5f32ca789f80cd0d6c1b2e07b5b5042ec79b21f10c4ce5e3b66"
Jul 2 08:29:40.973616 containerd[1428]: time="2024-07-02T08:29:40.973585892Z" level=error msg="ContainerStatus for \"b4dd65d941fff5f32ca789f80cd0d6c1b2e07b5b5042ec79b21f10c4ce5e3b66\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b4dd65d941fff5f32ca789f80cd0d6c1b2e07b5b5042ec79b21f10c4ce5e3b66\": not found"
Jul 2 08:29:40.973724 kubelet[2483]: E0702 08:29:40.973708 2483 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4dd65d941fff5f32ca789f80cd0d6c1b2e07b5b5042ec79b21f10c4ce5e3b66\": not found" containerID="b4dd65d941fff5f32ca789f80cd0d6c1b2e07b5b5042ec79b21f10c4ce5e3b66"
Jul 2 08:29:40.973753 kubelet[2483]: I0702 08:29:40.973736 2483 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b4dd65d941fff5f32ca789f80cd0d6c1b2e07b5b5042ec79b21f10c4ce5e3b66"} err="failed to get container status \"b4dd65d941fff5f32ca789f80cd0d6c1b2e07b5b5042ec79b21f10c4ce5e3b66\": rpc error: code = NotFound desc = an error occurred when try to find container \"b4dd65d941fff5f32ca789f80cd0d6c1b2e07b5b5042ec79b21f10c4ce5e3b66\": not found"
Jul 2 08:29:40.973753 kubelet[2483]: I0702 08:29:40.973747 2483 scope.go:117] "RemoveContainer" containerID="9380a4db2301630d2c31cb2174e3d0a6ca3d66da2b8929dcb070b16a476c031f"
Jul 2 08:29:40.973992 containerd[1428]: time="2024-07-02T08:29:40.973960729Z" level=error msg="ContainerStatus for \"9380a4db2301630d2c31cb2174e3d0a6ca3d66da2b8929dcb070b16a476c031f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9380a4db2301630d2c31cb2174e3d0a6ca3d66da2b8929dcb070b16a476c031f\": not found"
Jul 2 08:29:40.974108 kubelet[2483]: E0702 08:29:40.974090 2483 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9380a4db2301630d2c31cb2174e3d0a6ca3d66da2b8929dcb070b16a476c031f\": not found" containerID="9380a4db2301630d2c31cb2174e3d0a6ca3d66da2b8929dcb070b16a476c031f"
Jul 2 08:29:40.974143 kubelet[2483]: I0702 08:29:40.974126 2483 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9380a4db2301630d2c31cb2174e3d0a6ca3d66da2b8929dcb070b16a476c031f"} err="failed to get container status \"9380a4db2301630d2c31cb2174e3d0a6ca3d66da2b8929dcb070b16a476c031f\": rpc error: code = NotFound desc = an error occurred when try to find container \"9380a4db2301630d2c31cb2174e3d0a6ca3d66da2b8929dcb070b16a476c031f\": not found"
Jul 2 08:29:40.974143 kubelet[2483]: I0702 08:29:40.974139 2483 scope.go:117] "RemoveContainer" containerID="438d01be81ada6e4cad52b8ad2571ec60a8183f20106b9b21c7791b9c62af427"
Jul 2 08:29:40.974370 containerd[1428]: time="2024-07-02T08:29:40.974308806Z" level=error msg="ContainerStatus for \"438d01be81ada6e4cad52b8ad2571ec60a8183f20106b9b21c7791b9c62af427\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"438d01be81ada6e4cad52b8ad2571ec60a8183f20106b9b21c7791b9c62af427\": not found"
Jul 2 08:29:40.974439 kubelet[2483]: E0702 08:29:40.974422 2483 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"438d01be81ada6e4cad52b8ad2571ec60a8183f20106b9b21c7791b9c62af427\": not found" containerID="438d01be81ada6e4cad52b8ad2571ec60a8183f20106b9b21c7791b9c62af427"
Jul 2 08:29:40.974469 kubelet[2483]: I0702 08:29:40.974449 2483 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"438d01be81ada6e4cad52b8ad2571ec60a8183f20106b9b21c7791b9c62af427"} err="failed to get container status \"438d01be81ada6e4cad52b8ad2571ec60a8183f20106b9b21c7791b9c62af427\": rpc error: code = NotFound desc = an error occurred when try to find container \"438d01be81ada6e4cad52b8ad2571ec60a8183f20106b9b21c7791b9c62af427\": not found"
Jul 2 08:29:41.401906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e97d07149ae53562f594cf9ee2958f06407af85853b109b11944af44dc9e67b-rootfs.mount: Deactivated successfully.
Jul 2 08:29:41.401999 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e97d07149ae53562f594cf9ee2958f06407af85853b109b11944af44dc9e67b-shm.mount: Deactivated successfully.
Jul 2 08:29:41.402054 systemd[1]: var-lib-kubelet-pods-9d6eec17\x2d7108\x2d4d26\x2db2a4\x2d07574d4ed9c0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4q254.mount: Deactivated successfully.
Jul 2 08:29:41.402115 systemd[1]: var-lib-kubelet-pods-4468e7a9\x2dc994\x2d4c49\x2d80f6\x2da439ff82a97a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dww8pn.mount: Deactivated successfully.
Jul 2 08:29:41.402185 systemd[1]: var-lib-kubelet-pods-4468e7a9\x2dc994\x2d4c49\x2d80f6\x2da439ff82a97a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 2 08:29:41.402245 systemd[1]: var-lib-kubelet-pods-4468e7a9\x2dc994\x2d4c49\x2d80f6\x2da439ff82a97a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 2 08:29:41.719846 kubelet[2483]: I0702 08:29:41.719738 2483 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4468e7a9-c994-4c49-80f6-a439ff82a97a" path="/var/lib/kubelet/pods/4468e7a9-c994-4c49-80f6-a439ff82a97a/volumes"
Jul 2 08:29:41.720395 kubelet[2483]: I0702 08:29:41.720297 2483 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9d6eec17-7108-4d26-b2a4-07574d4ed9c0" path="/var/lib/kubelet/pods/9d6eec17-7108-4d26-b2a4-07574d4ed9c0/volumes"
Jul 2 08:29:41.766451 kubelet[2483]: E0702 08:29:41.766414 2483 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 08:29:42.346895 sshd[4122]: pam_unix(sshd:session): session closed for user core
Jul 2 08:29:42.353581 systemd[1]: sshd@22-10.0.0.104:22-10.0.0.1:54948.service: Deactivated successfully.
Jul 2 08:29:42.355064 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 08:29:42.355302 systemd[1]: session-23.scope: Consumed 1.220s CPU time.
Jul 2 08:29:42.356330 systemd-logind[1412]: Session 23 logged out. Waiting for processes to exit.
Jul 2 08:29:42.365414 systemd[1]: Started sshd@23-10.0.0.104:22-10.0.0.1:41342.service - OpenSSH per-connection server daemon (10.0.0.1:41342).
Jul 2 08:29:42.367033 systemd-logind[1412]: Removed session 23.
Jul 2 08:29:42.398787 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 41342 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:29:42.399886 sshd[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:29:42.403586 systemd-logind[1412]: New session 24 of user core.
Jul 2 08:29:42.422290 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 2 08:29:43.137528 sshd[4286]: pam_unix(sshd:session): session closed for user core
Jul 2 08:29:43.147291 kubelet[2483]: I0702 08:29:43.147236 2483 topology_manager.go:215] "Topology Admit Handler" podUID="5f1d7814-451e-4a71-86dd-14b284b28beb" podNamespace="kube-system" podName="cilium-b6bll"
Jul 2 08:29:43.147291 kubelet[2483]: E0702 08:29:43.147291 2483 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4468e7a9-c994-4c49-80f6-a439ff82a97a" containerName="mount-cgroup"
Jul 2 08:29:43.147291 kubelet[2483]: E0702 08:29:43.147302 2483 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d6eec17-7108-4d26-b2a4-07574d4ed9c0" containerName="cilium-operator"
Jul 2 08:29:43.147658 kubelet[2483]: E0702 08:29:43.147308 2483 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4468e7a9-c994-4c49-80f6-a439ff82a97a" containerName="cilium-agent"
Jul 2 08:29:43.147658 kubelet[2483]: E0702 08:29:43.147315 2483 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4468e7a9-c994-4c49-80f6-a439ff82a97a" containerName="apply-sysctl-overwrites"
Jul 2 08:29:43.147658 kubelet[2483]: E0702 08:29:43.147323 2483 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4468e7a9-c994-4c49-80f6-a439ff82a97a" containerName="mount-bpf-fs"
Jul 2 08:29:43.147658 kubelet[2483]: E0702 08:29:43.147331 2483 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4468e7a9-c994-4c49-80f6-a439ff82a97a" containerName="clean-cilium-state"
Jul 2 08:29:43.147658 kubelet[2483]: I0702 08:29:43.147352 2483 memory_manager.go:346] "RemoveStaleState removing state" podUID="4468e7a9-c994-4c49-80f6-a439ff82a97a" containerName="cilium-agent"
Jul 2 08:29:43.147658 kubelet[2483]: I0702 08:29:43.147359 2483 memory_manager.go:346] "RemoveStaleState removing state" podUID="9d6eec17-7108-4d26-b2a4-07574d4ed9c0" containerName="cilium-operator"
Jul 2 08:29:43.149039 systemd[1]: sshd@23-10.0.0.104:22-10.0.0.1:41342.service: Deactivated successfully.
Jul 2 08:29:43.155898 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 08:29:43.156954 kubelet[2483]: I0702 08:29:43.156918 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5f1d7814-451e-4a71-86dd-14b284b28beb-hostproc\") pod \"cilium-b6bll\" (UID: \"5f1d7814-451e-4a71-86dd-14b284b28beb\") " pod="kube-system/cilium-b6bll"
Jul 2 08:29:43.157052 kubelet[2483]: I0702 08:29:43.156981 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f1d7814-451e-4a71-86dd-14b284b28beb-xtables-lock\") pod \"cilium-b6bll\" (UID: \"5f1d7814-451e-4a71-86dd-14b284b28beb\") " pod="kube-system/cilium-b6bll"
Jul 2 08:29:43.157052 kubelet[2483]: I0702 08:29:43.157009 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5f1d7814-451e-4a71-86dd-14b284b28beb-host-proc-sys-net\") pod \"cilium-b6bll\" (UID: \"5f1d7814-451e-4a71-86dd-14b284b28beb\") " pod="kube-system/cilium-b6bll"
Jul 2 08:29:43.157102 kubelet[2483]: I0702 08:29:43.157073 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f1d7814-451e-4a71-86dd-14b284b28beb-lib-modules\") pod \"cilium-b6bll\" (UID: \"5f1d7814-451e-4a71-86dd-14b284b28beb\") " pod="kube-system/cilium-b6bll"
Jul 2 08:29:43.157131 kubelet[2483]: I0702 08:29:43.157125 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f1d7814-451e-4a71-86dd-14b284b28beb-cilium-config-path\") pod \"cilium-b6bll\" (UID: \"5f1d7814-451e-4a71-86dd-14b284b28beb\") " pod="kube-system/cilium-b6bll"
Jul 2 08:29:43.157342 kubelet[2483]: I0702 08:29:43.157151 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5f1d7814-451e-4a71-86dd-14b284b28beb-host-proc-sys-kernel\") pod \"cilium-b6bll\" (UID: \"5f1d7814-451e-4a71-86dd-14b284b28beb\") " pod="kube-system/cilium-b6bll"
Jul 2 08:29:43.157342 kubelet[2483]: I0702 08:29:43.157214 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5f1d7814-451e-4a71-86dd-14b284b28beb-cilium-cgroup\") pod \"cilium-b6bll\" (UID: \"5f1d7814-451e-4a71-86dd-14b284b28beb\") " pod="kube-system/cilium-b6bll"
Jul 2 08:29:43.157342 kubelet[2483]: I0702 08:29:43.157242 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5f1d7814-451e-4a71-86dd-14b284b28beb-etc-cni-netd\") pod \"cilium-b6bll\" (UID: \"5f1d7814-451e-4a71-86dd-14b284b28beb\") " pod="kube-system/cilium-b6bll"
Jul 2 08:29:43.157342 kubelet[2483]: I0702 08:29:43.157342 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5f1d7814-451e-4a71-86dd-14b284b28beb-clustermesh-secrets\") pod \"cilium-b6bll\" (UID: \"5f1d7814-451e-4a71-86dd-14b284b28beb\") " pod="kube-system/cilium-b6bll"
Jul 2 08:29:43.157443 kubelet[2483]: I0702 08:29:43.157378 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5f1d7814-451e-4a71-86dd-14b284b28beb-bpf-maps\") pod \"cilium-b6bll\" (UID: \"5f1d7814-451e-4a71-86dd-14b284b28beb\") " pod="kube-system/cilium-b6bll"
Jul 2 08:29:43.157443 kubelet[2483]: I0702 08:29:43.157398 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5f1d7814-451e-4a71-86dd-14b284b28beb-hubble-tls\") pod \"cilium-b6bll\" (UID: \"5f1d7814-451e-4a71-86dd-14b284b28beb\") " pod="kube-system/cilium-b6bll"
Jul 2 08:29:43.157489 kubelet[2483]: I0702 08:29:43.157420 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5f1d7814-451e-4a71-86dd-14b284b28beb-cilium-ipsec-secrets\") pod \"cilium-b6bll\" (UID: \"5f1d7814-451e-4a71-86dd-14b284b28beb\") " pod="kube-system/cilium-b6bll"
Jul 2 08:29:43.158189 kubelet[2483]: I0702 08:29:43.157563 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5f1d7814-451e-4a71-86dd-14b284b28beb-cilium-run\") pod \"cilium-b6bll\" (UID: \"5f1d7814-451e-4a71-86dd-14b284b28beb\") " pod="kube-system/cilium-b6bll"
Jul 2 08:29:43.158189 kubelet[2483]: I0702 08:29:43.157604 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5f1d7814-451e-4a71-86dd-14b284b28beb-cni-path\") pod \"cilium-b6bll\" (UID: \"5f1d7814-451e-4a71-86dd-14b284b28beb\") " pod="kube-system/cilium-b6bll"
Jul 2 08:29:43.158189 kubelet[2483]: I0702 08:29:43.157664 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vrgh\" (UniqueName: \"kubernetes.io/projected/5f1d7814-451e-4a71-86dd-14b284b28beb-kube-api-access-4vrgh\") pod \"cilium-b6bll\" (UID: \"5f1d7814-451e-4a71-86dd-14b284b28beb\") " pod="kube-system/cilium-b6bll"
Jul 2 08:29:43.165506 systemd-logind[1412]: Session 24 logged out. Waiting for processes to exit.
Jul 2 08:29:43.177495 systemd[1]: Started sshd@24-10.0.0.104:22-10.0.0.1:41346.service - OpenSSH per-connection server daemon (10.0.0.1:41346).
Jul 2 08:29:43.180632 systemd-logind[1412]: Removed session 24.
Jul 2 08:29:43.182891 systemd[1]: Created slice kubepods-burstable-pod5f1d7814_451e_4a71_86dd_14b284b28beb.slice - libcontainer container kubepods-burstable-pod5f1d7814_451e_4a71_86dd_14b284b28beb.slice.
Jul 2 08:29:43.210228 sshd[4299]: Accepted publickey for core from 10.0.0.1 port 41346 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:29:43.211043 sshd[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:29:43.214647 systemd-logind[1412]: New session 25 of user core.
Jul 2 08:29:43.221325 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 2 08:29:43.278263 sshd[4299]: pam_unix(sshd:session): session closed for user core
Jul 2 08:29:43.288873 systemd[1]: sshd@24-10.0.0.104:22-10.0.0.1:41346.service: Deactivated successfully.
Jul 2 08:29:43.290867 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 08:29:43.293181 systemd-logind[1412]: Session 25 logged out. Waiting for processes to exit.
Jul 2 08:29:43.305688 systemd[1]: Started sshd@25-10.0.0.104:22-10.0.0.1:41358.service - OpenSSH per-connection server daemon (10.0.0.1:41358).
Jul 2 08:29:43.306589 systemd-logind[1412]: Removed session 25.
Jul 2 08:29:43.334417 sshd[4311]: Accepted publickey for core from 10.0.0.1 port 41358 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:29:43.335545 sshd[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:29:43.339779 systemd-logind[1412]: New session 26 of user core. Jul 2 08:29:43.349316 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 2 08:29:43.477141 kubelet[2483]: I0702 08:29:43.477019 2483 setters.go:552] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T08:29:43Z","lastTransitionTime":"2024-07-02T08:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 08:29:43.490587 kubelet[2483]: E0702 08:29:43.490524 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:29:43.491585 containerd[1428]: time="2024-07-02T08:29:43.491210856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b6bll,Uid:5f1d7814-451e-4a71-86dd-14b284b28beb,Namespace:kube-system,Attempt:0,}" Jul 2 08:29:43.509129 containerd[1428]: time="2024-07-02T08:29:43.508905648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:29:43.509129 containerd[1428]: time="2024-07-02T08:29:43.508951488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:29:43.509129 containerd[1428]: time="2024-07-02T08:29:43.508965608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:29:43.509129 containerd[1428]: time="2024-07-02T08:29:43.508974928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:29:43.525343 systemd[1]: Started cri-containerd-c567eb6632eb4f5719485ab3c9a08f9eda61aa54fc648fabb5ffac7ee7a21b7e.scope - libcontainer container c567eb6632eb4f5719485ab3c9a08f9eda61aa54fc648fabb5ffac7ee7a21b7e. Jul 2 08:29:43.543642 containerd[1428]: time="2024-07-02T08:29:43.543534196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b6bll,Uid:5f1d7814-451e-4a71-86dd-14b284b28beb,Namespace:kube-system,Attempt:0,} returns sandbox id \"c567eb6632eb4f5719485ab3c9a08f9eda61aa54fc648fabb5ffac7ee7a21b7e\"" Jul 2 08:29:43.544490 kubelet[2483]: E0702 08:29:43.544470 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:29:43.547461 containerd[1428]: time="2024-07-02T08:29:43.547427137Z" level=info msg="CreateContainer within sandbox \"c567eb6632eb4f5719485ab3c9a08f9eda61aa54fc648fabb5ffac7ee7a21b7e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:29:43.558353 containerd[1428]: time="2024-07-02T08:29:43.558233683Z" level=info msg="CreateContainer within sandbox \"c567eb6632eb4f5719485ab3c9a08f9eda61aa54fc648fabb5ffac7ee7a21b7e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e50ad0775708412cf69996df9c6fa76744adef07b9162ebdcf10f9abb15dc867\"" Jul 2 08:29:43.559017 containerd[1428]: time="2024-07-02T08:29:43.558991639Z" level=info msg="StartContainer for \"e50ad0775708412cf69996df9c6fa76744adef07b9162ebdcf10f9abb15dc867\"" Jul 2 08:29:43.592353 systemd[1]: Started cri-containerd-e50ad0775708412cf69996df9c6fa76744adef07b9162ebdcf10f9abb15dc867.scope - libcontainer container 
e50ad0775708412cf69996df9c6fa76744adef07b9162ebdcf10f9abb15dc867. Jul 2 08:29:43.613500 containerd[1428]: time="2024-07-02T08:29:43.613388769Z" level=info msg="StartContainer for \"e50ad0775708412cf69996df9c6fa76744adef07b9162ebdcf10f9abb15dc867\" returns successfully" Jul 2 08:29:43.636602 systemd[1]: cri-containerd-e50ad0775708412cf69996df9c6fa76744adef07b9162ebdcf10f9abb15dc867.scope: Deactivated successfully. Jul 2 08:29:43.664008 containerd[1428]: time="2024-07-02T08:29:43.663950037Z" level=info msg="shim disconnected" id=e50ad0775708412cf69996df9c6fa76744adef07b9162ebdcf10f9abb15dc867 namespace=k8s.io Jul 2 08:29:43.664008 containerd[1428]: time="2024-07-02T08:29:43.664001397Z" level=warning msg="cleaning up after shim disconnected" id=e50ad0775708412cf69996df9c6fa76744adef07b9162ebdcf10f9abb15dc867 namespace=k8s.io Jul 2 08:29:43.664008 containerd[1428]: time="2024-07-02T08:29:43.664009837Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:29:43.942112 kubelet[2483]: E0702 08:29:43.941936 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:29:43.944203 containerd[1428]: time="2024-07-02T08:29:43.943819885Z" level=info msg="CreateContainer within sandbox \"c567eb6632eb4f5719485ab3c9a08f9eda61aa54fc648fabb5ffac7ee7a21b7e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 08:29:43.954266 containerd[1428]: time="2024-07-02T08:29:43.954211713Z" level=info msg="CreateContainer within sandbox \"c567eb6632eb4f5719485ab3c9a08f9eda61aa54fc648fabb5ffac7ee7a21b7e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7d834d6691a255859957d0fdfd5f2b35767c4e674b6831e7683ecb63ddd9c33b\"" Jul 2 08:29:43.955504 containerd[1428]: time="2024-07-02T08:29:43.954705431Z" level=info msg="StartContainer for 
\"7d834d6691a255859957d0fdfd5f2b35767c4e674b6831e7683ecb63ddd9c33b\"" Jul 2 08:29:43.986386 systemd[1]: Started cri-containerd-7d834d6691a255859957d0fdfd5f2b35767c4e674b6831e7683ecb63ddd9c33b.scope - libcontainer container 7d834d6691a255859957d0fdfd5f2b35767c4e674b6831e7683ecb63ddd9c33b. Jul 2 08:29:44.007470 containerd[1428]: time="2024-07-02T08:29:44.007429096Z" level=info msg="StartContainer for \"7d834d6691a255859957d0fdfd5f2b35767c4e674b6831e7683ecb63ddd9c33b\" returns successfully" Jul 2 08:29:44.013835 systemd[1]: cri-containerd-7d834d6691a255859957d0fdfd5f2b35767c4e674b6831e7683ecb63ddd9c33b.scope: Deactivated successfully. Jul 2 08:29:44.033213 containerd[1428]: time="2024-07-02T08:29:44.032994636Z" level=info msg="shim disconnected" id=7d834d6691a255859957d0fdfd5f2b35767c4e674b6831e7683ecb63ddd9c33b namespace=k8s.io Jul 2 08:29:44.033213 containerd[1428]: time="2024-07-02T08:29:44.033055036Z" level=warning msg="cleaning up after shim disconnected" id=7d834d6691a255859957d0fdfd5f2b35767c4e674b6831e7683ecb63ddd9c33b namespace=k8s.io Jul 2 08:29:44.033213 containerd[1428]: time="2024-07-02T08:29:44.033062836Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:29:44.717659 kubelet[2483]: E0702 08:29:44.717584 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:29:44.950969 kubelet[2483]: E0702 08:29:44.950924 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:29:44.955987 containerd[1428]: time="2024-07-02T08:29:44.955851134Z" level=info msg="CreateContainer within sandbox \"c567eb6632eb4f5719485ab3c9a08f9eda61aa54fc648fabb5ffac7ee7a21b7e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 08:29:44.975465 containerd[1428]: time="2024-07-02T08:29:44.975136340Z" 
level=info msg="CreateContainer within sandbox \"c567eb6632eb4f5719485ab3c9a08f9eda61aa54fc648fabb5ffac7ee7a21b7e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a133b36e160967f337c7cf19c0ecb48833bced47810df5e8af2e88fa902f8d52\"" Jul 2 08:29:44.977521 containerd[1428]: time="2024-07-02T08:29:44.976363135Z" level=info msg="StartContainer for \"a133b36e160967f337c7cf19c0ecb48833bced47810df5e8af2e88fa902f8d52\"" Jul 2 08:29:45.009421 systemd[1]: Started cri-containerd-a133b36e160967f337c7cf19c0ecb48833bced47810df5e8af2e88fa902f8d52.scope - libcontainer container a133b36e160967f337c7cf19c0ecb48833bced47810df5e8af2e88fa902f8d52. Jul 2 08:29:45.031991 containerd[1428]: time="2024-07-02T08:29:45.031926432Z" level=info msg="StartContainer for \"a133b36e160967f337c7cf19c0ecb48833bced47810df5e8af2e88fa902f8d52\" returns successfully" Jul 2 08:29:45.033362 systemd[1]: cri-containerd-a133b36e160967f337c7cf19c0ecb48833bced47810df5e8af2e88fa902f8d52.scope: Deactivated successfully. Jul 2 08:29:45.062845 containerd[1428]: time="2024-07-02T08:29:45.062773625Z" level=info msg="shim disconnected" id=a133b36e160967f337c7cf19c0ecb48833bced47810df5e8af2e88fa902f8d52 namespace=k8s.io Jul 2 08:29:45.062845 containerd[1428]: time="2024-07-02T08:29:45.062829024Z" level=warning msg="cleaning up after shim disconnected" id=a133b36e160967f337c7cf19c0ecb48833bced47810df5e8af2e88fa902f8d52 namespace=k8s.io Jul 2 08:29:45.062845 containerd[1428]: time="2024-07-02T08:29:45.062844664Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:29:45.262376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a133b36e160967f337c7cf19c0ecb48833bced47810df5e8af2e88fa902f8d52-rootfs.mount: Deactivated successfully. 
Jul 2 08:29:45.956280 kubelet[2483]: E0702 08:29:45.956240 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:29:45.959117 containerd[1428]: time="2024-07-02T08:29:45.959068654Z" level=info msg="CreateContainer within sandbox \"c567eb6632eb4f5719485ab3c9a08f9eda61aa54fc648fabb5ffac7ee7a21b7e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 08:29:45.971530 containerd[1428]: time="2024-07-02T08:29:45.971325220Z" level=info msg="CreateContainer within sandbox \"c567eb6632eb4f5719485ab3c9a08f9eda61aa54fc648fabb5ffac7ee7a21b7e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a591758355c52bf05e1b9fa1ec5c3cf7e4153800a08e94e0a4ffa1810a5e105c\"" Jul 2 08:29:45.975994 containerd[1428]: time="2024-07-02T08:29:45.973613933Z" level=info msg="StartContainer for \"a591758355c52bf05e1b9fa1ec5c3cf7e4153800a08e94e0a4ffa1810a5e105c\"" Jul 2 08:29:46.003342 systemd[1]: Started cri-containerd-a591758355c52bf05e1b9fa1ec5c3cf7e4153800a08e94e0a4ffa1810a5e105c.scope - libcontainer container a591758355c52bf05e1b9fa1ec5c3cf7e4153800a08e94e0a4ffa1810a5e105c. Jul 2 08:29:46.022777 systemd[1]: cri-containerd-a591758355c52bf05e1b9fa1ec5c3cf7e4153800a08e94e0a4ffa1810a5e105c.scope: Deactivated successfully. 
Jul 2 08:29:46.026925 containerd[1428]: time="2024-07-02T08:29:46.026875769Z" level=info msg="StartContainer for \"a591758355c52bf05e1b9fa1ec5c3cf7e4153800a08e94e0a4ffa1810a5e105c\" returns successfully" Jul 2 08:29:46.047043 containerd[1428]: time="2024-07-02T08:29:46.046972053Z" level=info msg="shim disconnected" id=a591758355c52bf05e1b9fa1ec5c3cf7e4153800a08e94e0a4ffa1810a5e105c namespace=k8s.io Jul 2 08:29:46.047043 containerd[1428]: time="2024-07-02T08:29:46.047025413Z" level=warning msg="cleaning up after shim disconnected" id=a591758355c52bf05e1b9fa1ec5c3cf7e4153800a08e94e0a4ffa1810a5e105c namespace=k8s.io Jul 2 08:29:46.047043 containerd[1428]: time="2024-07-02T08:29:46.047033853Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:29:46.262444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a591758355c52bf05e1b9fa1ec5c3cf7e4153800a08e94e0a4ffa1810a5e105c-rootfs.mount: Deactivated successfully. Jul 2 08:29:46.767708 kubelet[2483]: E0702 08:29:46.767667 2483 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 08:29:46.960017 kubelet[2483]: E0702 08:29:46.959778 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:29:46.962200 containerd[1428]: time="2024-07-02T08:29:46.961914928Z" level=info msg="CreateContainer within sandbox \"c567eb6632eb4f5719485ab3c9a08f9eda61aa54fc648fabb5ffac7ee7a21b7e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 08:29:46.977391 containerd[1428]: time="2024-07-02T08:29:46.977336580Z" level=info msg="CreateContainer within sandbox \"c567eb6632eb4f5719485ab3c9a08f9eda61aa54fc648fabb5ffac7ee7a21b7e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id 
\"3e83e9f9eff848dec42bbc6da1fa509dd6f6eaa37a4b784f875b078a9febaba7\"" Jul 2 08:29:46.979175 containerd[1428]: time="2024-07-02T08:29:46.979095937Z" level=info msg="StartContainer for \"3e83e9f9eff848dec42bbc6da1fa509dd6f6eaa37a4b784f875b078a9febaba7\"" Jul 2 08:29:47.007323 systemd[1]: Started cri-containerd-3e83e9f9eff848dec42bbc6da1fa509dd6f6eaa37a4b784f875b078a9febaba7.scope - libcontainer container 3e83e9f9eff848dec42bbc6da1fa509dd6f6eaa37a4b784f875b078a9febaba7. Jul 2 08:29:47.032518 containerd[1428]: time="2024-07-02T08:29:47.032472192Z" level=info msg="StartContainer for \"3e83e9f9eff848dec42bbc6da1fa509dd6f6eaa37a4b784f875b078a9febaba7\" returns successfully" Jul 2 08:29:47.262556 systemd[1]: run-containerd-runc-k8s.io-3e83e9f9eff848dec42bbc6da1fa509dd6f6eaa37a4b784f875b078a9febaba7-runc.KiLkdA.mount: Deactivated successfully. Jul 2 08:29:47.296243 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 2 08:29:47.965207 kubelet[2483]: E0702 08:29:47.964702 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:29:47.977669 kubelet[2483]: I0702 08:29:47.977608 2483 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-b6bll" podStartSLOduration=4.977572631 podCreationTimestamp="2024-07-02 08:29:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:29:47.977088071 +0000 UTC m=+86.365234675" watchObservedRunningTime="2024-07-02 08:29:47.977572631 +0000 UTC m=+86.365719235" Jul 2 08:29:49.491943 kubelet[2483]: E0702 08:29:49.491902 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:29:49.718083 kubelet[2483]: E0702 08:29:49.717730 2483 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:29:50.052456 systemd-networkd[1375]: lxc_health: Link UP Jul 2 08:29:50.060388 systemd-networkd[1375]: lxc_health: Gained carrier Jul 2 08:29:51.494085 kubelet[2483]: E0702 08:29:51.494049 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:29:51.746262 systemd[1]: run-containerd-runc-k8s.io-3e83e9f9eff848dec42bbc6da1fa509dd6f6eaa37a4b784f875b078a9febaba7-runc.uXeBPY.mount: Deactivated successfully. Jul 2 08:29:51.974061 kubelet[2483]: E0702 08:29:51.974010 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:29:52.001287 systemd-networkd[1375]: lxc_health: Gained IPv6LL Jul 2 08:29:52.975597 kubelet[2483]: E0702 08:29:52.975516 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:29:56.009068 sshd[4311]: pam_unix(sshd:session): session closed for user core Jul 2 08:29:56.011945 systemd[1]: sshd@25-10.0.0.104:22-10.0.0.1:41358.service: Deactivated successfully. Jul 2 08:29:56.013943 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 08:29:56.015578 systemd-logind[1412]: Session 26 logged out. Waiting for processes to exit. Jul 2 08:29:56.016403 systemd-logind[1412]: Removed session 26.