Jun 25 18:30:10.884015 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jun 25 18:30:10.884035 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Tue Jun 25 17:19:03 -00 2024 Jun 25 18:30:10.884045 kernel: KASLR enabled Jun 25 18:30:10.884051 kernel: efi: EFI v2.7 by EDK II Jun 25 18:30:10.884057 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb8fd018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Jun 25 18:30:10.884063 kernel: random: crng init done Jun 25 18:30:10.884070 kernel: ACPI: Early table checksum verification disabled Jun 25 18:30:10.884075 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Jun 25 18:30:10.884081 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Jun 25 18:30:10.884089 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:30:10.884095 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:30:10.884101 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:30:10.884106 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:30:10.884112 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:30:10.884120 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:30:10.884127 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:30:10.884134 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:30:10.884140 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:30:10.884147 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jun 25 18:30:10.884153 kernel: NUMA: Failed to initialise from firmware Jun 25 18:30:10.884160 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jun 25 18:30:10.884166 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Jun 25 18:30:10.884172 kernel: Zone ranges: Jun 25 18:30:10.884178 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jun 25 18:30:10.884185 kernel: DMA32 empty Jun 25 18:30:10.884192 kernel: Normal empty Jun 25 18:30:10.884198 kernel: Movable zone start for each node Jun 25 18:30:10.884204 kernel: Early memory node ranges Jun 25 18:30:10.884211 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Jun 25 18:30:10.884217 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jun 25 18:30:10.884223 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jun 25 18:30:10.884229 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jun 25 18:30:10.884236 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jun 25 18:30:10.884242 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jun 25 18:30:10.884248 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jun 25 18:30:10.884255 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jun 25 18:30:10.884261 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jun 25 18:30:10.884268 kernel: psci: probing for conduit method from ACPI. Jun 25 18:30:10.884275 kernel: psci: PSCIv1.1 detected in firmware. 
Jun 25 18:30:10.884281 kernel: psci: Using standard PSCI v0.2 function IDs Jun 25 18:30:10.884290 kernel: psci: Trusted OS migration not required Jun 25 18:30:10.884296 kernel: psci: SMC Calling Convention v1.1 Jun 25 18:30:10.884303 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jun 25 18:30:10.884311 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Jun 25 18:30:10.884318 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Jun 25 18:30:10.884325 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jun 25 18:30:10.884331 kernel: Detected PIPT I-cache on CPU0 Jun 25 18:30:10.884338 kernel: CPU features: detected: GIC system register CPU interface Jun 25 18:30:10.884345 kernel: CPU features: detected: Hardware dirty bit management Jun 25 18:30:10.884351 kernel: CPU features: detected: Spectre-v4 Jun 25 18:30:10.884358 kernel: CPU features: detected: Spectre-BHB Jun 25 18:30:10.884365 kernel: CPU features: kernel page table isolation forced ON by KASLR Jun 25 18:30:10.884371 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jun 25 18:30:10.884379 kernel: CPU features: detected: ARM erratum 1418040 Jun 25 18:30:10.884386 kernel: alternatives: applying boot alternatives Jun 25 18:30:10.884393 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6069a8408a0ca7e7bc40a0bde7fe3ef89df2f98c4bdd2e7e7f9f8f3f8ad207f Jun 25 18:30:10.884409 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 18:30:10.884416 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 18:30:10.884423 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 18:30:10.884430 kernel: Fallback order for Node 0: 0 Jun 25 18:30:10.884436 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jun 25 18:30:10.884443 kernel: Policy zone: DMA Jun 25 18:30:10.884450 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 18:30:10.884456 kernel: software IO TLB: area num 4. Jun 25 18:30:10.884465 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jun 25 18:30:10.884472 kernel: Memory: 2386852K/2572288K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 185436K reserved, 0K cma-reserved) Jun 25 18:30:10.884479 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jun 25 18:30:10.884486 kernel: trace event string verifier disabled Jun 25 18:30:10.884492 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 18:30:10.884500 kernel: rcu: RCU event tracing is enabled. Jun 25 18:30:10.884506 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jun 25 18:30:10.884513 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 18:30:10.884520 kernel: Tracing variant of Tasks RCU enabled. Jun 25 18:30:10.884527 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jun 25 18:30:10.884534 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jun 25 18:30:10.884540 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 25 18:30:10.884548 kernel: GICv3: 256 SPIs implemented Jun 25 18:30:10.884555 kernel: GICv3: 0 Extended SPIs implemented Jun 25 18:30:10.884562 kernel: Root IRQ handler: gic_handle_irq Jun 25 18:30:10.884568 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jun 25 18:30:10.884575 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jun 25 18:30:10.884581 kernel: ITS [mem 0x08080000-0x0809ffff] Jun 25 18:30:10.884588 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1) Jun 25 18:30:10.884595 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1) Jun 25 18:30:10.884602 kernel: GICv3: using LPI property table @0x00000000400f0000 Jun 25 18:30:10.884609 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jun 25 18:30:10.884615 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 18:30:10.884623 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 18:30:10.884630 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jun 25 18:30:10.884637 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jun 25 18:30:10.884644 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jun 25 18:30:10.884651 kernel: arm-pv: using stolen time PV Jun 25 18:30:10.884657 kernel: Console: colour dummy device 80x25 Jun 25 18:30:10.884664 kernel: ACPI: Core revision 20230628 Jun 25 18:30:10.884671 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jun 25 18:30:10.884678 kernel: pid_max: default: 32768 minimum: 301 Jun 25 18:30:10.884685 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jun 25 18:30:10.884693 kernel: SELinux: Initializing. Jun 25 18:30:10.884700 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 18:30:10.884707 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 18:30:10.884714 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:30:10.884721 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:30:10.884728 kernel: rcu: Hierarchical SRCU implementation. Jun 25 18:30:10.884735 kernel: rcu: Max phase no-delay instances is 400. Jun 25 18:30:10.884742 kernel: Platform MSI: ITS@0x8080000 domain created Jun 25 18:30:10.884749 kernel: PCI/MSI: ITS@0x8080000 domain created Jun 25 18:30:10.884764 kernel: Remapping and enabling EFI services. Jun 25 18:30:10.884772 kernel: smp: Bringing up secondary CPUs ... 
Jun 25 18:30:10.884779 kernel: Detected PIPT I-cache on CPU1 Jun 25 18:30:10.884786 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jun 25 18:30:10.884793 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jun 25 18:30:10.884800 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 18:30:10.884807 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jun 25 18:30:10.884814 kernel: Detected PIPT I-cache on CPU2 Jun 25 18:30:10.884821 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jun 25 18:30:10.884828 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jun 25 18:30:10.884836 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 18:30:10.884843 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jun 25 18:30:10.884855 kernel: Detected PIPT I-cache on CPU3 Jun 25 18:30:10.884863 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jun 25 18:30:10.884871 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jun 25 18:30:10.884878 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 18:30:10.884885 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jun 25 18:30:10.884892 kernel: smp: Brought up 1 node, 4 CPUs Jun 25 18:30:10.884899 kernel: SMP: Total of 4 processors activated. Jun 25 18:30:10.884908 kernel: CPU features: detected: 32-bit EL0 Support Jun 25 18:30:10.884915 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jun 25 18:30:10.884923 kernel: CPU features: detected: Common not Private translations Jun 25 18:30:10.884930 kernel: CPU features: detected: CRC32 instructions Jun 25 18:30:10.884937 kernel: CPU features: detected: Enhanced Virtualization Traps Jun 25 18:30:10.884944 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jun 25 18:30:10.884951 kernel: CPU features: detected: LSE atomic instructions Jun 25 18:30:10.884959 kernel: CPU features: detected: Privileged Access Never Jun 25 18:30:10.884967 kernel: CPU features: detected: RAS Extension Support Jun 25 18:30:10.884974 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jun 25 18:30:10.884981 kernel: CPU: All CPU(s) started at EL1 Jun 25 18:30:10.884989 kernel: alternatives: applying system-wide alternatives Jun 25 18:30:10.884996 kernel: devtmpfs: initialized Jun 25 18:30:10.885003 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 18:30:10.885011 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jun 25 18:30:10.885018 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 18:30:10.885025 kernel: SMBIOS 3.0.0 present. 
Jun 25 18:30:10.885033 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Jun 25 18:30:10.885041 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 18:30:10.885048 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jun 25 18:30:10.885055 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jun 25 18:30:10.885063 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jun 25 18:30:10.885070 kernel: audit: initializing netlink subsys (disabled) Jun 25 18:30:10.885078 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Jun 25 18:30:10.885085 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 18:30:10.885092 kernel: cpuidle: using governor menu Jun 25 18:30:10.885101 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jun 25 18:30:10.885108 kernel: ASID allocator initialised with 32768 entries Jun 25 18:30:10.885115 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 18:30:10.885122 kernel: Serial: AMBA PL011 UART driver Jun 25 18:30:10.885130 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jun 25 18:30:10.885137 kernel: Modules: 0 pages in range for non-PLT usage Jun 25 18:30:10.885144 kernel: Modules: 509120 pages in range for PLT usage Jun 25 18:30:10.885151 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 18:30:10.885159 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 18:30:10.885167 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jun 25 18:30:10.885174 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jun 25 18:30:10.885182 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 18:30:10.885189 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 18:30:10.885196 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jun 25 18:30:10.885203 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jun 25 18:30:10.885211 kernel: ACPI: Added _OSI(Module Device) Jun 25 18:30:10.885218 kernel: ACPI: Added _OSI(Processor Device) Jun 25 18:30:10.885225 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 18:30:10.885234 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 18:30:10.885241 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 18:30:10.885248 kernel: ACPI: Interpreter enabled Jun 25 18:30:10.885255 kernel: ACPI: Using GIC for interrupt routing Jun 25 18:30:10.885262 kernel: ACPI: MCFG table detected, 1 entries Jun 25 18:30:10.885269 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jun 25 18:30:10.885277 kernel: printk: console [ttyAMA0] enabled Jun 25 18:30:10.885284 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 25 18:30:10.885415 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 25 18:30:10.885493 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jun 25 18:30:10.885559 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jun 25 18:30:10.885621 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jun 25 18:30:10.885683 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jun 25 18:30:10.885693 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jun 25 
18:30:10.885700 kernel: PCI host bridge to bus 0000:00 Jun 25 18:30:10.885791 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jun 25 18:30:10.885856 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jun 25 18:30:10.885911 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jun 25 18:30:10.885967 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 25 18:30:10.886044 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jun 25 18:30:10.886118 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jun 25 18:30:10.886182 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jun 25 18:30:10.886250 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jun 25 18:30:10.886314 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jun 25 18:30:10.886378 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jun 25 18:30:10.886451 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jun 25 18:30:10.886515 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jun 25 18:30:10.886572 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jun 25 18:30:10.886629 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jun 25 18:30:10.886688 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jun 25 18:30:10.886698 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jun 25 18:30:10.886706 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jun 25 18:30:10.886713 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jun 25 18:30:10.886720 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jun 25 18:30:10.886728 kernel: iommu: Default domain type: Translated Jun 25 18:30:10.886735 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jun 25 18:30:10.886742 kernel: efivars: Registered efivars operations Jun 25 18:30:10.886749 kernel: vgaarb: loaded Jun 25 18:30:10.886773 kernel: clocksource: Switched to clocksource arch_sys_counter Jun 25 18:30:10.886782 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 18:30:10.886789 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 18:30:10.886796 kernel: pnp: PnP ACPI init Jun 25 18:30:10.886876 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jun 25 18:30:10.886887 kernel: pnp: PnP ACPI: found 1 devices Jun 25 18:30:10.886894 kernel: NET: Registered PF_INET protocol family Jun 25 18:30:10.886902 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 18:30:10.886912 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 25 18:30:10.886920 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 18:30:10.886927 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 18:30:10.886934 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 25 18:30:10.886942 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 25 18:30:10.886949 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 18:30:10.886956 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 18:30:10.886964 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 18:30:10.886971 kernel: PCI: CLS 0 bytes, default 64 Jun 25 18:30:10.886979 kernel: kvm [1]: HYP mode 
not available Jun 25 18:30:10.886986 kernel: Initialise system trusted keyrings Jun 25 18:30:10.886994 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 25 18:30:10.887001 kernel: Key type asymmetric registered Jun 25 18:30:10.887008 kernel: Asymmetric key parser 'x509' registered Jun 25 18:30:10.887016 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 25 18:30:10.887023 kernel: io scheduler mq-deadline registered Jun 25 18:30:10.887030 kernel: io scheduler kyber registered Jun 25 18:30:10.887038 kernel: io scheduler bfq registered Jun 25 18:30:10.887046 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jun 25 18:30:10.887054 kernel: ACPI: button: Power Button [PWRB] Jun 25 18:30:10.887061 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jun 25 18:30:10.887127 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jun 25 18:30:10.887137 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 18:30:10.887144 kernel: thunder_xcv, ver 1.0 Jun 25 18:30:10.887151 kernel: thunder_bgx, ver 1.0 Jun 25 18:30:10.887158 kernel: nicpf, ver 1.0 Jun 25 18:30:10.887165 kernel: nicvf, ver 1.0 Jun 25 18:30:10.887237 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jun 25 18:30:10.887297 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-06-25T18:30:10 UTC (1719340210) Jun 25 18:30:10.887307 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 18:30:10.887315 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jun 25 18:30:10.887322 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jun 25 18:30:10.887329 kernel: watchdog: Hard watchdog permanently disabled Jun 25 18:30:10.887337 kernel: NET: Registered PF_INET6 protocol family Jun 25 18:30:10.887344 kernel: Segment Routing with IPv6 Jun 25 18:30:10.887353 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 18:30:10.887360 kernel: NET: Registered PF_PACKET protocol family Jun 25 18:30:10.887367 kernel: Key type dns_resolver registered Jun 25 18:30:10.887374 kernel: registered taskstats version 1 Jun 25 18:30:10.887381 kernel: Loading compiled-in X.509 certificates Jun 25 18:30:10.887389 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 751918e575d02f96b0daadd44b8f442a8c39ecd3' Jun 25 18:30:10.887402 kernel: Key type .fscrypt registered Jun 25 18:30:10.887410 kernel: Key type fscrypt-provisioning registered Jun 25 18:30:10.887417 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jun 25 18:30:10.887426 kernel: ima: Allocated hash algorithm: sha1 Jun 25 18:30:10.887433 kernel: ima: No architecture policies found Jun 25 18:30:10.887440 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 25 18:30:10.887448 kernel: clk: Disabling unused clocks Jun 25 18:30:10.887455 kernel: Freeing unused kernel memory: 39040K Jun 25 18:30:10.887462 kernel: Run /init as init process Jun 25 18:30:10.887469 kernel: with arguments: Jun 25 18:30:10.887477 kernel: /init Jun 25 18:30:10.887484 kernel: with environment: Jun 25 18:30:10.887492 kernel: HOME=/ Jun 25 18:30:10.887499 kernel: TERM=linux Jun 25 18:30:10.887506 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 18:30:10.887515 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:30:10.887525 systemd[1]: Detected virtualization kvm. Jun 25 18:30:10.887533 systemd[1]: Detected architecture arm64. Jun 25 18:30:10.887540 systemd[1]: Running in initrd. Jun 25 18:30:10.887548 systemd[1]: No hostname configured, using default hostname. Jun 25 18:30:10.887557 systemd[1]: Hostname set to . Jun 25 18:30:10.887565 systemd[1]: Initializing machine ID from VM UUID. Jun 25 18:30:10.887573 systemd[1]: Queued start job for default target initrd.target. Jun 25 18:30:10.887580 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:30:10.887588 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:30:10.887596 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 25 18:30:10.887604 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:30:10.887614 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 25 18:30:10.887622 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 25 18:30:10.887631 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 25 18:30:10.887639 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 25 18:30:10.887647 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:30:10.887655 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:30:10.887663 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:30:10.887672 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:30:10.887680 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:30:10.887687 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:30:10.887695 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:30:10.887703 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:30:10.887711 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 18:30:10.887718 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 18:30:10.887726 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jun 25 18:30:10.887734 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:30:10.887743 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:30:10.887751 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:30:10.887775 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 25 18:30:10.887785 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:30:10.887793 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 18:30:10.887800 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 18:30:10.887808 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:30:10.887816 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:30:10.887824 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:30:10.887834 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 25 18:30:10.887842 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:30:10.887850 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 18:30:10.887858 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:30:10.887885 systemd-journald[238]: Collecting audit messages is disabled. Jun 25 18:30:10.887904 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:30:10.887912 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:30:10.887921 systemd-journald[238]: Journal started Jun 25 18:30:10.887940 systemd-journald[238]: Runtime Journal (/run/log/journal/4c799e4c08ef42a6afc9dfde5588c5b8) is 5.9M, max 47.3M, 41.4M free. Jun 25 18:30:10.880039 systemd-modules-load[239]: Inserted module 'overlay' Jun 25 18:30:10.889251 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:30:10.892673 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:30:10.895282 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 18:30:10.895919 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:30:10.898211 systemd-modules-load[239]: Inserted module 'br_netfilter' Jun 25 18:30:10.898903 kernel: Bridge firewalling registered Jun 25 18:30:10.899914 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:30:10.901220 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:30:10.905321 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:30:10.908161 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:30:10.912069 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:30:10.913941 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:30:10.916141 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:30:10.917748 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:30:10.919570 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jun 25 18:30:10.931934 dracut-cmdline[275]: dracut-dracut-053 Jun 25 18:30:10.934288 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6069a8408a0ca7e7bc40a0bde7fe3ef89df2f98c4bdd2e7e7f9f8f3f8ad207f Jun 25 18:30:10.943611 systemd-resolved[274]: Positive Trust Anchors: Jun 25 18:30:10.943628 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:30:10.943659 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:30:10.949907 systemd-resolved[274]: Defaulting to hostname 'linux'. Jun 25 18:30:10.950801 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:30:10.951798 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:30:10.995785 kernel: SCSI subsystem initialized Jun 25 18:30:10.999779 kernel: Loading iSCSI transport class v2.0-870. Jun 25 18:30:11.007798 kernel: iscsi: registered transport (tcp) Jun 25 18:30:11.021928 kernel: iscsi: registered transport (qla4xxx) Jun 25 18:30:11.021948 kernel: QLogic iSCSI HBA Driver Jun 25 18:30:11.061880 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 18:30:11.072876 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 18:30:11.088282 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 18:30:11.088321 kernel: device-mapper: uevent: version 1.0.3 Jun 25 18:30:11.088344 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 18:30:11.135778 kernel: raid6: neonx8 gen() 15707 MB/s Jun 25 18:30:11.151774 kernel: raid6: neonx4 gen() 15600 MB/s Jun 25 18:30:11.168782 kernel: raid6: neonx2 gen() 13174 MB/s Jun 25 18:30:11.185774 kernel: raid6: neonx1 gen() 10438 MB/s Jun 25 18:30:11.202775 kernel: raid6: int64x8 gen() 6933 MB/s Jun 25 18:30:11.219781 kernel: raid6: int64x4 gen() 7321 MB/s Jun 25 18:30:11.236771 kernel: raid6: int64x2 gen() 6109 MB/s Jun 25 18:30:11.253775 kernel: raid6: int64x1 gen() 5040 MB/s Jun 25 18:30:11.253791 kernel: raid6: using algorithm neonx8 gen() 15707 MB/s Jun 25 18:30:11.270777 kernel: raid6: .... xor() 11886 MB/s, rmw enabled Jun 25 18:30:11.270792 kernel: raid6: using neon recovery algorithm Jun 25 18:30:11.275887 kernel: xor: measuring software checksum speed Jun 25 18:30:11.275915 kernel: 8regs : 19878 MB/sec Jun 25 18:30:11.276776 kernel: 32regs : 19716 MB/sec Jun 25 18:30:11.277897 kernel: arm64_neon : 27152 MB/sec Jun 25 18:30:11.277909 kernel: xor: using function: arm64_neon (27152 MB/sec) Jun 25 18:30:11.327779 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 18:30:11.338818 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jun 25 18:30:11.352902 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:30:11.363461 systemd-udevd[460]: Using default interface naming scheme 'v255'. Jun 25 18:30:11.366570 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:30:11.368202 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 18:30:11.382529 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Jun 25 18:30:11.407899 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:30:11.414897 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:30:11.453235 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:30:11.456941 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 18:30:11.469460 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 18:30:11.472504 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:30:11.473748 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:30:11.475590 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:30:11.483914 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 18:30:11.493898 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:30:11.502124 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jun 25 18:30:11.510264 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jun 25 18:30:11.510363 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 18:30:11.510374 kernel: GPT:9289727 != 19775487 Jun 25 18:30:11.510383 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 18:30:11.510401 kernel: GPT:9289727 != 19775487 Jun 25 18:30:11.510412 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 18:30:11.510421 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:30:11.513085 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:30:11.513190 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:30:11.515048 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:30:11.516201 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:30:11.516334 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:30:11.518925 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:30:11.529028 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (524) Jun 25 18:30:11.529064 kernel: BTRFS: device fsid c80091a6-4bf3-4ad3-8e1c-e6eb918765f9 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (511) Jun 25 18:30:11.533996 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:30:11.545858 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:30:11.550382 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 25 18:30:11.557435 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 25 18:30:11.561718 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jun 25 18:30:11.565543 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 25 18:30:11.566562 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 25 18:30:11.578983 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:30:11.580879 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:30:11.585336 disk-uuid[551]: Primary Header is updated. Jun 25 18:30:11.585336 disk-uuid[551]: Secondary Entries is updated. Jun 25 18:30:11.585336 disk-uuid[551]: Secondary Header is updated. Jun 25 18:30:11.589027 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:30:11.601857 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:30:12.600609 disk-uuid[553]: The operation has completed successfully. Jun 25 18:30:12.601449 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:30:12.620147 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 18:30:12.620249 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:30:12.642928 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:30:12.646652 sh[574]: Success Jun 25 18:30:12.664814 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jun 25 18:30:12.691435 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 18:30:12.707175 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 18:30:12.708563 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 18:30:12.718308 kernel: BTRFS info (device dm-0): first mount of filesystem c80091a6-4bf3-4ad3-8e1c-e6eb918765f9 Jun 25 18:30:12.718343 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:30:12.718354 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:30:12.719073 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:30:12.720091 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:30:12.723295 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 18:30:12.724408 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:30:12.733893 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 18:30:12.735486 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 18:30:12.742218 kernel: BTRFS info (device vda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:30:12.742256 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:30:12.742272 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:30:12.744835 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:30:12.751452 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:30:12.752842 kernel: BTRFS info (device vda6): last unmount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:30:12.759264 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:30:12.765941 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jun 25 18:30:12.833680 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:30:12.844781 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:30:12.861057 ignition[667]: Ignition 2.19.0 Jun 25 18:30:12.861068 ignition[667]: Stage: fetch-offline Jun 25 18:30:12.861102 ignition[667]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:30:12.861110 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:30:12.861198 ignition[667]: parsed url from cmdline: "" Jun 25 18:30:12.861201 ignition[667]: no config URL provided Jun 25 18:30:12.861206 ignition[667]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:30:12.861213 ignition[667]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:30:12.861236 ignition[667]: op(1): [started] loading QEMU firmware config module Jun 25 18:30:12.861240 ignition[667]: op(1): executing: "modprobe" "qemu_fw_cfg" Jun 25 18:30:12.868663 ignition[667]: op(1): [finished] loading QEMU firmware config module Jun 25 18:30:12.870981 systemd-networkd[765]: lo: Link UP Jun 25 18:30:12.870993 systemd-networkd[765]: lo: Gained carrier Jun 25 18:30:12.871976 systemd-networkd[765]: Enumeration completed Jun 25 18:30:12.872399 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:30:12.872403 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:30:12.873502 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:30:12.873546 systemd-networkd[765]: eth0: Link UP Jun 25 18:30:12.873550 systemd-networkd[765]: eth0: Gained carrier Jun 25 18:30:12.873557 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:30:12.874836 systemd[1]: Reached target network.target - Network. Jun 25 18:30:12.893801 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 18:30:12.916217 ignition[667]: parsing config with SHA512: 73183f7359ce269df629decb992579b84b1576efa6a5b97d821c813867d1ae65b14e7ddfb40c02aea5057f0e0f807a49d0ae84c6a1f28a4088e38014e0ea5b6f Jun 25 18:30:12.920294 unknown[667]: fetched base config from "system" Jun 25 18:30:12.920311 unknown[667]: fetched user config from "qemu" Jun 25 18:30:12.920730 ignition[667]: fetch-offline: fetch-offline passed Jun 25 18:30:12.920810 ignition[667]: Ignition finished successfully Jun 25 18:30:12.922606 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:30:12.924125 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 25 18:30:12.935914 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 18:30:12.947513 ignition[772]: Ignition 2.19.0 Jun 25 18:30:12.947523 ignition[772]: Stage: kargs Jun 25 18:30:12.947670 ignition[772]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:30:12.947681 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:30:12.950718 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jun 25 18:30:12.948557 ignition[772]: kargs: kargs passed Jun 25 18:30:12.948601 ignition[772]: Ignition finished successfully Jun 25 18:30:12.960991 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 18:30:12.970523 ignition[781]: Ignition 2.19.0 Jun 25 18:30:12.970533 ignition[781]: Stage: disks Jun 25 18:30:12.970687 ignition[781]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:30:12.970696 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:30:12.971528 ignition[781]: disks: disks passed Jun 25 18:30:12.973819 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:30:12.971573 ignition[781]: Ignition finished successfully Jun 25 18:30:12.974879 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:30:12.976021 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:30:12.977375 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:30:12.978952 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:30:12.980435 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:30:12.990945 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 18:30:13.002138 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 18:30:13.008921 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:30:13.019864 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 18:30:13.061782 kernel: EXT4-fs (vda9): mounted filesystem 91548e21-ce72-437e-94b9-d3fed380163a r/w with ordered data mode. Quota mode: none. Jun 25 18:30:13.062022 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 18:30:13.063050 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:30:13.077859 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:30:13.079492 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:30:13.080685 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 18:30:13.080725 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 18:30:13.087873 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800) Jun 25 18:30:13.087896 kernel: BTRFS info (device vda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:30:13.087906 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:30:13.087924 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:30:13.080747 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:30:13.087043 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 18:30:13.091258 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:30:13.089355 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 18:30:13.091607 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 25 18:30:13.130748 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:30:13.133884 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:30:13.136913 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:30:13.139916 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:30:13.209921 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 18:30:13.224863 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:30:13.227189 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 18:30:13.231776 kernel: BTRFS info (device vda6): last unmount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:30:13.247862 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 18:30:13.249515 ignition[914]: INFO : Ignition 2.19.0 Jun 25 18:30:13.249515 ignition[914]: INFO : Stage: mount Jun 25 18:30:13.249515 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:30:13.249515 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:30:13.249515 ignition[914]: INFO : mount: mount passed Jun 25 18:30:13.249515 ignition[914]: INFO : Ignition finished successfully Jun 25 18:30:13.250486 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 18:30:13.258874 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 18:30:13.717590 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 18:30:13.730959 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:30:13.735775 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (928) Jun 25 18:30:13.735803 kernel: BTRFS info (device vda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:30:13.737218 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:30:13.737776 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:30:13.739777 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:30:13.740619 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 25 18:30:13.756852 ignition[945]: INFO : Ignition 2.19.0 Jun 25 18:30:13.756852 ignition[945]: INFO : Stage: files Jun 25 18:30:13.758032 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:30:13.758032 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:30:13.760459 ignition[945]: DEBUG : files: compiled without relabeling support, skipping Jun 25 18:30:13.761541 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 18:30:13.761541 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 18:30:13.763788 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 18:30:13.763788 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 18:30:13.763788 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 18:30:13.763269 unknown[945]: wrote ssh authorized keys file for user: core Jun 25 18:30:13.767712 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 18:30:13.767712 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jun 25 18:30:13.987702 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 18:30:14.028199 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 18:30:14.028199 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 25 18:30:14.031109 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jun 25 18:30:14.358654 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 25 18:30:14.407037 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 25 18:30:14.408510 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 25 18:30:14.408510 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 18:30:14.408510 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:30:14.408510 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:30:14.408510 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:30:14.408510 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:30:14.408510 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:30:14.408510 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:30:14.408510 ignition[945]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:30:14.408510 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:30:14.408510 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 18:30:14.408510 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 18:30:14.408510 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 18:30:14.408510 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1 Jun 25 18:30:14.647891 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 25 18:30:14.827585 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 18:30:14.827585 ignition[945]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 25 18:30:14.830522 ignition[945]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:30:14.830522 ignition[945]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:30:14.830522 ignition[945]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 25 18:30:14.830522 ignition[945]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jun 25 18:30:14.830522 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 18:30:14.830522 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 18:30:14.830522 ignition[945]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jun 25 18:30:14.830522 ignition[945]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jun 25 18:30:14.849312 ignition[945]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 18:30:14.852699 ignition[945]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 18:30:14.853847 ignition[945]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jun 25 18:30:14.853847 ignition[945]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jun 25 18:30:14.853847 ignition[945]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:30:14.853847 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:30:14.853847 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [finished] 
writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:30:14.853847 ignition[945]: INFO : files: files passed Jun 25 18:30:14.853847 ignition[945]: INFO : Ignition finished successfully Jun 25 18:30:14.855375 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 18:30:14.869896 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 18:30:14.872646 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 18:30:14.875116 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 18:30:14.875213 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 18:30:14.878288 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory Jun 25 18:30:14.881015 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:30:14.881015 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:30:14.883295 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:30:14.882906 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:30:14.884470 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 18:30:14.896880 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 18:30:14.904919 systemd-networkd[765]: eth0: Gained IPv6LL Jun 25 18:30:14.915291 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 18:30:14.915422 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 18:30:14.917015 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 18:30:14.918574 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 18:30:14.920193 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 18:30:14.920901 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 18:30:14.935187 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:30:14.951896 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 18:30:14.959231 systemd[1]: Stopped target network.target - Network. Jun 25 18:30:14.959995 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:30:14.961281 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:30:14.962720 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 18:30:14.964233 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 18:30:14.964344 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:30:14.966433 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 18:30:14.967871 systemd[1]: Stopped target basic.target - Basic System. Jun 25 18:30:14.969035 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 18:30:14.970270 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:30:14.971740 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 18:30:14.973332 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jun 25 18:30:14.974634 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:30:14.976280 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 18:30:14.977751 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 18:30:14.979333 systemd[1]: Stopped target swap.target - Swaps. Jun 25 18:30:14.980363 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 18:30:14.980482 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:30:14.982540 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:30:14.983926 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:30:14.985469 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 25 18:30:14.988813 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:30:14.989727 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 18:30:14.989857 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 18:30:14.991953 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 18:30:14.992068 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:30:14.993655 systemd[1]: Stopped target paths.target - Path Units. Jun 25 18:30:14.994967 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 18:30:14.995847 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:30:14.997104 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 18:30:14.998614 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 18:30:15.000402 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 18:30:15.000519 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:30:15.001723 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 18:30:15.001857 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:30:15.003291 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 18:30:15.003449 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:30:15.004680 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 18:30:15.004842 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 18:30:15.016956 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 18:30:15.018460 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 18:30:15.019564 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 18:30:15.022189 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 18:30:15.023307 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 18:30:15.023447 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:30:15.025185 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 18:30:15.025278 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jun 25 18:30:15.031356 ignition[1000]: INFO : Ignition 2.19.0 Jun 25 18:30:15.031356 ignition[1000]: INFO : Stage: umount Jun 25 18:30:15.031356 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:30:15.031356 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:30:15.031356 ignition[1000]: INFO : umount: umount passed Jun 25 18:30:15.031356 ignition[1000]: INFO : Ignition finished successfully Jun 25 18:30:15.031331 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 18:30:15.031811 systemd-networkd[765]: eth0: DHCPv6 lease lost Jun 25 18:30:15.032261 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 18:30:15.032350 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 18:30:15.034408 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 18:30:15.034512 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 18:30:15.036165 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 18:30:15.036243 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 18:30:15.039625 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 18:30:15.039713 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 18:30:15.041262 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 18:30:15.041319 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:30:15.042207 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 18:30:15.042253 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 18:30:15.043369 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 18:30:15.043416 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 18:30:15.044830 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 18:30:15.044872 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 18:30:15.046572 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 25 18:30:15.046615 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 18:30:15.054880 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 18:30:15.055840 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 18:30:15.055898 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:30:15.057278 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:30:15.057319 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:30:15.058717 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 18:30:15.058770 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 18:30:15.060121 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 18:30:15.060160 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:30:15.061653 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:30:15.070402 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 18:30:15.070496 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 18:30:15.073092 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jun 25 18:30:15.073211 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:30:15.074542 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 18:30:15.074599 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 18:30:15.075875 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 18:30:15.075902 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:30:15.077216 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 18:30:15.077255 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:30:15.082606 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 18:30:15.082653 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 18:30:15.085056 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:30:15.085106 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:30:15.094909 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 18:30:15.095757 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 18:30:15.095834 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:30:15.097435 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 25 18:30:15.097479 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:30:15.100677 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 18:30:15.100724 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:30:15.102686 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:30:15.102730 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:30:15.104992 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 18:30:15.105075 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 18:30:15.106670 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 18:30:15.108289 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 18:30:15.109596 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 18:30:15.110401 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 18:30:15.110457 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 18:30:15.116238 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 18:30:15.122184 systemd[1]: Switching root. Jun 25 18:30:15.150475 systemd-journald[238]: Journal stopped Jun 25 18:30:15.807635 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
Jun 25 18:30:15.807692 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 18:30:15.807705 kernel: SELinux: policy capability open_perms=1 Jun 25 18:30:15.807714 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 18:30:15.807725 kernel: SELinux: policy capability always_check_network=0 Jun 25 18:30:15.807738 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 18:30:15.807750 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 18:30:15.807775 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 18:30:15.807787 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 18:30:15.807796 kernel: audit: type=1403 audit(1719340215.303:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 18:30:15.807807 systemd[1]: Successfully loaded SELinux policy in 30.798ms. Jun 25 18:30:15.807819 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.205ms. Jun 25 18:30:15.807831 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:30:15.807842 systemd[1]: Detected virtualization kvm. Jun 25 18:30:15.807852 systemd[1]: Detected architecture arm64. Jun 25 18:30:15.807866 systemd[1]: Detected first boot. Jun 25 18:30:15.807876 systemd[1]: Initializing machine ID from VM UUID. Jun 25 18:30:15.807887 zram_generator::config[1046]: No configuration found. Jun 25 18:30:15.807898 systemd[1]: Populated /etc with preset unit settings. Jun 25 18:30:15.807910 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 18:30:15.807921 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 18:30:15.807932 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 18:30:15.807943 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 18:30:15.807956 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 18:30:15.807966 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 18:30:15.807977 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 18:30:15.807988 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 18:30:15.808003 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 18:30:15.808014 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 18:30:15.808024 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 18:30:15.808035 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:30:15.808047 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:30:15.808059 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 18:30:15.808070 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 18:30:15.808081 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jun 25 18:30:15.808091 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:30:15.808102 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jun 25 18:30:15.808113 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:30:15.808124 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 18:30:15.808135 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 18:30:15.808148 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 18:30:15.808158 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 18:30:15.808169 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:30:15.808180 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:30:15.808190 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:30:15.808201 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:30:15.808211 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 18:30:15.808222 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 18:30:15.808234 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:30:15.808244 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:30:15.808255 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:30:15.808269 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 18:30:15.808279 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 18:30:15.808290 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 18:30:15.808304 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 18:30:15.808316 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 18:30:15.808326 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 18:30:15.808338 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 18:30:15.808350 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 18:30:15.808361 systemd[1]: Reached target machines.target - Containers. Jun 25 18:30:15.808371 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 18:30:15.808389 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:30:15.808401 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:30:15.808412 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 18:30:15.808423 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:30:15.808433 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:30:15.808446 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:30:15.808456 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 18:30:15.808467 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jun 25 18:30:15.808478 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 18:30:15.808489 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 18:30:15.808500 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 18:30:15.808511 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 18:30:15.808521 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 18:30:15.808533 kernel: fuse: init (API version 7.39) Jun 25 18:30:15.808542 kernel: loop: module loaded Jun 25 18:30:15.808552 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:30:15.808563 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:30:15.808573 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 18:30:15.808583 kernel: ACPI: bus type drm_connector registered Jun 25 18:30:15.808593 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 18:30:15.808603 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:30:15.808615 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 18:30:15.808627 systemd[1]: Stopped verity-setup.service. Jun 25 18:30:15.808680 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 18:30:15.808695 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 18:30:15.808706 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 18:30:15.808735 systemd-journald[1112]: Collecting audit messages is disabled. Jun 25 18:30:15.808756 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 18:30:15.808778 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 18:30:15.808793 systemd-journald[1112]: Journal started Jun 25 18:30:15.808813 systemd-journald[1112]: Runtime Journal (/run/log/journal/4c799e4c08ef42a6afc9dfde5588c5b8) is 5.9M, max 47.3M, 41.4M free. Jun 25 18:30:15.639862 systemd[1]: Queued start job for default target multi-user.target. Jun 25 18:30:15.656637 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 25 18:30:15.657021 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 18:30:15.811241 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:30:15.811828 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 18:30:15.813823 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 18:30:15.814940 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:30:15.816317 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 18:30:15.816476 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 18:30:15.817593 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:30:15.817721 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:30:15.819000 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:30:15.819149 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:30:15.820323 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:30:15.820472 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jun 25 18:30:15.821727 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 18:30:15.821870 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 18:30:15.824951 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:30:15.825090 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:30:15.826106 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:30:15.827290 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 18:30:15.828576 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 18:30:15.840363 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 18:30:15.850867 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 18:30:15.852695 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 18:30:15.853549 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 18:30:15.853586 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:30:15.855416 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jun 25 18:30:15.857262 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 25 18:30:15.859086 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 18:30:15.860079 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:30:15.861449 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 18:30:15.863106 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 18:30:15.863971 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:30:15.864907 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 25 18:30:15.868912 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:30:15.870037 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:30:15.874655 systemd-journald[1112]: Time spent on flushing to /var/log/journal/4c799e4c08ef42a6afc9dfde5588c5b8 is 14.359ms for 857 entries. Jun 25 18:30:15.874655 systemd-journald[1112]: System Journal (/var/log/journal/4c799e4c08ef42a6afc9dfde5588c5b8) is 8.0M, max 195.6M, 187.6M free. Jun 25 18:30:15.907611 systemd-journald[1112]: Received client request to flush runtime journal. Jun 25 18:30:15.907660 kernel: loop0: detected capacity change from 0 to 193208 Jun 25 18:30:15.907683 kernel: block loop0: the capability attribute has been deprecated. Jun 25 18:30:15.875254 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 18:30:15.878158 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:30:15.882795 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:30:15.883837 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Jun 25 18:30:15.884912 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 18:30:15.885953 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 25 18:30:15.887301 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 25 18:30:15.891574 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 18:30:15.900996 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 18:30:15.903555 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 18:30:15.911905 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 18:30:15.913978 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Jun 25 18:30:15.913989 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Jun 25 18:30:15.916017 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:30:15.917709 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:30:15.923792 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 18:30:15.938057 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 18:30:15.939644 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 18:30:15.940340 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 18:30:15.943717 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 18:30:15.960972 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 18:30:15.963805 kernel: loop1: detected capacity change from 0 to 113712 Jun 25 18:30:15.971945 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:30:15.991324 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Jun 25 18:30:15.991341 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Jun 25 18:30:15.995653 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:30:16.010889 kernel: loop2: detected capacity change from 0 to 59688 Jun 25 18:30:16.046839 kernel: loop3: detected capacity change from 0 to 193208 Jun 25 18:30:16.054800 kernel: loop4: detected capacity change from 0 to 113712 Jun 25 18:30:16.061485 kernel: loop5: detected capacity change from 0 to 59688 Jun 25 18:30:16.065623 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jun 25 18:30:16.066832 (sd-merge)[1185]: Merged extensions into '/usr'. Jun 25 18:30:16.070940 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... Jun 25 18:30:16.070957 systemd[1]: Reloading... Jun 25 18:30:16.132784 zram_generator::config[1209]: No configuration found. Jun 25 18:30:16.193798 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 18:30:16.223626 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:30:16.260520 systemd[1]: Reloading finished in 189 ms. Jun 25 18:30:16.294695 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Jun 25 18:30:16.295935 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 18:30:16.307123 systemd[1]: Starting ensure-sysext.service... Jun 25 18:30:16.308864 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:30:16.320203 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)... Jun 25 18:30:16.320220 systemd[1]: Reloading... Jun 25 18:30:16.328413 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 18:30:16.328676 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 18:30:16.329315 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 18:30:16.329553 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Jun 25 18:30:16.329602 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Jun 25 18:30:16.331821 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:30:16.331833 systemd-tmpfiles[1245]: Skipping /boot Jun 25 18:30:16.338294 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:30:16.338309 systemd-tmpfiles[1245]: Skipping /boot Jun 25 18:30:16.365788 zram_generator::config[1267]: No configuration found. Jun 25 18:30:16.446470 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:30:16.483460 systemd[1]: Reloading finished in 162 ms. Jun 25 18:30:16.500797 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 18:30:16.507173 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:30:16.517216 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:30:16.519492 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 18:30:16.521609 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 18:30:16.525008 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:30:16.530034 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:30:16.537997 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 18:30:16.541082 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:30:16.550723 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:30:16.552548 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:30:16.557003 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:30:16.559283 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:30:16.560743 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 18:30:16.563805 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 18:30:16.566338 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jun 25 18:30:16.566469 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:30:16.567866 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:30:16.567989 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:30:16.572594 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:30:16.572740 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:30:16.574724 systemd-udevd[1312]: Using default interface naming scheme 'v255'. Jun 25 18:30:16.580564 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 18:30:16.584514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:30:16.594867 augenrules[1335]: No rules Jun 25 18:30:16.594995 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:30:16.597558 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:30:16.600045 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:30:16.604043 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:30:16.605105 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:30:16.607071 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 18:30:16.608562 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:30:16.609984 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 18:30:16.611554 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:30:16.613159 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 18:30:16.616406 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:30:16.616546 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:30:16.618094 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:30:16.618216 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:30:16.624618 systemd[1]: Finished ensure-sysext.service. Jun 25 18:30:16.627311 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 18:30:16.638786 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1350) Jun 25 18:30:16.641654 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:30:16.643010 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:30:16.646591 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 18:30:16.647524 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 18:30:16.647908 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:30:16.648076 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:30:16.652175 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jun 25 18:30:16.652324 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:30:16.655323 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jun 25 18:30:16.655448 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:30:16.689311 systemd-resolved[1310]: Positive Trust Anchors: Jun 25 18:30:16.689603 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:30:16.689638 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:30:16.692834 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1356) Jun 25 18:30:16.696303 systemd-resolved[1310]: Defaulting to hostname 'linux'. Jun 25 18:30:16.698926 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:30:16.700020 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:30:16.717790 systemd-networkd[1378]: lo: Link UP Jun 25 18:30:16.717799 systemd-networkd[1378]: lo: Gained carrier Jun 25 18:30:16.718489 systemd-networkd[1378]: Enumeration completed Jun 25 18:30:16.718592 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:30:16.719951 systemd[1]: Reached target network.target - Network. Jun 25 18:30:16.723395 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:30:16.723404 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:30:16.724449 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:30:16.724483 systemd-networkd[1378]: eth0: Link UP Jun 25 18:30:16.724486 systemd-networkd[1378]: eth0: Gained carrier Jun 25 18:30:16.724494 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:30:16.732040 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 18:30:16.732940 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 18:30:16.736579 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 18:30:16.737548 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 18:30:16.739529 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 18:30:16.743834 systemd-networkd[1378]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 18:30:16.744937 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection. Jun 25 18:30:16.746120 systemd-timesyncd[1379]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Jun 25 18:30:16.746173 systemd-timesyncd[1379]: Initial clock synchronization to Tue 2024-06-25 18:30:16.564060 UTC. Jun 25 18:30:16.764944 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 18:30:16.784028 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:30:16.795396 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 18:30:16.799661 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 18:30:16.816065 lvm[1401]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:30:16.829179 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:30:16.849644 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 18:30:16.851416 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:30:16.852311 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:30:16.853154 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 18:30:16.854095 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 18:30:16.855362 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 18:30:16.856588 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 18:30:16.857604 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 18:30:16.858610 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 18:30:16.858643 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:30:16.859322 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:30:16.861221 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 18:30:16.863416 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 18:30:16.868609 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 18:30:16.870532 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 18:30:16.871871 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 18:30:16.872789 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:30:16.873609 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:30:16.874539 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:30:16.874571 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:30:16.875461 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 18:30:16.877143 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 18:30:16.879902 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:30:16.880249 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 18:30:16.888682 jq[1411]: false Jun 25 18:30:16.885979 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jun 25 18:30:16.886969 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 18:30:16.888027 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 18:30:16.891700 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 18:30:16.894835 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 18:30:16.900232 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 18:30:16.902881 extend-filesystems[1412]: Found loop3 Jun 25 18:30:16.902881 extend-filesystems[1412]: Found loop4 Jun 25 18:30:16.902881 extend-filesystems[1412]: Found loop5 Jun 25 18:30:16.902881 extend-filesystems[1412]: Found vda Jun 25 18:30:16.902881 extend-filesystems[1412]: Found vda1 Jun 25 18:30:16.902881 extend-filesystems[1412]: Found vda2 Jun 25 18:30:16.902881 extend-filesystems[1412]: Found vda3 Jun 25 18:30:16.902881 extend-filesystems[1412]: Found usr Jun 25 18:30:16.902881 extend-filesystems[1412]: Found vda4 Jun 25 18:30:16.902881 extend-filesystems[1412]: Found vda6 Jun 25 18:30:16.902881 extend-filesystems[1412]: Found vda7 Jun 25 18:30:16.902881 extend-filesystems[1412]: Found vda9 Jun 25 18:30:16.902881 extend-filesystems[1412]: Checking size of /dev/vda9 Jun 25 18:30:16.916901 dbus-daemon[1410]: [system] SELinux support is enabled Jun 25 18:30:16.906035 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 18:30:16.909274 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 18:30:16.910788 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 18:30:16.913947 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 18:30:16.917002 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 18:30:16.919744 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 18:30:16.922503 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 18:30:16.929116 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 18:30:16.929273 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 18:30:16.929526 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 18:30:16.929654 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 18:30:16.932699 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 18:30:16.932872 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 18:30:16.943924 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 18:30:16.943965 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 18:30:16.945065 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jun 25 18:30:16.945089 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 18:30:16.952848 extend-filesystems[1412]: Resized partition /dev/vda9 Jun 25 18:30:16.953663 jq[1430]: true Jun 25 18:30:16.953853 extend-filesystems[1442]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 18:30:16.961589 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jun 25 18:30:16.970567 update_engine[1428]: I0625 18:30:16.970360 1428 main.cc:92] Flatcar Update Engine starting Jun 25 18:30:16.975023 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1354) Jun 25 18:30:16.975067 tar[1434]: linux-arm64/helm Jun 25 18:30:16.975135 (ntainerd)[1444]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 25 18:30:16.978799 jq[1443]: true Jun 25 18:30:16.980609 systemd[1]: Started update-engine.service - Update Engine. Jun 25 18:30:16.982976 update_engine[1428]: I0625 18:30:16.982926 1428 update_check_scheduler.cc:74] Next update check in 3m30s Jun 25 18:30:16.983558 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 18:30:16.991782 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jun 25 18:30:17.004000 extend-filesystems[1442]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 25 18:30:17.004000 extend-filesystems[1442]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 25 18:30:17.004000 extend-filesystems[1442]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jun 25 18:30:17.004830 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 18:30:17.011084 extend-filesystems[1412]: Resized filesystem in /dev/vda9 Jun 25 18:30:17.005010 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 18:30:17.006272 systemd-logind[1424]: Watching system buttons on /dev/input/event0 (Power Button) Jun 25 18:30:17.006556 systemd-logind[1424]: New seat seat0. Jun 25 18:30:17.009215 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 18:30:17.035261 bash[1464]: Updated "/home/core/.ssh/authorized_keys" Jun 25 18:30:17.035507 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 18:30:17.037437 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 25 18:30:17.060715 locksmithd[1450]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 18:30:17.169766 containerd[1444]: time="2024-06-25T18:30:17.167942564Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Jun 25 18:30:17.193176 containerd[1444]: time="2024-06-25T18:30:17.193138441Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 18:30:17.193221 containerd[1444]: time="2024-06-25T18:30:17.193180111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:30:17.194504 containerd[1444]: time="2024-06-25T18:30:17.194473055Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:30:17.194504 containerd[1444]: time="2024-06-25T18:30:17.194502920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:30:17.194715 containerd[1444]: time="2024-06-25T18:30:17.194691803Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:30:17.194746 containerd[1444]: time="2024-06-25T18:30:17.194716195Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 18:30:17.194829 containerd[1444]: time="2024-06-25T18:30:17.194811692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 18:30:17.194887 containerd[1444]: time="2024-06-25T18:30:17.194870758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:30:17.194887 containerd[1444]: time="2024-06-25T18:30:17.194885534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 18:30:17.194959 containerd[1444]: time="2024-06-25T18:30:17.194942957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:30:17.195139 containerd[1444]: time="2024-06-25T18:30:17.195119957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 18:30:17.195163 containerd[1444]: time="2024-06-25T18:30:17.195142395Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 18:30:17.195163 containerd[1444]: time="2024-06-25T18:30:17.195152793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:30:17.195259 containerd[1444]: time="2024-06-25T18:30:17.195240902Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:30:17.195259 containerd[1444]: time="2024-06-25T18:30:17.195257398Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 18:30:17.195321 containerd[1444]: time="2024-06-25T18:30:17.195305322Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 18:30:17.195347 containerd[1444]: time="2024-06-25T18:30:17.195319864Z" level=info msg="metadata content store policy set" policy=shared Jun 25 18:30:17.198188 containerd[1444]: time="2024-06-25T18:30:17.198165193Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 18:30:17.198221 containerd[1444]: time="2024-06-25T18:30:17.198193885Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jun 25 18:30:17.198221 containerd[1444]: time="2024-06-25T18:30:17.198206277Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 18:30:17.198266 containerd[1444]: time="2024-06-25T18:30:17.198233796Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 18:30:17.198266 containerd[1444]: time="2024-06-25T18:30:17.198247165Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 18:30:17.198266 containerd[1444]: time="2024-06-25T18:30:17.198257055Z" level=info msg="NRI interface is disabled by configuration." Jun 25 18:30:17.198314 containerd[1444]: time="2024-06-25T18:30:17.198268899Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 18:30:17.198401 containerd[1444]: time="2024-06-25T18:30:17.198379055Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 18:30:17.198424 containerd[1444]: time="2024-06-25T18:30:17.198402040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 18:30:17.198424 containerd[1444]: time="2024-06-25T18:30:17.198415174Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 18:30:17.198463 containerd[1444]: time="2024-06-25T18:30:17.198428230Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 18:30:17.198463 containerd[1444]: time="2024-06-25T18:30:17.198441834Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 18:30:17.198463 containerd[1444]: time="2024-06-25T18:30:17.198457157Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 18:30:17.198511 containerd[1444]: time="2024-06-25T18:30:17.198469783Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 18:30:17.198511 containerd[1444]: time="2024-06-25T18:30:17.198482761Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 18:30:17.198511 containerd[1444]: time="2024-06-25T18:30:17.198495817Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 18:30:17.198511 containerd[1444]: time="2024-06-25T18:30:17.198508052Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 18:30:17.198600 containerd[1444]: time="2024-06-25T18:30:17.198519584Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 18:30:17.198600 containerd[1444]: time="2024-06-25T18:30:17.198530256Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 18:30:17.198633 containerd[1444]: time="2024-06-25T18:30:17.198618287Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 18:30:17.199990 containerd[1444]: time="2024-06-25T18:30:17.199963338Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jun 25 18:30:17.200023 containerd[1444]: time="2024-06-25T18:30:17.200012513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 18:30:17.200045 containerd[1444]: time="2024-06-25T18:30:17.200027563Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 18:30:17.200064 containerd[1444]: time="2024-06-25T18:30:17.200050001Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 18:30:17.200182 containerd[1444]: time="2024-06-25T18:30:17.200168131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 18:30:17.200280 containerd[1444]: time="2024-06-25T18:30:17.200261439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 18:30:17.200304 containerd[1444]: time="2024-06-25T18:30:17.200282039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 18:30:17.200304 containerd[1444]: time="2024-06-25T18:30:17.200295486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 18:30:17.200338 containerd[1444]: time="2024-06-25T18:30:17.200308073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 18:30:17.200338 containerd[1444]: time="2024-06-25T18:30:17.200321520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 18:30:17.200338 containerd[1444]: time="2024-06-25T18:30:17.200333365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 18:30:17.200387 containerd[1444]: time="2024-06-25T18:30:17.200345014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 18:30:17.200387 containerd[1444]: time="2024-06-25T18:30:17.200357601Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 18:30:17.200581 containerd[1444]: time="2024-06-25T18:30:17.200558367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 18:30:17.200606 containerd[1444]: time="2024-06-25T18:30:17.200585144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 18:30:17.200606 containerd[1444]: time="2024-06-25T18:30:17.200598161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 18:30:17.200640 containerd[1444]: time="2024-06-25T18:30:17.200611061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 18:30:17.200640 containerd[1444]: time="2024-06-25T18:30:17.200623257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 18:30:17.200640 containerd[1444]: time="2024-06-25T18:30:17.200636313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 18:30:17.200692 containerd[1444]: time="2024-06-25T18:30:17.200649056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 18:30:17.200692 containerd[1444]: time="2024-06-25T18:30:17.200660002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 25 18:30:17.201233 containerd[1444]: time="2024-06-25T18:30:17.201129669Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 18:30:17.201335 containerd[1444]: time="2024-06-25T18:30:17.201236659Z" level=info msg="Connect containerd service" Jun 25 18:30:17.201335 containerd[1444]: time="2024-06-25T18:30:17.201266485Z" level=info msg="using legacy CRI server" Jun 25 18:30:17.201335 containerd[1444]: time="2024-06-25T18:30:17.201273443Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 18:30:17.202258 containerd[1444]: time="2024-06-25T18:30:17.201428748Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 18:30:17.202258 containerd[1444]: time="2024-06-25T18:30:17.202144370Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:30:17.202346 
containerd[1444]: time="2024-06-25T18:30:17.202332628Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 18:30:17.202366 containerd[1444]: time="2024-06-25T18:30:17.202354636Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 25 18:30:17.202530 containerd[1444]: time="2024-06-25T18:30:17.202365346Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 18:30:17.202575 containerd[1444]: time="2024-06-25T18:30:17.202533825Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 25 18:30:17.203047 containerd[1444]: time="2024-06-25T18:30:17.202503178Z" level=info msg="Start subscribing containerd event" Jun 25 18:30:17.203047 containerd[1444]: time="2024-06-25T18:30:17.203020340Z" level=info msg="Start recovering state" Jun 25 18:30:17.203103 containerd[1444]: time="2024-06-25T18:30:17.203078194Z" level=info msg="Start event monitor" Jun 25 18:30:17.203103 containerd[1444]: time="2024-06-25T18:30:17.203089491Z" level=info msg="Start snapshots syncer" Jun 25 18:30:17.203103 containerd[1444]: time="2024-06-25T18:30:17.203097543Z" level=info msg="Start cni network conf syncer for default" Jun 25 18:30:17.203103 containerd[1444]: time="2024-06-25T18:30:17.203103798Z" level=info msg="Start streaming server" Jun 25 18:30:17.203593 containerd[1444]: time="2024-06-25T18:30:17.203555640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 18:30:17.203631 containerd[1444]: time="2024-06-25T18:30:17.203618380Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 18:30:17.203795 containerd[1444]: time="2024-06-25T18:30:17.203776695Z" level=info msg="containerd successfully booted in 0.036705s" Jun 25 18:30:17.203853 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 18:30:17.330710 tar[1434]: linux-arm64/LICENSE Jun 25 18:30:17.330710 tar[1434]: linux-arm64/README.md Jun 25 18:30:17.348339 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 18:30:17.688544 sshd_keygen[1429]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 18:30:17.706627 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 18:30:17.717089 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 18:30:17.721991 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 18:30:17.722165 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 18:30:17.724501 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 18:30:17.735202 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 18:30:17.737464 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 18:30:17.739403 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jun 25 18:30:17.740393 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 18:30:18.360885 systemd-networkd[1378]: eth0: Gained IPv6LL Jun 25 18:30:18.363216 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 18:30:18.364689 systemd[1]: Reached target network-online.target - Network is Online. 
Jun 25 18:30:18.375991 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jun 25 18:30:18.378017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:30:18.379721 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 18:30:18.393347 systemd[1]: coreos-metadata.service: Deactivated successfully. Jun 25 18:30:18.393531 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jun 25 18:30:18.395124 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 18:30:18.402845 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 18:30:18.842936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:30:18.844313 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 18:30:18.846989 (kubelet)[1523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:30:18.850059 systemd[1]: Startup finished in 527ms (kernel) + 4.603s (initrd) + 3.580s (userspace) = 8.712s. Jun 25 18:30:19.307293 kubelet[1523]: E0625 18:30:19.307151 1523 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:30:19.309639 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:30:19.309787 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:30:23.110420 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 18:30:23.111607 systemd[1]: Started sshd@0-10.0.0.73:22-10.0.0.1:39228.service - OpenSSH per-connection server daemon (10.0.0.1:39228). Jun 25 18:30:23.163230 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 39228 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:30:23.166550 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:30:23.176162 systemd-logind[1424]: New session 1 of user core. Jun 25 18:30:23.177132 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 18:30:23.189999 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 18:30:23.199072 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 18:30:23.202114 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 18:30:23.207466 (systemd)[1541]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:30:23.283981 systemd[1541]: Queued start job for default target default.target. Jun 25 18:30:23.295692 systemd[1541]: Created slice app.slice - User Application Slice. Jun 25 18:30:23.295725 systemd[1541]: Reached target paths.target - Paths. Jun 25 18:30:23.295737 systemd[1541]: Reached target timers.target - Timers. Jun 25 18:30:23.296924 systemd[1541]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 25 18:30:23.306308 systemd[1541]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 25 18:30:23.306371 systemd[1541]: Reached target sockets.target - Sockets. Jun 25 18:30:23.306383 systemd[1541]: Reached target basic.target - Basic System. 
Jun 25 18:30:23.306419 systemd[1541]: Reached target default.target - Main User Target. Jun 25 18:30:23.306444 systemd[1541]: Startup finished in 93ms. Jun 25 18:30:23.306743 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 18:30:23.307969 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 18:30:23.366221 systemd[1]: Started sshd@1-10.0.0.73:22-10.0.0.1:39232.service - OpenSSH per-connection server daemon (10.0.0.1:39232). Jun 25 18:30:23.412372 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 39232 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:30:23.413494 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:30:23.417284 systemd-logind[1424]: New session 2 of user core. Jun 25 18:30:23.427958 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 18:30:23.479535 sshd[1552]: pam_unix(sshd:session): session closed for user core Jun 25 18:30:23.489063 systemd[1]: sshd@1-10.0.0.73:22-10.0.0.1:39232.service: Deactivated successfully. Jun 25 18:30:23.490323 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 18:30:23.492823 systemd-logind[1424]: Session 2 logged out. Waiting for processes to exit. Jun 25 18:30:23.493865 systemd[1]: Started sshd@2-10.0.0.73:22-10.0.0.1:39236.service - OpenSSH per-connection server daemon (10.0.0.1:39236). Jun 25 18:30:23.494573 systemd-logind[1424]: Removed session 2. Jun 25 18:30:23.525173 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 39236 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:30:23.526290 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:30:23.530092 systemd-logind[1424]: New session 3 of user core. Jun 25 18:30:23.538911 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 18:30:23.586814 sshd[1559]: pam_unix(sshd:session): session closed for user core Jun 25 18:30:23.600022 systemd[1]: sshd@2-10.0.0.73:22-10.0.0.1:39236.service: Deactivated successfully. Jun 25 18:30:23.601278 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 18:30:23.603895 systemd-logind[1424]: Session 3 logged out. Waiting for processes to exit. Jun 25 18:30:23.604904 systemd[1]: Started sshd@3-10.0.0.73:22-10.0.0.1:39252.service - OpenSSH per-connection server daemon (10.0.0.1:39252). Jun 25 18:30:23.605576 systemd-logind[1424]: Removed session 3. Jun 25 18:30:23.637508 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 39252 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:30:23.639592 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:30:23.643408 systemd-logind[1424]: New session 4 of user core. Jun 25 18:30:23.652911 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 18:30:23.703340 sshd[1566]: pam_unix(sshd:session): session closed for user core Jun 25 18:30:23.711992 systemd[1]: sshd@3-10.0.0.73:22-10.0.0.1:39252.service: Deactivated successfully. Jun 25 18:30:23.713536 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 18:30:23.715891 systemd-logind[1424]: Session 4 logged out. Waiting for processes to exit. Jun 25 18:30:23.717534 systemd[1]: Started sshd@4-10.0.0.73:22-10.0.0.1:39254.service - OpenSSH per-connection server daemon (10.0.0.1:39254). Jun 25 18:30:23.718635 systemd-logind[1424]: Removed session 4. 
Jun 25 18:30:23.748729 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 39254 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:30:23.749949 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:30:23.753713 systemd-logind[1424]: New session 5 of user core. Jun 25 18:30:23.767914 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 18:30:23.825182 sudo[1576]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 18:30:23.825411 sudo[1576]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:30:23.842502 sudo[1576]: pam_unix(sudo:session): session closed for user root Jun 25 18:30:23.845874 sshd[1573]: pam_unix(sshd:session): session closed for user core Jun 25 18:30:23.851989 systemd[1]: sshd@4-10.0.0.73:22-10.0.0.1:39254.service: Deactivated successfully. Jun 25 18:30:23.855097 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 18:30:23.856321 systemd-logind[1424]: Session 5 logged out. Waiting for processes to exit. Jun 25 18:30:23.868161 systemd[1]: Started sshd@5-10.0.0.73:22-10.0.0.1:39260.service - OpenSSH per-connection server daemon (10.0.0.1:39260). Jun 25 18:30:23.868980 systemd-logind[1424]: Removed session 5. Jun 25 18:30:23.897479 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 39260 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:30:23.899156 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:30:23.902822 systemd-logind[1424]: New session 6 of user core. Jun 25 18:30:23.918949 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 18:30:23.968962 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 18:30:23.969293 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:30:23.972432 sudo[1585]: pam_unix(sudo:session): session closed for user root Jun 25 18:30:23.977202 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 18:30:23.977434 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:30:23.994144 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 18:30:23.995202 auditctl[1588]: No rules Jun 25 18:30:23.995480 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 18:30:23.995659 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 18:30:23.997890 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:30:24.019330 augenrules[1606]: No rules Jun 25 18:30:24.019968 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:30:24.021071 sudo[1584]: pam_unix(sudo:session): session closed for user root Jun 25 18:30:24.022493 sshd[1581]: pam_unix(sshd:session): session closed for user core Jun 25 18:30:24.030976 systemd[1]: sshd@5-10.0.0.73:22-10.0.0.1:39260.service: Deactivated successfully. Jun 25 18:30:24.032318 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 18:30:24.033491 systemd-logind[1424]: Session 6 logged out. Waiting for processes to exit. Jun 25 18:30:24.040042 systemd[1]: Started sshd@6-10.0.0.73:22-10.0.0.1:39268.service - OpenSSH per-connection server daemon (10.0.0.1:39268). Jun 25 18:30:24.040817 systemd-logind[1424]: Removed session 6. 
Jun 25 18:30:24.072570 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 39268 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:30:24.073768 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:30:24.078911 systemd-logind[1424]: New session 7 of user core. Jun 25 18:30:24.091945 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 18:30:24.143817 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 18:30:24.144133 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:30:24.259149 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 18:30:24.259307 (dockerd)[1629]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 25 18:30:24.495643 dockerd[1629]: time="2024-06-25T18:30:24.495262495Z" level=info msg="Starting up" Jun 25 18:30:24.617290 dockerd[1629]: time="2024-06-25T18:30:24.617161259Z" level=info msg="Loading containers: start." Jun 25 18:30:24.726615 kernel: Initializing XFRM netlink socket Jun 25 18:30:24.807709 systemd-networkd[1378]: docker0: Link UP Jun 25 18:30:24.835238 dockerd[1629]: time="2024-06-25T18:30:24.833343929Z" level=info msg="Loading containers: done." Jun 25 18:30:24.891406 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1771971836-merged.mount: Deactivated successfully. Jun 25 18:30:24.894956 dockerd[1629]: time="2024-06-25T18:30:24.894912708Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 18:30:24.895118 dockerd[1629]: time="2024-06-25T18:30:24.895099148Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 18:30:24.895229 dockerd[1629]: time="2024-06-25T18:30:24.895213279Z" level=info msg="Daemon has completed initialization" Jun 25 18:30:24.929035 dockerd[1629]: time="2024-06-25T18:30:24.928985835Z" level=info msg="API listen on /run/docker.sock" Jun 25 18:30:24.929280 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 18:30:25.608574 containerd[1444]: time="2024-06-25T18:30:25.608527437Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 18:30:26.283115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3272138389.mount: Deactivated successfully. 
Jun 25 18:30:27.849789 containerd[1444]: time="2024-06-25T18:30:27.849595424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:27.850791 containerd[1444]: time="2024-06-25T18:30:27.850564550Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=31671540" Jun 25 18:30:27.851431 containerd[1444]: time="2024-06-25T18:30:27.851394951Z" level=info msg="ImageCreate event name:\"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:27.854927 containerd[1444]: time="2024-06-25T18:30:27.854893886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:27.855899 containerd[1444]: time="2024-06-25T18:30:27.855873389Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"31668338\" in 2.247307043s" Jun 25 18:30:27.855953 containerd[1444]: time="2024-06-25T18:30:27.855904999Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\"" Jun 25 18:30:27.873380 containerd[1444]: time="2024-06-25T18:30:27.873337688Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 18:30:29.560219 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 18:30:29.576035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:30:29.662923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:30:29.664170 (kubelet)[1843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:30:29.707973 kubelet[1843]: E0625 18:30:29.707919 1843 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:30:29.714123 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:30:29.714266 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jun 25 18:30:30.072541 containerd[1444]: time="2024-06-25T18:30:30.072498455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:30.073115 containerd[1444]: time="2024-06-25T18:30:30.072919361Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=28893120" Jun 25 18:30:30.074584 containerd[1444]: time="2024-06-25T18:30:30.074543703Z" level=info msg="ImageCreate event name:\"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:30.076973 containerd[1444]: time="2024-06-25T18:30:30.076944599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:30.078094 containerd[1444]: time="2024-06-25T18:30:30.078064969Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"30445463\" in 2.204697403s" Jun 25 18:30:30.078174 containerd[1444]: time="2024-06-25T18:30:30.078097956Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\"" Jun 25 18:30:30.096351 containerd[1444]: time="2024-06-25T18:30:30.096153958Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 18:30:31.145010 containerd[1444]: time="2024-06-25T18:30:31.144963792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:31.146025 containerd[1444]: time="2024-06-25T18:30:31.145545344Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=15358440" Jun 25 18:30:31.147499 containerd[1444]: time="2024-06-25T18:30:31.146907828Z" level=info msg="ImageCreate event name:\"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:31.152224 containerd[1444]: time="2024-06-25T18:30:31.150234478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:31.152224 containerd[1444]: time="2024-06-25T18:30:31.151623707Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"16910801\" in 1.055437874s" Jun 25 18:30:31.152224 containerd[1444]: time="2024-06-25T18:30:31.151656073Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\"" Jun 25 18:30:31.170672 
containerd[1444]: time="2024-06-25T18:30:31.170639766Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 18:30:32.231034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount747710787.mount: Deactivated successfully. Jun 25 18:30:32.411483 containerd[1444]: time="2024-06-25T18:30:32.411094747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:32.411852 containerd[1444]: time="2024-06-25T18:30:32.411545519Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=24772463" Jun 25 18:30:32.412906 containerd[1444]: time="2024-06-25T18:30:32.412529729Z" level=info msg="ImageCreate event name:\"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:32.414835 containerd[1444]: time="2024-06-25T18:30:32.414803887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:32.415538 containerd[1444]: time="2024-06-25T18:30:32.415385098Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"24771480\" in 1.244712484s" Jun 25 18:30:32.415538 containerd[1444]: time="2024-06-25T18:30:32.415413969Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\"" Jun 25 18:30:32.435657 containerd[1444]: time="2024-06-25T18:30:32.435434569Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 18:30:32.858582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1355464647.mount: Deactivated successfully. 
Jun 25 18:30:32.863261 containerd[1444]: time="2024-06-25T18:30:32.863213102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:32.864207 containerd[1444]: time="2024-06-25T18:30:32.864176257Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jun 25 18:30:32.864933 containerd[1444]: time="2024-06-25T18:30:32.864892013Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:32.866989 containerd[1444]: time="2024-06-25T18:30:32.866953745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:32.867865 containerd[1444]: time="2024-06-25T18:30:32.867831124Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 432.359948ms" Jun 25 18:30:32.867907 containerd[1444]: time="2024-06-25T18:30:32.867864102Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jun 25 18:30:32.887583 containerd[1444]: time="2024-06-25T18:30:32.887315933Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 18:30:33.461738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1079940258.mount: Deactivated successfully. 
Jun 25 18:30:35.346320 containerd[1444]: time="2024-06-25T18:30:35.345746746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:35.348453 containerd[1444]: time="2024-06-25T18:30:35.348124128Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Jun 25 18:30:35.349902 containerd[1444]: time="2024-06-25T18:30:35.349817998Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:35.355169 containerd[1444]: time="2024-06-25T18:30:35.355085863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:35.355806 containerd[1444]: time="2024-06-25T18:30:35.355746422Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.468391761s" Jun 25 18:30:35.355806 containerd[1444]: time="2024-06-25T18:30:35.355798116Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jun 25 18:30:35.378817 containerd[1444]: time="2024-06-25T18:30:35.378788863Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 18:30:35.921424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2351849699.mount: Deactivated successfully. 
Jun 25 18:30:36.287524 containerd[1444]: time="2024-06-25T18:30:36.287410515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:36.288933 containerd[1444]: time="2024-06-25T18:30:36.288879907Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558464" Jun 25 18:30:36.289820 containerd[1444]: time="2024-06-25T18:30:36.289771699Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:36.295117 containerd[1444]: time="2024-06-25T18:30:36.295075938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:30:36.296018 containerd[1444]: time="2024-06-25T18:30:36.295978871Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 917.152522ms" Jun 25 18:30:36.296063 containerd[1444]: time="2024-06-25T18:30:36.296017761Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Jun 25 18:30:39.964555 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 18:30:39.974271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:30:40.062160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:30:40.065559 (kubelet)[2030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:30:40.104551 kubelet[2030]: E0625 18:30:40.104497 2030 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:30:40.106919 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:30:40.107041 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:30:41.777033 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:30:41.787023 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:30:41.802615 systemd[1]: Reloading requested from client PID 2046 ('systemctl') (unit session-7.scope)... Jun 25 18:30:41.802633 systemd[1]: Reloading... Jun 25 18:30:41.871824 zram_generator::config[2080]: No configuration found. Jun 25 18:30:41.996827 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:30:42.049790 systemd[1]: Reloading finished in 246 ms. Jun 25 18:30:42.097420 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jun 25 18:30:42.100921 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:30:42.101151 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:30:42.102668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:30:42.199723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:30:42.204539 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:30:42.244069 kubelet[2130]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:30:42.244069 kubelet[2130]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:30:42.244069 kubelet[2130]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:30:42.244929 kubelet[2130]: I0625 18:30:42.244871 2130 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:30:43.107402 kubelet[2130]: I0625 18:30:43.107360 2130 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 18:30:43.107402 kubelet[2130]: I0625 18:30:43.107391 2130 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:30:43.107599 kubelet[2130]: I0625 18:30:43.107586 2130 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 18:30:43.162661 kubelet[2130]: I0625 18:30:43.160243 2130 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:30:43.166586 kubelet[2130]: E0625 18:30:43.166515 2130 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.73:6443: connect: connection refused Jun 25 18:30:43.168746 kubelet[2130]: W0625 18:30:43.168719 2130 machine.go:65] Cannot read vendor id correctly, set empty. Jun 25 18:30:43.169495 kubelet[2130]: I0625 18:30:43.169477 2130 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 18:30:43.169691 kubelet[2130]: I0625 18:30:43.169681 2130 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:30:43.169879 kubelet[2130]: I0625 18:30:43.169865 2130 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:30:43.169962 kubelet[2130]: I0625 18:30:43.169890 2130 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:30:43.169962 kubelet[2130]: I0625 18:30:43.169900 2130 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:30:43.170136 kubelet[2130]: I0625 18:30:43.170122 2130 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:30:43.171944 kubelet[2130]: W0625 18:30:43.171899 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Jun 25 18:30:43.172068 kubelet[2130]: E0625 18:30:43.172049 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Jun 25 18:30:43.172503 kubelet[2130]: I0625 18:30:43.172465 2130 kubelet.go:393] "Attempting to sync node with API server" Jun 25 18:30:43.172503 kubelet[2130]: I0625 18:30:43.172497 2130 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:30:43.172653 kubelet[2130]: I0625 18:30:43.172581 2130 kubelet.go:309] "Adding apiserver pod source" Jun 25 18:30:43.172653 kubelet[2130]: I0625 18:30:43.172602 2130 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:30:43.173751 kubelet[2130]: W0625 18:30:43.173703 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.73:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Jun 25 
18:30:43.173751 kubelet[2130]: E0625 18:30:43.173741 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.73:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Jun 25 18:30:43.174398 kubelet[2130]: I0625 18:30:43.173996 2130 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:30:43.177637 kubelet[2130]: W0625 18:30:43.177592 2130 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 18:30:43.178314 kubelet[2130]: I0625 18:30:43.178266 2130 server.go:1232] "Started kubelet" Jun 25 18:30:43.178780 kubelet[2130]: I0625 18:30:43.178548 2130 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:30:43.179577 kubelet[2130]: I0625 18:30:43.179539 2130 server.go:462] "Adding debug handlers to kubelet server" Jun 25 18:30:43.179818 kubelet[2130]: I0625 18:30:43.179789 2130 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 18:30:43.180085 kubelet[2130]: I0625 18:30:43.179795 2130 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:30:43.180298 kubelet[2130]: I0625 18:30:43.180265 2130 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:30:43.182967 kubelet[2130]: I0625 18:30:43.182945 2130 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:30:43.183778 kubelet[2130]: I0625 18:30:43.183748 2130 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:30:43.186194 kubelet[2130]: I0625 18:30:43.184850 2130 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:30:43.186194 kubelet[2130]: W0625 18:30:43.184114 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Jun 25 18:30:43.186194 kubelet[2130]: E0625 18:30:43.184899 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Jun 25 18:30:43.186194 kubelet[2130]: E0625 18:30:43.184186 2130 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="200ms" Jun 25 18:30:43.186373 kubelet[2130]: E0625 18:30:43.185458 2130 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17dc52d77aac4792", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", 
Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.June, 25, 18, 30, 43, 178243986, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 18, 30, 43, 178243986, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.73:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.73:6443: connect: connection refused'(may retry after sleeping) Jun 25 18:30:43.187547 kubelet[2130]: E0625 18:30:43.187520 2130 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 18:30:43.187724 kubelet[2130]: E0625 18:30:43.187710 2130 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:30:43.197248 kubelet[2130]: I0625 18:30:43.197224 2130 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:30:43.198599 kubelet[2130]: I0625 18:30:43.198580 2130 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 18:30:43.198681 kubelet[2130]: I0625 18:30:43.198671 2130 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:30:43.198777 kubelet[2130]: I0625 18:30:43.198747 2130 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 18:30:43.199088 kubelet[2130]: E0625 18:30:43.199059 2130 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:30:43.201121 kubelet[2130]: W0625 18:30:43.201051 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Jun 25 18:30:43.201121 kubelet[2130]: E0625 18:30:43.201107 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Jun 25 18:30:43.206964 kubelet[2130]: I0625 18:30:43.206506 2130 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:30:43.206964 kubelet[2130]: I0625 18:30:43.206527 2130 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:30:43.206964 kubelet[2130]: I0625 18:30:43.206552 2130 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:30:43.280260 kubelet[2130]: I0625 18:30:43.279891 2130 policy_none.go:49] "None policy: Start" Jun 25 18:30:43.281059 kubelet[2130]: I0625 18:30:43.281015 2130 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 18:30:43.281059 kubelet[2130]: I0625 18:30:43.281042 2130 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:30:43.283773 kubelet[2130]: I0625 18:30:43.283650 2130 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 18:30:43.284243 kubelet[2130]: E0625 18:30:43.284202 2130 
kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Jun 25 18:30:43.286918 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 18:30:43.299846 kubelet[2130]: E0625 18:30:43.299802 2130 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 18:30:43.300305 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 18:30:43.302769 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 25 18:30:43.313972 kubelet[2130]: I0625 18:30:43.313454 2130 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:30:43.313972 kubelet[2130]: I0625 18:30:43.313714 2130 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:30:43.315325 kubelet[2130]: E0625 18:30:43.315276 2130 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 25 18:30:43.386484 kubelet[2130]: E0625 18:30:43.386351 2130 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="400ms" Jun 25 18:30:43.485797 kubelet[2130]: I0625 18:30:43.485522 2130 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 18:30:43.485917 kubelet[2130]: E0625 18:30:43.485851 2130 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Jun 25 18:30:43.500241 kubelet[2130]: I0625 18:30:43.500175 2130 topology_manager.go:215] "Topology Admit Handler" podUID="447025d2e3507eeda66fc658f3f30cbd" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 18:30:43.501309 kubelet[2130]: I0625 18:30:43.501262 2130 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 18:30:43.502205 kubelet[2130]: I0625 18:30:43.502180 2130 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 18:30:43.507803 systemd[1]: Created slice kubepods-burstable-pod447025d2e3507eeda66fc658f3f30cbd.slice - libcontainer container kubepods-burstable-pod447025d2e3507eeda66fc658f3f30cbd.slice. Jun 25 18:30:43.525985 systemd[1]: Created slice kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice - libcontainer container kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice. Jun 25 18:30:43.529661 systemd[1]: Created slice kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice - libcontainer container kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice. 
Jun 25 18:30:43.587039 kubelet[2130]: I0625 18:30:43.586690 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jun 25 18:30:43.587039 kubelet[2130]: I0625 18:30:43.586746 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/447025d2e3507eeda66fc658f3f30cbd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"447025d2e3507eeda66fc658f3f30cbd\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:30:43.587039 kubelet[2130]: I0625 18:30:43.586788 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/447025d2e3507eeda66fc658f3f30cbd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"447025d2e3507eeda66fc658f3f30cbd\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:30:43.587039 kubelet[2130]: I0625 18:30:43.586813 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:30:43.587039 kubelet[2130]: I0625 18:30:43.586839 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:30:43.587246 kubelet[2130]: I0625 18:30:43.586860 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/447025d2e3507eeda66fc658f3f30cbd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"447025d2e3507eeda66fc658f3f30cbd\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:30:43.589628 kubelet[2130]: I0625 18:30:43.589384 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:30:43.589628 kubelet[2130]: I0625 18:30:43.589464 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:30:43.589628 kubelet[2130]: I0625 18:30:43.589526 2130 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " 
pod="kube-system/kube-controller-manager-localhost" Jun 25 18:30:43.787641 kubelet[2130]: E0625 18:30:43.787526 2130 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="800ms" Jun 25 18:30:43.825941 kubelet[2130]: E0625 18:30:43.825893 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:43.828941 kubelet[2130]: E0625 18:30:43.828585 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:43.835054 kubelet[2130]: E0625 18:30:43.835025 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:43.835634 containerd[1444]: time="2024-06-25T18:30:43.835601323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:447025d2e3507eeda66fc658f3f30cbd,Namespace:kube-system,Attempt:0,}" Jun 25 18:30:43.836422 containerd[1444]: time="2024-06-25T18:30:43.835622228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jun 25 18:30:43.836422 containerd[1444]: time="2024-06-25T18:30:43.836148056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jun 25 18:30:43.887468 kubelet[2130]: I0625 18:30:43.887408 2130 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 18:30:43.887740 kubelet[2130]: E0625 18:30:43.887716 2130 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Jun 25 18:30:44.065847 kubelet[2130]: W0625 18:30:44.065708 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Jun 25 18:30:44.065847 kubelet[2130]: E0625 18:30:44.065799 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Jun 25 18:30:44.127531 kubelet[2130]: W0625 18:30:44.127478 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Jun 25 18:30:44.127531 kubelet[2130]: E0625 18:30:44.127531 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Jun 25 18:30:44.147896 
kubelet[2130]: W0625 18:30:44.147814 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.73:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Jun 25 18:30:44.147896 kubelet[2130]: E0625 18:30:44.147862 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.73:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Jun 25 18:30:44.298530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4112488162.mount: Deactivated successfully. Jun 25 18:30:44.301303 containerd[1444]: time="2024-06-25T18:30:44.301245623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:30:44.302551 containerd[1444]: time="2024-06-25T18:30:44.302506363Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jun 25 18:30:44.304585 containerd[1444]: time="2024-06-25T18:30:44.304551778Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:30:44.305831 containerd[1444]: time="2024-06-25T18:30:44.305794569Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:30:44.306296 containerd[1444]: time="2024-06-25T18:30:44.306244610Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:30:44.307116 containerd[1444]: time="2024-06-25T18:30:44.307082772Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:30:44.307439 containerd[1444]: time="2024-06-25T18:30:44.307407011Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:30:44.310448 containerd[1444]: time="2024-06-25T18:30:44.310414031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:30:44.311668 containerd[1444]: time="2024-06-25T18:30:44.311631278Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 475.309224ms" Jun 25 18:30:44.312395 containerd[1444]: time="2024-06-25T18:30:44.312297306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 475.873324ms" Jun 25 18:30:44.314935 containerd[1444]: 
time="2024-06-25T18:30:44.314902734Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 478.66578ms" Jun 25 18:30:44.479920 containerd[1444]: time="2024-06-25T18:30:44.478228059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:30:44.479920 containerd[1444]: time="2024-06-25T18:30:44.478285024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:30:44.479920 containerd[1444]: time="2024-06-25T18:30:44.478298935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:30:44.479920 containerd[1444]: time="2024-06-25T18:30:44.478308609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:30:44.480777 containerd[1444]: time="2024-06-25T18:30:44.479583181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:30:44.480777 containerd[1444]: time="2024-06-25T18:30:44.479644943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:30:44.480777 containerd[1444]: time="2024-06-25T18:30:44.479663971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:30:44.480777 containerd[1444]: time="2024-06-25T18:30:44.479677363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:30:44.483279 containerd[1444]: time="2024-06-25T18:30:44.483202382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:30:44.483279 containerd[1444]: time="2024-06-25T18:30:44.483255109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:30:44.483383 containerd[1444]: time="2024-06-25T18:30:44.483273898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:30:44.484886 containerd[1444]: time="2024-06-25T18:30:44.484820101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:30:44.500926 systemd[1]: Started cri-containerd-c25b6dd4e1e285b9badec24beee3f428ec72d2acdd04a494303cd2604fa1caf3.scope - libcontainer container c25b6dd4e1e285b9badec24beee3f428ec72d2acdd04a494303cd2604fa1caf3. Jun 25 18:30:44.504421 systemd[1]: Started cri-containerd-79abdaf68bde1a4c4fafdfb0a41fb85ad6507cec8e0a6c6ebcaec7363c92524f.scope - libcontainer container 79abdaf68bde1a4c4fafdfb0a41fb85ad6507cec8e0a6c6ebcaec7363c92524f. Jun 25 18:30:44.505704 systemd[1]: Started cri-containerd-b1698a3693b27d46a611216b3da0ae8e37bad6da9ddcfda286623844fd4304fa.scope - libcontainer container b1698a3693b27d46a611216b3da0ae8e37bad6da9ddcfda286623844fd4304fa. 
Jun 25 18:30:44.532399 containerd[1444]: time="2024-06-25T18:30:44.532346061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c25b6dd4e1e285b9badec24beee3f428ec72d2acdd04a494303cd2604fa1caf3\"" Jun 25 18:30:44.535499 kubelet[2130]: E0625 18:30:44.535415 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:44.539507 containerd[1444]: time="2024-06-25T18:30:44.539457422Z" level=info msg="CreateContainer within sandbox \"c25b6dd4e1e285b9badec24beee3f428ec72d2acdd04a494303cd2604fa1caf3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 18:30:44.541359 containerd[1444]: time="2024-06-25T18:30:44.540872027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:447025d2e3507eeda66fc658f3f30cbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"79abdaf68bde1a4c4fafdfb0a41fb85ad6507cec8e0a6c6ebcaec7363c92524f\"" Jun 25 18:30:44.542125 kubelet[2130]: E0625 18:30:44.542101 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:44.545814 containerd[1444]: time="2024-06-25T18:30:44.544991319Z" level=info msg="CreateContainer within sandbox \"79abdaf68bde1a4c4fafdfb0a41fb85ad6507cec8e0a6c6ebcaec7363c92524f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 18:30:44.548553 containerd[1444]: time="2024-06-25T18:30:44.548513380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1698a3693b27d46a611216b3da0ae8e37bad6da9ddcfda286623844fd4304fa\"" Jun 25 18:30:44.549058 kubelet[2130]: E0625 18:30:44.549042 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:44.551501 containerd[1444]: time="2024-06-25T18:30:44.551471470Z" level=info msg="CreateContainer within sandbox \"b1698a3693b27d46a611216b3da0ae8e37bad6da9ddcfda286623844fd4304fa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 18:30:44.554842 containerd[1444]: time="2024-06-25T18:30:44.554743566Z" level=info msg="CreateContainer within sandbox \"c25b6dd4e1e285b9badec24beee3f428ec72d2acdd04a494303cd2604fa1caf3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a09732e256f0abf1cd6b900bc1d47482f7dcd2c6b4df60b86071803f50004932\"" Jun 25 18:30:44.555709 containerd[1444]: time="2024-06-25T18:30:44.555485827Z" level=info msg="StartContainer for \"a09732e256f0abf1cd6b900bc1d47482f7dcd2c6b4df60b86071803f50004932\"" Jun 25 18:30:44.560935 containerd[1444]: time="2024-06-25T18:30:44.560897879Z" level=info msg="CreateContainer within sandbox \"79abdaf68bde1a4c4fafdfb0a41fb85ad6507cec8e0a6c6ebcaec7363c92524f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f06edecb0310ef8048e73adc09a409458732337370a7172d64f7f4616c7bfdca\"" Jun 25 18:30:44.561479 containerd[1444]: time="2024-06-25T18:30:44.561421075Z" level=info msg="StartContainer for \"f06edecb0310ef8048e73adc09a409458732337370a7172d64f7f4616c7bfdca\"" Jun 25 18:30:44.569358 
containerd[1444]: time="2024-06-25T18:30:44.568753259Z" level=info msg="CreateContainer within sandbox \"b1698a3693b27d46a611216b3da0ae8e37bad6da9ddcfda286623844fd4304fa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6a82aff2762429597982c600fc4b4f3cd18655c2513c8b96273c3a34f2595810\"" Jun 25 18:30:44.570484 containerd[1444]: time="2024-06-25T18:30:44.570447771Z" level=info msg="StartContainer for \"6a82aff2762429597982c600fc4b4f3cd18655c2513c8b96273c3a34f2595810\"" Jun 25 18:30:44.579310 systemd[1]: Started cri-containerd-a09732e256f0abf1cd6b900bc1d47482f7dcd2c6b4df60b86071803f50004932.scope - libcontainer container a09732e256f0abf1cd6b900bc1d47482f7dcd2c6b4df60b86071803f50004932. Jun 25 18:30:44.583279 systemd[1]: Started cri-containerd-f06edecb0310ef8048e73adc09a409458732337370a7172d64f7f4616c7bfdca.scope - libcontainer container f06edecb0310ef8048e73adc09a409458732337370a7172d64f7f4616c7bfdca. Jun 25 18:30:44.590367 kubelet[2130]: E0625 18:30:44.588914 2130 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="1.6s" Jun 25 18:30:44.608151 systemd[1]: Started cri-containerd-6a82aff2762429597982c600fc4b4f3cd18655c2513c8b96273c3a34f2595810.scope - libcontainer container 6a82aff2762429597982c600fc4b4f3cd18655c2513c8b96273c3a34f2595810. Jun 25 18:30:44.634752 containerd[1444]: time="2024-06-25T18:30:44.634705820Z" level=info msg="StartContainer for \"a09732e256f0abf1cd6b900bc1d47482f7dcd2c6b4df60b86071803f50004932\" returns successfully" Jun 25 18:30:44.674054 containerd[1444]: time="2024-06-25T18:30:44.673993956Z" level=info msg="StartContainer for \"6a82aff2762429597982c600fc4b4f3cd18655c2513c8b96273c3a34f2595810\" returns successfully" Jun 25 18:30:44.674166 containerd[1444]: time="2024-06-25T18:30:44.674004709Z" level=info msg="StartContainer for \"f06edecb0310ef8048e73adc09a409458732337370a7172d64f7f4616c7bfdca\" returns successfully" Jun 25 18:30:44.698749 kubelet[2130]: I0625 18:30:44.694056 2130 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 18:30:44.698749 kubelet[2130]: E0625 18:30:44.694376 2130 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Jun 25 18:30:44.717796 kubelet[2130]: W0625 18:30:44.717220 2130 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Jun 25 18:30:44.717796 kubelet[2130]: E0625 18:30:44.717289 2130 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Jun 25 18:30:45.211273 kubelet[2130]: E0625 18:30:45.211052 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:45.213215 kubelet[2130]: E0625 18:30:45.213194 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:45.213729 kubelet[2130]: E0625 18:30:45.213709 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:46.215519 kubelet[2130]: E0625 18:30:46.215488 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:46.301112 kubelet[2130]: I0625 18:30:46.300782 2130 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 18:30:46.323358 kubelet[2130]: E0625 18:30:46.323317 2130 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 25 18:30:46.441314 kubelet[2130]: I0625 18:30:46.440834 2130 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jun 25 18:30:47.174246 kubelet[2130]: I0625 18:30:47.174178 2130 apiserver.go:52] "Watching apiserver" Jun 25 18:30:47.185874 kubelet[2130]: I0625 18:30:47.185827 2130 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:30:48.012620 kubelet[2130]: E0625 18:30:48.012542 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:48.217102 kubelet[2130]: E0625 18:30:48.217076 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:48.811594 systemd[1]: Reloading requested from client PID 2405 ('systemctl') (unit session-7.scope)... Jun 25 18:30:48.811610 systemd[1]: Reloading... Jun 25 18:30:48.883793 zram_generator::config[2445]: No configuration found. Jun 25 18:30:48.959175 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:30:49.023645 systemd[1]: Reloading finished in 211 ms. Jun 25 18:30:49.059285 kubelet[2130]: I0625 18:30:49.059251 2130 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:30:49.059542 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:30:49.074608 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:30:49.074893 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:30:49.075022 systemd[1]: kubelet.service: Consumed 1.378s CPU time, 116.3M memory peak, 0B memory swap peak. Jun 25 18:30:49.089061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:30:49.177072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:30:49.182322 (kubelet)[2484]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:30:49.236918 kubelet[2484]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
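The recurring dns.go warning means the host's resolv.conf lists more nameservers than the kubelet will pass through to pods; only the first three (1.1.1.1, 1.0.0.1 and 8.8.8.8 here) are applied, matching the conventional three-nameserver resolver limit. A rough sketch of such a check is below, reading a resolv.conf-style file with the standard library; it only illustrates the idea and is not the kubelet's dns.go implementation.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // conventional resolver limit the kubelet warns about

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        var servers []string
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            fields := strings.Fields(scanner.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("Nameserver limits exceeded: keeping %v, omitting %v\n",
                servers[:maxNameservers], servers[maxNameservers:])
        }
    }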
Jun 25 18:30:49.236918 kubelet[2484]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:30:49.236918 kubelet[2484]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:30:49.237200 kubelet[2484]: I0625 18:30:49.236965 2484 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:30:49.241013 sudo[2497]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 25 18:30:49.241229 sudo[2497]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jun 25 18:30:49.242299 kubelet[2484]: I0625 18:30:49.242259 2484 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 18:30:49.242299 kubelet[2484]: I0625 18:30:49.242294 2484 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:30:49.242472 kubelet[2484]: I0625 18:30:49.242456 2484 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 18:30:49.243890 kubelet[2484]: I0625 18:30:49.243874 2484 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 18:30:49.244868 kubelet[2484]: I0625 18:30:49.244756 2484 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:30:49.250061 kubelet[2484]: W0625 18:30:49.250046 2484 machine.go:65] Cannot read vendor id correctly, set empty. Jun 25 18:30:49.250750 kubelet[2484]: I0625 18:30:49.250725 2484 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 18:30:49.250971 kubelet[2484]: I0625 18:30:49.250960 2484 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:30:49.251119 kubelet[2484]: I0625 18:30:49.251106 2484 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:30:49.251188 kubelet[2484]: I0625 18:30:49.251133 2484 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:30:49.251188 kubelet[2484]: I0625 18:30:49.251141 2484 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:30:49.251188 kubelet[2484]: I0625 18:30:49.251174 2484 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:30:49.251259 kubelet[2484]: I0625 18:30:49.251253 2484 kubelet.go:393] "Attempting to sync node with API server" Jun 25 18:30:49.251289 kubelet[2484]: I0625 18:30:49.251264 2484 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:30:49.251315 kubelet[2484]: I0625 18:30:49.251295 2484 kubelet.go:309] "Adding apiserver pod source" Jun 25 18:30:49.251315 kubelet[2484]: I0625 18:30:49.251306 2484 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:30:49.253040 kubelet[2484]: I0625 18:30:49.253020 2484 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:30:49.253633 kubelet[2484]: I0625 18:30:49.253609 2484 server.go:1232] "Started kubelet" Jun 25 18:30:49.254060 kubelet[2484]: I0625 18:30:49.254041 2484 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:30:49.254978 kubelet[2484]: I0625 18:30:49.254958 2484 server.go:462] "Adding debug handlers to kubelet server" Jun 25 18:30:49.255181 kubelet[2484]: I0625 18:30:49.255165 2484 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 18:30:49.269539 kubelet[2484]: I0625 18:30:49.264594 2484 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:30:49.270025 kubelet[2484]: I0625 18:30:49.269992 2484 server.go:233] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:30:49.270025 kubelet[2484]: I0625 18:30:49.270016 2484 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:30:49.270207 kubelet[2484]: I0625 18:30:49.270182 2484 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:30:49.270403 kubelet[2484]: I0625 18:30:49.270383 2484 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:30:49.272432 kubelet[2484]: E0625 18:30:49.272406 2484 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 18:30:49.272432 kubelet[2484]: E0625 18:30:49.272434 2484 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:30:49.279448 kubelet[2484]: I0625 18:30:49.278373 2484 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:30:49.283053 kubelet[2484]: I0625 18:30:49.282189 2484 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 18:30:49.283182 kubelet[2484]: I0625 18:30:49.283166 2484 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:30:49.283424 kubelet[2484]: I0625 18:30:49.283287 2484 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 18:30:49.284869 kubelet[2484]: E0625 18:30:49.284847 2484 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:30:49.339899 kubelet[2484]: I0625 18:30:49.339751 2484 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:30:49.340319 kubelet[2484]: I0625 18:30:49.340028 2484 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:30:49.340319 kubelet[2484]: I0625 18:30:49.340054 2484 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:30:49.340319 kubelet[2484]: I0625 18:30:49.340211 2484 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 18:30:49.340319 kubelet[2484]: I0625 18:30:49.340230 2484 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 18:30:49.340319 kubelet[2484]: I0625 18:30:49.340237 2484 policy_none.go:49] "None policy: Start" Jun 25 18:30:49.340955 kubelet[2484]: I0625 18:30:49.340885 2484 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 18:30:49.340955 kubelet[2484]: I0625 18:30:49.340913 2484 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:30:49.341134 kubelet[2484]: I0625 18:30:49.341068 2484 state_mem.go:75] "Updated machine memory state" Jun 25 18:30:49.345017 kubelet[2484]: I0625 18:30:49.344994 2484 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:30:49.345017 kubelet[2484]: I0625 18:30:49.345217 2484 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:30:49.373262 kubelet[2484]: I0625 18:30:49.373221 2484 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 18:30:49.379146 kubelet[2484]: I0625 18:30:49.379111 2484 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Jun 25 18:30:49.379230 kubelet[2484]: I0625 18:30:49.379176 2484 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jun 25 18:30:49.385661 kubelet[2484]: I0625 
18:30:49.385482 2484 topology_manager.go:215] "Topology Admit Handler" podUID="447025d2e3507eeda66fc658f3f30cbd" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 18:30:49.386656 kubelet[2484]: I0625 18:30:49.386593 2484 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 18:30:49.387407 kubelet[2484]: I0625 18:30:49.387382 2484 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 18:30:49.395349 kubelet[2484]: E0625 18:30:49.395225 2484 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jun 25 18:30:49.471818 kubelet[2484]: I0625 18:30:49.471716 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:30:49.471818 kubelet[2484]: I0625 18:30:49.471773 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:30:49.471818 kubelet[2484]: I0625 18:30:49.471796 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:30:49.471818 kubelet[2484]: I0625 18:30:49.471817 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/447025d2e3507eeda66fc658f3f30cbd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"447025d2e3507eeda66fc658f3f30cbd\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:30:49.472025 kubelet[2484]: I0625 18:30:49.471842 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:30:49.472025 kubelet[2484]: I0625 18:30:49.471862 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/447025d2e3507eeda66fc658f3f30cbd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"447025d2e3507eeda66fc658f3f30cbd\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:30:49.472025 kubelet[2484]: I0625 18:30:49.471880 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:30:49.472025 kubelet[2484]: I0625 18:30:49.471898 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jun 25 18:30:49.472025 kubelet[2484]: I0625 18:30:49.471920 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/447025d2e3507eeda66fc658f3f30cbd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"447025d2e3507eeda66fc658f3f30cbd\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:30:49.681796 sudo[2497]: pam_unix(sudo:session): session closed for user root Jun 25 18:30:49.695474 kubelet[2484]: E0625 18:30:49.695158 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:49.695474 kubelet[2484]: E0625 18:30:49.695369 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:49.695951 kubelet[2484]: E0625 18:30:49.695858 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:50.252437 kubelet[2484]: I0625 18:30:50.252403 2484 apiserver.go:52] "Watching apiserver" Jun 25 18:30:50.270517 kubelet[2484]: I0625 18:30:50.270487 2484 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:30:50.308243 kubelet[2484]: E0625 18:30:50.308151 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:50.317350 kubelet[2484]: E0625 18:30:50.317324 2484 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jun 25 18:30:50.317790 kubelet[2484]: E0625 18:30:50.317590 2484 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 25 18:30:50.320934 kubelet[2484]: E0625 18:30:50.320912 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:50.321077 kubelet[2484]: E0625 18:30:50.321063 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:50.343016 kubelet[2484]: I0625 18:30:50.342336 2484 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.342211698 podCreationTimestamp="2024-06-25 18:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:30:50.335654553 +0000 UTC 
m=+1.147017103" watchObservedRunningTime="2024-06-25 18:30:50.342211698 +0000 UTC m=+1.153574248" Jun 25 18:30:50.343131 kubelet[2484]: I0625 18:30:50.343093 2484 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.343066621 podCreationTimestamp="2024-06-25 18:30:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:30:50.342119337 +0000 UTC m=+1.153481887" watchObservedRunningTime="2024-06-25 18:30:50.343066621 +0000 UTC m=+1.154429171" Jun 25 18:30:50.359036 kubelet[2484]: I0625 18:30:50.359012 2484 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.358985721 podCreationTimestamp="2024-06-25 18:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:30:50.351391532 +0000 UTC m=+1.162754082" watchObservedRunningTime="2024-06-25 18:30:50.358985721 +0000 UTC m=+1.170348271" Jun 25 18:30:51.313058 kubelet[2484]: E0625 18:30:51.310374 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:51.313058 kubelet[2484]: E0625 18:30:51.310534 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:51.311939 sudo[1618]: pam_unix(sudo:session): session closed for user root Jun 25 18:30:51.314357 sshd[1614]: pam_unix(sshd:session): session closed for user core Jun 25 18:30:51.317044 systemd[1]: sshd@6-10.0.0.73:22-10.0.0.1:39268.service: Deactivated successfully. Jun 25 18:30:51.318920 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 18:30:51.319117 systemd[1]: session-7.scope: Consumed 7.705s CPU time, 136.4M memory peak, 0B memory swap peak. Jun 25 18:30:51.320331 systemd-logind[1424]: Session 7 logged out. Waiting for processes to exit. Jun 25 18:30:51.321438 systemd-logind[1424]: Removed session 7. 
Jun 25 18:30:51.350145 kubelet[2484]: E0625 18:30:51.350074 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:57.816669 kubelet[2484]: E0625 18:30:57.816629 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:58.322244 kubelet[2484]: E0625 18:30:58.322213 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:30:59.337367 kubelet[2484]: E0625 18:30:59.337333 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:00.325359 kubelet[2484]: E0625 18:31:00.325282 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:01.357512 kubelet[2484]: E0625 18:31:01.357482 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:02.145403 update_engine[1428]: I0625 18:31:02.145010 1428 update_attempter.cc:509] Updating boot flags... Jun 25 18:31:02.175801 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2571) Jun 25 18:31:02.202785 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2571) Jun 25 18:31:03.185095 kubelet[2484]: I0625 18:31:03.184436 2484 topology_manager.go:215] "Topology Admit Handler" podUID="998dc867-f3c8-4fa0-9cc6-74019186fe00" podNamespace="kube-system" podName="kube-proxy-plm72" Jun 25 18:31:03.191591 kubelet[2484]: I0625 18:31:03.191561 2484 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 18:31:03.192859 containerd[1444]: time="2024-06-25T18:31:03.191909662Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 18:31:03.193154 kubelet[2484]: I0625 18:31:03.192320 2484 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 18:31:03.196543 systemd[1]: Created slice kubepods-besteffort-pod998dc867_f3c8_4fa0_9cc6_74019186fe00.slice - libcontainer container kubepods-besteffort-pod998dc867_f3c8_4fa0_9cc6_74019186fe00.slice. Jun 25 18:31:03.200938 kubelet[2484]: I0625 18:31:03.200574 2484 topology_manager.go:215] "Topology Admit Handler" podUID="1671219b-d50a-40ca-b58b-ad54be33e035" podNamespace="kube-system" podName="cilium-9xrtj" Jun 25 18:31:03.215830 systemd[1]: Created slice kubepods-burstable-pod1671219b_d50a_40ca_b58b_ad54be33e035.slice - libcontainer container kubepods-burstable-pod1671219b_d50a_40ca_b58b_ad54be33e035.slice. 
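The slice names systemd just created encode both the pod's QoS class and its UID: kube-proxy-plm72 (UID 998dc867-f3c8-4fa0-9cc6-74019186fe00) lands under a kubepods-besteffort-pod...slice and cilium-9xrtj under a kubepods-burstable-pod...slice, with the dashes in the UID rewritten as underscores because the dash is the systemd cgroup driver's separator. A short sketch of that naming follows, checked against the two names in the log; it illustrates the convention and is not kubelet code.

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName builds the systemd slice name used for a pod cgroup:
    // dashes are the unit-name separator, so dashes in the UID become
    // underscores.
    func podSliceName(qosClass, podUID string) string {
        uid := strings.ReplaceAll(podUID, "-", "_")
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
    }

    func main() {
        fmt.Println(podSliceName("besteffort", "998dc867-f3c8-4fa0-9cc6-74019186fe00"))
        // kubepods-besteffort-pod998dc867_f3c8_4fa0_9cc6_74019186fe00.slice
        fmt.Println(podSliceName("burstable", "1671219b-d50a-40ca-b58b-ad54be33e035"))
        // kubepods-burstable-pod1671219b_d50a_40ca_b58b_ad54be33e035.slice
    }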
Jun 25 18:31:03.262861 kubelet[2484]: I0625 18:31:03.262803 2484 topology_manager.go:215] "Topology Admit Handler" podUID="dec21266-f02e-4653-86db-ab1e4352f453" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-d42fk" Jun 25 18:31:03.270608 systemd[1]: Created slice kubepods-besteffort-poddec21266_f02e_4653_86db_ab1e4352f453.slice - libcontainer container kubepods-besteffort-poddec21266_f02e_4653_86db_ab1e4352f453.slice. Jun 25 18:31:03.366746 kubelet[2484]: I0625 18:31:03.366592 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-hostproc\") pod \"cilium-9xrtj\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " pod="kube-system/cilium-9xrtj" Jun 25 18:31:03.366746 kubelet[2484]: I0625 18:31:03.366639 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-bpf-maps\") pod \"cilium-9xrtj\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " pod="kube-system/cilium-9xrtj" Jun 25 18:31:03.366746 kubelet[2484]: I0625 18:31:03.366662 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n5zs\" (UniqueName: \"kubernetes.io/projected/dec21266-f02e-4653-86db-ab1e4352f453-kube-api-access-5n5zs\") pod \"cilium-operator-6bc8ccdb58-d42fk\" (UID: \"dec21266-f02e-4653-86db-ab1e4352f453\") " pod="kube-system/cilium-operator-6bc8ccdb58-d42fk" Jun 25 18:31:03.366746 kubelet[2484]: I0625 18:31:03.366684 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thhs4\" (UniqueName: \"kubernetes.io/projected/998dc867-f3c8-4fa0-9cc6-74019186fe00-kube-api-access-thhs4\") pod \"kube-proxy-plm72\" (UID: \"998dc867-f3c8-4fa0-9cc6-74019186fe00\") " pod="kube-system/kube-proxy-plm72" Jun 25 18:31:03.366746 kubelet[2484]: I0625 18:31:03.366705 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dec21266-f02e-4653-86db-ab1e4352f453-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-d42fk\" (UID: \"dec21266-f02e-4653-86db-ab1e4352f453\") " pod="kube-system/cilium-operator-6bc8ccdb58-d42fk" Jun 25 18:31:03.367032 kubelet[2484]: I0625 18:31:03.366726 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/998dc867-f3c8-4fa0-9cc6-74019186fe00-xtables-lock\") pod \"kube-proxy-plm72\" (UID: \"998dc867-f3c8-4fa0-9cc6-74019186fe00\") " pod="kube-system/kube-proxy-plm72" Jun 25 18:31:03.367032 kubelet[2484]: I0625 18:31:03.366789 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-cilium-run\") pod \"cilium-9xrtj\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " pod="kube-system/cilium-9xrtj" Jun 25 18:31:03.367032 kubelet[2484]: I0625 18:31:03.366833 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-etc-cni-netd\") pod \"cilium-9xrtj\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " pod="kube-system/cilium-9xrtj" Jun 25 18:31:03.367032 
kubelet[2484]: I0625 18:31:03.366853 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-lib-modules\") pod \"cilium-9xrtj\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " pod="kube-system/cilium-9xrtj" Jun 25 18:31:03.367032 kubelet[2484]: I0625 18:31:03.366878 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-host-proc-sys-kernel\") pod \"cilium-9xrtj\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " pod="kube-system/cilium-9xrtj" Jun 25 18:31:03.367032 kubelet[2484]: I0625 18:31:03.366898 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-cilium-cgroup\") pod \"cilium-9xrtj\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " pod="kube-system/cilium-9xrtj" Jun 25 18:31:03.367155 kubelet[2484]: I0625 18:31:03.366924 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-cni-path\") pod \"cilium-9xrtj\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " pod="kube-system/cilium-9xrtj" Jun 25 18:31:03.367155 kubelet[2484]: I0625 18:31:03.366942 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-xtables-lock\") pod \"cilium-9xrtj\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " pod="kube-system/cilium-9xrtj" Jun 25 18:31:03.367155 kubelet[2484]: I0625 18:31:03.366966 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1671219b-d50a-40ca-b58b-ad54be33e035-hubble-tls\") pod \"cilium-9xrtj\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " pod="kube-system/cilium-9xrtj" Jun 25 18:31:03.367155 kubelet[2484]: I0625 18:31:03.366985 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45g8j\" (UniqueName: \"kubernetes.io/projected/1671219b-d50a-40ca-b58b-ad54be33e035-kube-api-access-45g8j\") pod \"cilium-9xrtj\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " pod="kube-system/cilium-9xrtj" Jun 25 18:31:03.367155 kubelet[2484]: I0625 18:31:03.367006 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/998dc867-f3c8-4fa0-9cc6-74019186fe00-kube-proxy\") pod \"kube-proxy-plm72\" (UID: \"998dc867-f3c8-4fa0-9cc6-74019186fe00\") " pod="kube-system/kube-proxy-plm72" Jun 25 18:31:03.367155 kubelet[2484]: I0625 18:31:03.367045 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1671219b-d50a-40ca-b58b-ad54be33e035-cilium-config-path\") pod \"cilium-9xrtj\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " pod="kube-system/cilium-9xrtj" Jun 25 18:31:03.367279 kubelet[2484]: I0625 18:31:03.367063 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/998dc867-f3c8-4fa0-9cc6-74019186fe00-lib-modules\") pod \"kube-proxy-plm72\" (UID: \"998dc867-f3c8-4fa0-9cc6-74019186fe00\") " pod="kube-system/kube-proxy-plm72" Jun 25 18:31:03.367279 kubelet[2484]: I0625 18:31:03.367083 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1671219b-d50a-40ca-b58b-ad54be33e035-clustermesh-secrets\") pod \"cilium-9xrtj\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " pod="kube-system/cilium-9xrtj" Jun 25 18:31:03.367279 kubelet[2484]: I0625 18:31:03.367103 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-host-proc-sys-net\") pod \"cilium-9xrtj\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " pod="kube-system/cilium-9xrtj" Jun 25 18:31:03.512069 kubelet[2484]: E0625 18:31:03.511909 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:03.513746 containerd[1444]: time="2024-06-25T18:31:03.513698027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-plm72,Uid:998dc867-f3c8-4fa0-9cc6-74019186fe00,Namespace:kube-system,Attempt:0,}" Jun 25 18:31:03.519112 kubelet[2484]: E0625 18:31:03.519068 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:03.521054 containerd[1444]: time="2024-06-25T18:31:03.520840200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9xrtj,Uid:1671219b-d50a-40ca-b58b-ad54be33e035,Namespace:kube-system,Attempt:0,}" Jun 25 18:31:03.535583 containerd[1444]: time="2024-06-25T18:31:03.535481028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:31:03.535661 containerd[1444]: time="2024-06-25T18:31:03.535547468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:03.536218 containerd[1444]: time="2024-06-25T18:31:03.535566348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:31:03.538397 containerd[1444]: time="2024-06-25T18:31:03.536150389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:03.545032 containerd[1444]: time="2024-06-25T18:31:03.544951125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:31:03.545238 containerd[1444]: time="2024-06-25T18:31:03.545049486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:03.545662 containerd[1444]: time="2024-06-25T18:31:03.545438406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:31:03.545662 containerd[1444]: time="2024-06-25T18:31:03.545459006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:03.556952 systemd[1]: Started cri-containerd-6111e57957940a585a355be4705a7c36b99cda63cef3a835503c9c5dd8926469.scope - libcontainer container 6111e57957940a585a355be4705a7c36b99cda63cef3a835503c9c5dd8926469. Jun 25 18:31:03.561696 systemd[1]: Started cri-containerd-ad1162466dee1402c9a86db4a8e25865f89a9b883d3b9d33438012cd04e872ba.scope - libcontainer container ad1162466dee1402c9a86db4a8e25865f89a9b883d3b9d33438012cd04e872ba. Jun 25 18:31:03.575966 kubelet[2484]: E0625 18:31:03.575937 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:03.576404 containerd[1444]: time="2024-06-25T18:31:03.576358704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-d42fk,Uid:dec21266-f02e-4653-86db-ab1e4352f453,Namespace:kube-system,Attempt:0,}" Jun 25 18:31:03.589155 containerd[1444]: time="2024-06-25T18:31:03.589113688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-plm72,Uid:998dc867-f3c8-4fa0-9cc6-74019186fe00,Namespace:kube-system,Attempt:0,} returns sandbox id \"6111e57957940a585a355be4705a7c36b99cda63cef3a835503c9c5dd8926469\"" Jun 25 18:31:03.589963 kubelet[2484]: E0625 18:31:03.589942 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:03.592619 containerd[1444]: time="2024-06-25T18:31:03.592422535Z" level=info msg="CreateContainer within sandbox \"6111e57957940a585a355be4705a7c36b99cda63cef3a835503c9c5dd8926469\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 18:31:03.598810 containerd[1444]: time="2024-06-25T18:31:03.598740866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9xrtj,Uid:1671219b-d50a-40ca-b58b-ad54be33e035,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad1162466dee1402c9a86db4a8e25865f89a9b883d3b9d33438012cd04e872ba\"" Jun 25 18:31:03.599980 kubelet[2484]: E0625 18:31:03.599949 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:03.603545 containerd[1444]: time="2024-06-25T18:31:03.603413075Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 25 18:31:03.612210 containerd[1444]: time="2024-06-25T18:31:03.612110372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:31:03.612210 containerd[1444]: time="2024-06-25T18:31:03.612167212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:03.612210 containerd[1444]: time="2024-06-25T18:31:03.612181772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:31:03.612210 containerd[1444]: time="2024-06-25T18:31:03.612192132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:03.618001 containerd[1444]: time="2024-06-25T18:31:03.617943303Z" level=info msg="CreateContainer within sandbox \"6111e57957940a585a355be4705a7c36b99cda63cef3a835503c9c5dd8926469\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c04f8577fc4bd3abd7c5f8e5fb654bffce2554a9c724e91012ce428d99296268\"" Jun 25 18:31:03.618931 containerd[1444]: time="2024-06-25T18:31:03.618904584Z" level=info msg="StartContainer for \"c04f8577fc4bd3abd7c5f8e5fb654bffce2554a9c724e91012ce428d99296268\"" Jun 25 18:31:03.634032 systemd[1]: Started cri-containerd-591f6bfb8061735b0104666310a96978d6c46f9f4ebc4aaab2cc49ad871a5ac8.scope - libcontainer container 591f6bfb8061735b0104666310a96978d6c46f9f4ebc4aaab2cc49ad871a5ac8. Jun 25 18:31:03.650971 systemd[1]: Started cri-containerd-c04f8577fc4bd3abd7c5f8e5fb654bffce2554a9c724e91012ce428d99296268.scope - libcontainer container c04f8577fc4bd3abd7c5f8e5fb654bffce2554a9c724e91012ce428d99296268. Jun 25 18:31:03.675900 containerd[1444]: time="2024-06-25T18:31:03.675866491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-d42fk,Uid:dec21266-f02e-4653-86db-ab1e4352f453,Namespace:kube-system,Attempt:0,} returns sandbox id \"591f6bfb8061735b0104666310a96978d6c46f9f4ebc4aaab2cc49ad871a5ac8\"" Jun 25 18:31:03.676419 kubelet[2484]: E0625 18:31:03.676388 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:03.682130 containerd[1444]: time="2024-06-25T18:31:03.682038983Z" level=info msg="StartContainer for \"c04f8577fc4bd3abd7c5f8e5fb654bffce2554a9c724e91012ce428d99296268\" returns successfully" Jun 25 18:31:04.334995 kubelet[2484]: E0625 18:31:04.334944 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:04.346909 kubelet[2484]: I0625 18:31:04.346847 2484 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-plm72" podStartSLOduration=1.346802522 podCreationTimestamp="2024-06-25 18:31:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:31:04.34559332 +0000 UTC m=+15.156955870" watchObservedRunningTime="2024-06-25 18:31:04.346802522 +0000 UTC m=+15.158165152" Jun 25 18:31:09.665898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1443318400.mount: Deactivated successfully. 
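The mount unit that was just deactivated, var-lib-containerd-tmpmounts-containerd\x2dmount1443318400.mount, shows systemd's unit-name escaping: the path /var/lib/containerd/tmpmounts/containerd-mount1443318400 has its slashes turned into dashes, so the literal dash inside the last component has to be escaped as \x2d. Below is a simplified sketch of that path escaping (what systemd-escape --path does), sufficient for names like this one; the real rules also cover leading dots and further corner cases.

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath is a simplified version of systemd's path escaping: strip the
    // surrounding slashes, escape bytes outside [A-Za-z0-9_.] (including '-')
    // as \xNN, and turn the remaining path separators into dashes.
    func escapePath(p string) string {
        p = strings.Trim(p, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-')
            case c == '.' || c == '_' ||
                (c >= '0' && c <= '9') || (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z'):
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c)
            }
        }
        return b.String()
    }

    func main() {
        fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount1443318400") + ".mount")
        // var-lib-containerd-tmpmounts-containerd\x2dmount1443318400.mount
    }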
Jun 25 18:31:11.911560 containerd[1444]: time="2024-06-25T18:31:11.911506944Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:11.912473 containerd[1444]: time="2024-06-25T18:31:11.912281185Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651458" Jun 25 18:31:11.913380 containerd[1444]: time="2024-06-25T18:31:11.913115866Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:11.914794 containerd[1444]: time="2024-06-25T18:31:11.914714748Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.311263153s" Jun 25 18:31:11.914794 containerd[1444]: time="2024-06-25T18:31:11.914752828Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jun 25 18:31:11.915641 containerd[1444]: time="2024-06-25T18:31:11.915600309Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 25 18:31:11.921517 containerd[1444]: time="2024-06-25T18:31:11.921473717Z" level=info msg="CreateContainer within sandbox \"ad1162466dee1402c9a86db4a8e25865f89a9b883d3b9d33438012cd04e872ba\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 25 18:31:11.934480 containerd[1444]: time="2024-06-25T18:31:11.934443134Z" level=info msg="CreateContainer within sandbox \"ad1162466dee1402c9a86db4a8e25865f89a9b883d3b9d33438012cd04e872ba\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d5f6a5b5d439d547a76635af896ececf752b8bad739f50e2f5c093292a53a916\"" Jun 25 18:31:11.934914 containerd[1444]: time="2024-06-25T18:31:11.934887094Z" level=info msg="StartContainer for \"d5f6a5b5d439d547a76635af896ececf752b8bad739f50e2f5c093292a53a916\"" Jun 25 18:31:11.959939 systemd[1]: Started cri-containerd-d5f6a5b5d439d547a76635af896ececf752b8bad739f50e2f5c093292a53a916.scope - libcontainer container d5f6a5b5d439d547a76635af896ececf752b8bad739f50e2f5c093292a53a916. Jun 25 18:31:11.981340 containerd[1444]: time="2024-06-25T18:31:11.981291235Z" level=info msg="StartContainer for \"d5f6a5b5d439d547a76635af896ececf752b8bad739f50e2f5c093292a53a916\" returns successfully" Jun 25 18:31:12.025830 systemd[1]: cri-containerd-d5f6a5b5d439d547a76635af896ececf752b8bad739f50e2f5c093292a53a916.scope: Deactivated successfully. 
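The cilium pull that started at 18:31:03 completes here. The reference carried both a tag and a digest (quay.io/cilium/cilium:v1.12.5@sha256:06ce2b...), and with a digest present the pull resolves by digest, which is why the result records a repo digest but an empty repo tag. A small sketch that splits such a reference into its parts is below; it handles only this repo[:tag][@digest] shape, not every valid image reference.

    package main

    import (
        "fmt"
        "strings"
    )

    // splitRef breaks an image reference of the form repo[:tag][@sha256:...]
    // into repository, tag and digest. Only this simple shape is handled.
    func splitRef(ref string) (repo, tag, digest string) {
        if i := strings.Index(ref, "@"); i >= 0 {
            ref, digest = ref[:i], ref[i+1:]
        }
        // A ':' after the last '/' separates the tag (registry ports sit before a '/').
        if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
            ref, tag = ref[:i], ref[i+1:]
        }
        return ref, tag, digest
    }

    func main() {
        repo, tag, digest := splitRef(
            "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
        fmt.Println(repo)   // quay.io/cilium/cilium
        fmt.Println(tag)    // v1.12.5
        fmt.Println(digest) // sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5
    }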
Jun 25 18:31:12.225143 containerd[1444]: time="2024-06-25T18:31:12.225011780Z" level=info msg="shim disconnected" id=d5f6a5b5d439d547a76635af896ececf752b8bad739f50e2f5c093292a53a916 namespace=k8s.io Jun 25 18:31:12.225143 containerd[1444]: time="2024-06-25T18:31:12.225060820Z" level=warning msg="cleaning up after shim disconnected" id=d5f6a5b5d439d547a76635af896ececf752b8bad739f50e2f5c093292a53a916 namespace=k8s.io Jun 25 18:31:12.225143 containerd[1444]: time="2024-06-25T18:31:12.225069660Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:31:12.370683 kubelet[2484]: E0625 18:31:12.370637 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:12.372946 containerd[1444]: time="2024-06-25T18:31:12.372900044Z" level=info msg="CreateContainer within sandbox \"ad1162466dee1402c9a86db4a8e25865f89a9b883d3b9d33438012cd04e872ba\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 25 18:31:12.382175 containerd[1444]: time="2024-06-25T18:31:12.382123256Z" level=info msg="CreateContainer within sandbox \"ad1162466dee1402c9a86db4a8e25865f89a9b883d3b9d33438012cd04e872ba\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b04e1f01beedf094f703dc6a15a6e7cc737ff103d36dfcfb4681268b9e92f43b\"" Jun 25 18:31:12.382641 containerd[1444]: time="2024-06-25T18:31:12.382616536Z" level=info msg="StartContainer for \"b04e1f01beedf094f703dc6a15a6e7cc737ff103d36dfcfb4681268b9e92f43b\"" Jun 25 18:31:12.406914 systemd[1]: Started cri-containerd-b04e1f01beedf094f703dc6a15a6e7cc737ff103d36dfcfb4681268b9e92f43b.scope - libcontainer container b04e1f01beedf094f703dc6a15a6e7cc737ff103d36dfcfb4681268b9e92f43b. Jun 25 18:31:12.428794 containerd[1444]: time="2024-06-25T18:31:12.427871913Z" level=info msg="StartContainer for \"b04e1f01beedf094f703dc6a15a6e7cc737ff103d36dfcfb4681268b9e92f43b\" returns successfully" Jun 25 18:31:12.447695 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:31:12.448006 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:31:12.448069 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:31:12.455166 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:31:12.455372 systemd[1]: cri-containerd-b04e1f01beedf094f703dc6a15a6e7cc737ff103d36dfcfb4681268b9e92f43b.scope: Deactivated successfully. Jun 25 18:31:12.471632 containerd[1444]: time="2024-06-25T18:31:12.471544567Z" level=info msg="shim disconnected" id=b04e1f01beedf094f703dc6a15a6e7cc737ff103d36dfcfb4681268b9e92f43b namespace=k8s.io Jun 25 18:31:12.471632 containerd[1444]: time="2024-06-25T18:31:12.471628127Z" level=warning msg="cleaning up after shim disconnected" id=b04e1f01beedf094f703dc6a15a6e7cc737ff103d36dfcfb4681268b9e92f43b namespace=k8s.io Jun 25 18:31:12.471632 containerd[1444]: time="2024-06-25T18:31:12.471638087Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:31:12.487837 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:31:12.929781 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5f6a5b5d439d547a76635af896ececf752b8bad739f50e2f5c093292a53a916-rootfs.mount: Deactivated successfully. 
Jun 25 18:31:13.121580 containerd[1444]: time="2024-06-25T18:31:13.121521292Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:13.122603 containerd[1444]: time="2024-06-25T18:31:13.122558813Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138262" Jun 25 18:31:13.123585 containerd[1444]: time="2024-06-25T18:31:13.123546294Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:31:13.125159 containerd[1444]: time="2024-06-25T18:31:13.125122536Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.209473347s" Jun 25 18:31:13.125196 containerd[1444]: time="2024-06-25T18:31:13.125161176Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jun 25 18:31:13.127788 containerd[1444]: time="2024-06-25T18:31:13.127733939Z" level=info msg="CreateContainer within sandbox \"591f6bfb8061735b0104666310a96978d6c46f9f4ebc4aaab2cc49ad871a5ac8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 25 18:31:13.142510 containerd[1444]: time="2024-06-25T18:31:13.142466957Z" level=info msg="CreateContainer within sandbox \"591f6bfb8061735b0104666310a96978d6c46f9f4ebc4aaab2cc49ad871a5ac8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ff957549db3a9155eba1052df1f78dd05c8c95c2d1a2c4e94bee26185dd62c57\"" Jun 25 18:31:13.143138 containerd[1444]: time="2024-06-25T18:31:13.142907317Z" level=info msg="StartContainer for \"ff957549db3a9155eba1052df1f78dd05c8c95c2d1a2c4e94bee26185dd62c57\"" Jun 25 18:31:13.169925 systemd[1]: Started cri-containerd-ff957549db3a9155eba1052df1f78dd05c8c95c2d1a2c4e94bee26185dd62c57.scope - libcontainer container ff957549db3a9155eba1052df1f78dd05c8c95c2d1a2c4e94bee26185dd62c57. 
Jun 25 18:31:13.194116 containerd[1444]: time="2024-06-25T18:31:13.194022298Z" level=info msg="StartContainer for \"ff957549db3a9155eba1052df1f78dd05c8c95c2d1a2c4e94bee26185dd62c57\" returns successfully" Jun 25 18:31:13.373638 kubelet[2484]: E0625 18:31:13.373328 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:13.377432 kubelet[2484]: E0625 18:31:13.377044 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:13.378944 containerd[1444]: time="2024-06-25T18:31:13.378896000Z" level=info msg="CreateContainer within sandbox \"ad1162466dee1402c9a86db4a8e25865f89a9b883d3b9d33438012cd04e872ba\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 25 18:31:13.389230 kubelet[2484]: I0625 18:31:13.389186 2484 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-d42fk" podStartSLOduration=0.94090017 podCreationTimestamp="2024-06-25 18:31:03 +0000 UTC" firstStartedPulling="2024-06-25 18:31:03.677154334 +0000 UTC m=+14.488516884" lastFinishedPulling="2024-06-25 18:31:13.125405856 +0000 UTC m=+23.936768406" observedRunningTime="2024-06-25 18:31:13.388330011 +0000 UTC m=+24.199692561" watchObservedRunningTime="2024-06-25 18:31:13.389151692 +0000 UTC m=+24.200514242" Jun 25 18:31:13.421911 containerd[1444]: time="2024-06-25T18:31:13.421860451Z" level=info msg="CreateContainer within sandbox \"ad1162466dee1402c9a86db4a8e25865f89a9b883d3b9d33438012cd04e872ba\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9a9fbe2fb648e6107141f9eed7ff8e0e9c3251ae5672121bd757046db44d1692\"" Jun 25 18:31:13.424628 containerd[1444]: time="2024-06-25T18:31:13.422634412Z" level=info msg="StartContainer for \"9a9fbe2fb648e6107141f9eed7ff8e0e9c3251ae5672121bd757046db44d1692\"" Jun 25 18:31:13.454960 systemd[1]: Started cri-containerd-9a9fbe2fb648e6107141f9eed7ff8e0e9c3251ae5672121bd757046db44d1692.scope - libcontainer container 9a9fbe2fb648e6107141f9eed7ff8e0e9c3251ae5672121bd757046db44d1692. Jun 25 18:31:13.493894 containerd[1444]: time="2024-06-25T18:31:13.493837657Z" level=info msg="StartContainer for \"9a9fbe2fb648e6107141f9eed7ff8e0e9c3251ae5672121bd757046db44d1692\" returns successfully" Jun 25 18:31:13.513110 systemd[1]: cri-containerd-9a9fbe2fb648e6107141f9eed7ff8e0e9c3251ae5672121bd757046db44d1692.scope: Deactivated successfully. 
Jun 25 18:31:13.537866 containerd[1444]: time="2024-06-25T18:31:13.537804390Z" level=info msg="shim disconnected" id=9a9fbe2fb648e6107141f9eed7ff8e0e9c3251ae5672121bd757046db44d1692 namespace=k8s.io Jun 25 18:31:13.545894 containerd[1444]: time="2024-06-25T18:31:13.545830320Z" level=warning msg="cleaning up after shim disconnected" id=9a9fbe2fb648e6107141f9eed7ff8e0e9c3251ae5672121bd757046db44d1692 namespace=k8s.io Jun 25 18:31:13.545894 containerd[1444]: time="2024-06-25T18:31:13.545872720Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:31:14.381792 kubelet[2484]: E0625 18:31:14.380323 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:14.381792 kubelet[2484]: E0625 18:31:14.380332 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:14.382495 containerd[1444]: time="2024-06-25T18:31:14.382454223Z" level=info msg="CreateContainer within sandbox \"ad1162466dee1402c9a86db4a8e25865f89a9b883d3b9d33438012cd04e872ba\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 25 18:31:14.398451 containerd[1444]: time="2024-06-25T18:31:14.398392601Z" level=info msg="CreateContainer within sandbox \"ad1162466dee1402c9a86db4a8e25865f89a9b883d3b9d33438012cd04e872ba\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bdbe136d370a6ff37f2b98aa867dd4597c756280ad3e35a90e6e44111a745962\"" Jun 25 18:31:14.399098 containerd[1444]: time="2024-06-25T18:31:14.399053842Z" level=info msg="StartContainer for \"bdbe136d370a6ff37f2b98aa867dd4597c756280ad3e35a90e6e44111a745962\"" Jun 25 18:31:14.426957 systemd[1]: Started cri-containerd-bdbe136d370a6ff37f2b98aa867dd4597c756280ad3e35a90e6e44111a745962.scope - libcontainer container bdbe136d370a6ff37f2b98aa867dd4597c756280ad3e35a90e6e44111a745962. Jun 25 18:31:14.447658 containerd[1444]: time="2024-06-25T18:31:14.447614538Z" level=info msg="StartContainer for \"bdbe136d370a6ff37f2b98aa867dd4597c756280ad3e35a90e6e44111a745962\" returns successfully" Jun 25 18:31:14.448210 systemd[1]: cri-containerd-bdbe136d370a6ff37f2b98aa867dd4597c756280ad3e35a90e6e44111a745962.scope: Deactivated successfully. Jun 25 18:31:14.463647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdbe136d370a6ff37f2b98aa867dd4597c756280ad3e35a90e6e44111a745962-rootfs.mount: Deactivated successfully. 
Jun 25 18:31:14.469126 containerd[1444]: time="2024-06-25T18:31:14.469036802Z" level=info msg="shim disconnected" id=bdbe136d370a6ff37f2b98aa867dd4597c756280ad3e35a90e6e44111a745962 namespace=k8s.io Jun 25 18:31:14.469126 containerd[1444]: time="2024-06-25T18:31:14.469102602Z" level=warning msg="cleaning up after shim disconnected" id=bdbe136d370a6ff37f2b98aa867dd4597c756280ad3e35a90e6e44111a745962 namespace=k8s.io Jun 25 18:31:14.469782 containerd[1444]: time="2024-06-25T18:31:14.469112242Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:31:15.384451 kubelet[2484]: E0625 18:31:15.384423 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:15.388806 containerd[1444]: time="2024-06-25T18:31:15.388234802Z" level=info msg="CreateContainer within sandbox \"ad1162466dee1402c9a86db4a8e25865f89a9b883d3b9d33438012cd04e872ba\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 25 18:31:15.412065 containerd[1444]: time="2024-06-25T18:31:15.412011748Z" level=info msg="CreateContainer within sandbox \"ad1162466dee1402c9a86db4a8e25865f89a9b883d3b9d33438012cd04e872ba\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53\"" Jun 25 18:31:15.414097 containerd[1444]: time="2024-06-25T18:31:15.412697709Z" level=info msg="StartContainer for \"56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53\"" Jun 25 18:31:15.442945 systemd[1]: Started cri-containerd-56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53.scope - libcontainer container 56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53. Jun 25 18:31:15.471272 containerd[1444]: time="2024-06-25T18:31:15.471153894Z" level=info msg="StartContainer for \"56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53\" returns successfully" Jun 25 18:31:15.605779 kubelet[2484]: I0625 18:31:15.605729 2484 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 18:31:15.631406 kubelet[2484]: I0625 18:31:15.631179 2484 topology_manager.go:215] "Topology Admit Handler" podUID="62eab637-9f69-4869-9e46-f7f183009a9a" podNamespace="kube-system" podName="coredns-5dd5756b68-h6cfr" Jun 25 18:31:15.633511 kubelet[2484]: I0625 18:31:15.633246 2484 topology_manager.go:215] "Topology Admit Handler" podUID="277f5745-b99b-46b9-8da5-e437eb6cec73" podNamespace="kube-system" podName="coredns-5dd5756b68-6l65s" Jun 25 18:31:15.649180 systemd[1]: Created slice kubepods-burstable-pod62eab637_9f69_4869_9e46_f7f183009a9a.slice - libcontainer container kubepods-burstable-pod62eab637_9f69_4869_9e46_f7f183009a9a.slice. Jun 25 18:31:15.657216 systemd[1]: Created slice kubepods-burstable-pod277f5745_b99b_46b9_8da5_e437eb6cec73.slice - libcontainer container kubepods-burstable-pod277f5745_b99b_46b9_8da5_e437eb6cec73.slice. 
Jun 25 18:31:15.753099 kubelet[2484]: I0625 18:31:15.753060 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xvfw\" (UniqueName: \"kubernetes.io/projected/277f5745-b99b-46b9-8da5-e437eb6cec73-kube-api-access-7xvfw\") pod \"coredns-5dd5756b68-6l65s\" (UID: \"277f5745-b99b-46b9-8da5-e437eb6cec73\") " pod="kube-system/coredns-5dd5756b68-6l65s" Jun 25 18:31:15.753099 kubelet[2484]: I0625 18:31:15.753110 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62eab637-9f69-4869-9e46-f7f183009a9a-config-volume\") pod \"coredns-5dd5756b68-h6cfr\" (UID: \"62eab637-9f69-4869-9e46-f7f183009a9a\") " pod="kube-system/coredns-5dd5756b68-h6cfr" Jun 25 18:31:15.753263 kubelet[2484]: I0625 18:31:15.753131 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw4xk\" (UniqueName: \"kubernetes.io/projected/62eab637-9f69-4869-9e46-f7f183009a9a-kube-api-access-kw4xk\") pod \"coredns-5dd5756b68-h6cfr\" (UID: \"62eab637-9f69-4869-9e46-f7f183009a9a\") " pod="kube-system/coredns-5dd5756b68-h6cfr" Jun 25 18:31:15.753263 kubelet[2484]: I0625 18:31:15.753151 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/277f5745-b99b-46b9-8da5-e437eb6cec73-config-volume\") pod \"coredns-5dd5756b68-6l65s\" (UID: \"277f5745-b99b-46b9-8da5-e437eb6cec73\") " pod="kube-system/coredns-5dd5756b68-6l65s" Jun 25 18:31:15.954630 kubelet[2484]: E0625 18:31:15.954490 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:15.959568 containerd[1444]: time="2024-06-25T18:31:15.958943873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-h6cfr,Uid:62eab637-9f69-4869-9e46-f7f183009a9a,Namespace:kube-system,Attempt:0,}" Jun 25 18:31:15.960155 kubelet[2484]: E0625 18:31:15.960124 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:15.960828 containerd[1444]: time="2024-06-25T18:31:15.960756155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-6l65s,Uid:277f5745-b99b-46b9-8da5-e437eb6cec73,Namespace:kube-system,Attempt:0,}" Jun 25 18:31:16.389354 kubelet[2484]: E0625 18:31:16.389205 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:16.401857 kubelet[2484]: I0625 18:31:16.401808 2484 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9xrtj" podStartSLOduration=5.089579671 podCreationTimestamp="2024-06-25 18:31:03 +0000 UTC" firstStartedPulling="2024-06-25 18:31:03.602957194 +0000 UTC m=+14.414319744" lastFinishedPulling="2024-06-25 18:31:11.915149189 +0000 UTC m=+22.726511739" observedRunningTime="2024-06-25 18:31:16.401065946 +0000 UTC m=+27.212428496" watchObservedRunningTime="2024-06-25 18:31:16.401771666 +0000 UTC m=+27.213134216" Jun 25 18:31:17.391156 kubelet[2484]: E0625 18:31:17.391111 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:17.760166 systemd-networkd[1378]: cilium_host: Link UP Jun 25 18:31:17.760305 systemd-networkd[1378]: cilium_net: Link UP Jun 25 18:31:17.760308 systemd-networkd[1378]: cilium_net: Gained carrier Jun 25 18:31:17.760427 systemd-networkd[1378]: cilium_host: Gained carrier Jun 25 18:31:17.762195 systemd-networkd[1378]: cilium_host: Gained IPv6LL Jun 25 18:31:17.836772 systemd-networkd[1378]: cilium_vxlan: Link UP Jun 25 18:31:17.836779 systemd-networkd[1378]: cilium_vxlan: Gained carrier Jun 25 18:31:18.120807 kernel: NET: Registered PF_ALG protocol family Jun 25 18:31:18.392809 kubelet[2484]: E0625 18:31:18.392741 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:18.661910 systemd-networkd[1378]: lxc_health: Link UP Jun 25 18:31:18.675861 systemd-networkd[1378]: lxc_health: Gained carrier Jun 25 18:31:18.712913 systemd-networkd[1378]: cilium_net: Gained IPv6LL Jun 25 18:31:19.032923 systemd-networkd[1378]: cilium_vxlan: Gained IPv6LL Jun 25 18:31:19.129030 systemd-networkd[1378]: lxca582b3bf6ab9: Link UP Jun 25 18:31:19.143787 kernel: eth0: renamed from tmp01b85 Jun 25 18:31:19.154088 systemd-networkd[1378]: lxca582b3bf6ab9: Gained carrier Jun 25 18:31:19.159082 systemd-networkd[1378]: lxcd2602e12ffef: Link UP Jun 25 18:31:19.168787 kernel: eth0: renamed from tmp8fd83 Jun 25 18:31:19.174586 systemd-networkd[1378]: lxcd2602e12ffef: Gained carrier Jun 25 18:31:19.523284 kubelet[2484]: E0625 18:31:19.523235 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:19.929182 systemd-networkd[1378]: lxc_health: Gained IPv6LL Jun 25 18:31:20.377959 systemd-networkd[1378]: lxca582b3bf6ab9: Gained IPv6LL Jun 25 18:31:20.696978 systemd-networkd[1378]: lxcd2602e12ffef: Gained IPv6LL Jun 25 18:31:21.083397 systemd[1]: Started sshd@7-10.0.0.73:22-10.0.0.1:43778.service - OpenSSH per-connection server daemon (10.0.0.1:43778). Jun 25 18:31:21.121131 sshd[3708]: Accepted publickey for core from 10.0.0.1 port 43778 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:31:21.122479 sshd[3708]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:31:21.127589 systemd-logind[1424]: New session 8 of user core. Jun 25 18:31:21.144908 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 18:31:21.282715 sshd[3708]: pam_unix(sshd:session): session closed for user core Jun 25 18:31:21.285937 systemd[1]: sshd@7-10.0.0.73:22-10.0.0.1:43778.service: Deactivated successfully. Jun 25 18:31:21.288317 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 18:31:21.289203 systemd-logind[1424]: Session 8 logged out. Waiting for processes to exit. Jun 25 18:31:21.290167 systemd-logind[1424]: Removed session 8. Jun 25 18:31:22.612193 containerd[1444]: time="2024-06-25T18:31:22.611971498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:31:22.612193 containerd[1444]: time="2024-06-25T18:31:22.612030578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:22.612193 containerd[1444]: time="2024-06-25T18:31:22.612044418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:31:22.612193 containerd[1444]: time="2024-06-25T18:31:22.612054058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:22.612752 containerd[1444]: time="2024-06-25T18:31:22.612093778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:31:22.612752 containerd[1444]: time="2024-06-25T18:31:22.612318218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:22.612752 containerd[1444]: time="2024-06-25T18:31:22.612339538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:31:22.612752 containerd[1444]: time="2024-06-25T18:31:22.612367338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:31:22.641924 systemd[1]: Started cri-containerd-01b855ad8b935d8a4f3d3c65829fb2b7db0879404020607ec5bc672ed6c13cca.scope - libcontainer container 01b855ad8b935d8a4f3d3c65829fb2b7db0879404020607ec5bc672ed6c13cca. Jun 25 18:31:22.643226 systemd[1]: Started cri-containerd-8fd830181f3255365ae4fdf495af431c4ba69177949e489e7b25309408bb5a1d.scope - libcontainer container 8fd830181f3255365ae4fdf495af431c4ba69177949e489e7b25309408bb5a1d. Jun 25 18:31:22.653628 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:31:22.659300 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:31:22.673319 containerd[1444]: time="2024-06-25T18:31:22.673281871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-6l65s,Uid:277f5745-b99b-46b9-8da5-e437eb6cec73,Namespace:kube-system,Attempt:0,} returns sandbox id \"01b855ad8b935d8a4f3d3c65829fb2b7db0879404020607ec5bc672ed6c13cca\"" Jun 25 18:31:22.674414 kubelet[2484]: E0625 18:31:22.674378 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:22.677859 containerd[1444]: time="2024-06-25T18:31:22.677588034Z" level=info msg="CreateContainer within sandbox \"01b855ad8b935d8a4f3d3c65829fb2b7db0879404020607ec5bc672ed6c13cca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:31:22.685718 containerd[1444]: time="2024-06-25T18:31:22.685685881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-h6cfr,Uid:62eab637-9f69-4869-9e46-f7f183009a9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fd830181f3255365ae4fdf495af431c4ba69177949e489e7b25309408bb5a1d\"" Jun 25 18:31:22.686742 kubelet[2484]: E0625 18:31:22.686725 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:22.688524 containerd[1444]: time="2024-06-25T18:31:22.688448484Z" level=info msg="CreateContainer within sandbox 
\"8fd830181f3255365ae4fdf495af431c4ba69177949e489e7b25309408bb5a1d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:31:22.703984 containerd[1444]: time="2024-06-25T18:31:22.703946137Z" level=info msg="CreateContainer within sandbox \"8fd830181f3255365ae4fdf495af431c4ba69177949e489e7b25309408bb5a1d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a0ad01b7cbcc63cf5481bb24ef003d0c344e24cb3b583de11192989c5c3cdb21\"" Jun 25 18:31:22.705429 containerd[1444]: time="2024-06-25T18:31:22.704343497Z" level=info msg="StartContainer for \"a0ad01b7cbcc63cf5481bb24ef003d0c344e24cb3b583de11192989c5c3cdb21\"" Jun 25 18:31:22.726481 containerd[1444]: time="2024-06-25T18:31:22.726439837Z" level=info msg="CreateContainer within sandbox \"01b855ad8b935d8a4f3d3c65829fb2b7db0879404020607ec5bc672ed6c13cca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"77d5a14e1fc17f7810c0be26b7e1b1eb7f3dfcdb170eebb951fd0dc9110166da\"" Jun 25 18:31:22.726927 containerd[1444]: time="2024-06-25T18:31:22.726809037Z" level=info msg="StartContainer for \"77d5a14e1fc17f7810c0be26b7e1b1eb7f3dfcdb170eebb951fd0dc9110166da\"" Jun 25 18:31:22.731918 systemd[1]: Started cri-containerd-a0ad01b7cbcc63cf5481bb24ef003d0c344e24cb3b583de11192989c5c3cdb21.scope - libcontainer container a0ad01b7cbcc63cf5481bb24ef003d0c344e24cb3b583de11192989c5c3cdb21. Jun 25 18:31:22.750018 systemd[1]: Started cri-containerd-77d5a14e1fc17f7810c0be26b7e1b1eb7f3dfcdb170eebb951fd0dc9110166da.scope - libcontainer container 77d5a14e1fc17f7810c0be26b7e1b1eb7f3dfcdb170eebb951fd0dc9110166da. Jun 25 18:31:22.766210 containerd[1444]: time="2024-06-25T18:31:22.766168711Z" level=info msg="StartContainer for \"a0ad01b7cbcc63cf5481bb24ef003d0c344e24cb3b583de11192989c5c3cdb21\" returns successfully" Jun 25 18:31:22.780292 containerd[1444]: time="2024-06-25T18:31:22.780181083Z" level=info msg="StartContainer for \"77d5a14e1fc17f7810c0be26b7e1b1eb7f3dfcdb170eebb951fd0dc9110166da\" returns successfully" Jun 25 18:31:23.404410 kubelet[2484]: E0625 18:31:23.404369 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:23.410045 kubelet[2484]: E0625 18:31:23.410016 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:23.424180 kubelet[2484]: I0625 18:31:23.424144 2484 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-h6cfr" podStartSLOduration=20.424109269 podCreationTimestamp="2024-06-25 18:31:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:31:23.421866507 +0000 UTC m=+34.233229017" watchObservedRunningTime="2024-06-25 18:31:23.424109269 +0000 UTC m=+34.235471819" Jun 25 18:31:23.424608 kubelet[2484]: I0625 18:31:23.424223 2484 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-6l65s" podStartSLOduration=20.424207229 podCreationTimestamp="2024-06-25 18:31:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:31:23.41328502 +0000 UTC m=+34.224647570" watchObservedRunningTime="2024-06-25 18:31:23.424207229 +0000 UTC m=+34.235569779" Jun 25 
18:31:23.619747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount511972638.mount: Deactivated successfully. Jun 25 18:31:24.410213 kubelet[2484]: E0625 18:31:24.410136 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:24.410213 kubelet[2484]: E0625 18:31:24.410186 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:25.429082 kubelet[2484]: E0625 18:31:25.429055 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:26.300280 systemd[1]: Started sshd@8-10.0.0.73:22-10.0.0.1:43788.service - OpenSSH per-connection server daemon (10.0.0.1:43788). Jun 25 18:31:26.341087 sshd[3897]: Accepted publickey for core from 10.0.0.1 port 43788 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:31:26.342568 sshd[3897]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:31:26.346278 systemd-logind[1424]: New session 9 of user core. Jun 25 18:31:26.356930 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 18:31:26.485452 sshd[3897]: pam_unix(sshd:session): session closed for user core Jun 25 18:31:26.488755 systemd[1]: sshd@8-10.0.0.73:22-10.0.0.1:43788.service: Deactivated successfully. Jun 25 18:31:26.490595 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 18:31:26.491395 systemd-logind[1424]: Session 9 logged out. Waiting for processes to exit. Jun 25 18:31:26.492067 systemd-logind[1424]: Removed session 9. Jun 25 18:31:27.133505 kubelet[2484]: I0625 18:31:27.131686 2484 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:31:27.133505 kubelet[2484]: E0625 18:31:27.133106 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:27.432798 kubelet[2484]: E0625 18:31:27.432673 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:31:31.496339 systemd[1]: Started sshd@9-10.0.0.73:22-10.0.0.1:37576.service - OpenSSH per-connection server daemon (10.0.0.1:37576). Jun 25 18:31:31.531585 sshd[3914]: Accepted publickey for core from 10.0.0.1 port 37576 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:31:31.533122 sshd[3914]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:31:31.536837 systemd-logind[1424]: New session 10 of user core. Jun 25 18:31:31.546904 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 18:31:31.656371 sshd[3914]: pam_unix(sshd:session): session closed for user core Jun 25 18:31:31.659409 systemd[1]: sshd@9-10.0.0.73:22-10.0.0.1:37576.service: Deactivated successfully. Jun 25 18:31:31.661222 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 18:31:31.661911 systemd-logind[1424]: Session 10 logged out. Waiting for processes to exit. Jun 25 18:31:31.662600 systemd-logind[1424]: Removed session 10. 
Jun 25 18:31:36.668441 systemd[1]: Started sshd@10-10.0.0.73:22-10.0.0.1:37584.service - OpenSSH per-connection server daemon (10.0.0.1:37584). Jun 25 18:31:36.700803 sshd[3932]: Accepted publickey for core from 10.0.0.1 port 37584 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:31:36.701944 sshd[3932]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:31:36.705195 systemd-logind[1424]: New session 11 of user core. Jun 25 18:31:36.716918 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 18:31:36.824904 sshd[3932]: pam_unix(sshd:session): session closed for user core Jun 25 18:31:36.838271 systemd[1]: sshd@10-10.0.0.73:22-10.0.0.1:37584.service: Deactivated successfully. Jun 25 18:31:36.839836 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 18:31:36.841104 systemd-logind[1424]: Session 11 logged out. Waiting for processes to exit. Jun 25 18:31:36.843035 systemd[1]: Started sshd@11-10.0.0.73:22-10.0.0.1:37590.service - OpenSSH per-connection server daemon (10.0.0.1:37590). Jun 25 18:31:36.844028 systemd-logind[1424]: Removed session 11. Jun 25 18:31:36.877292 sshd[3947]: Accepted publickey for core from 10.0.0.1 port 37590 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:31:36.878478 sshd[3947]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:31:36.882509 systemd-logind[1424]: New session 12 of user core. Jun 25 18:31:36.888980 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 18:31:37.563045 sshd[3947]: pam_unix(sshd:session): session closed for user core Jun 25 18:31:37.574402 systemd[1]: sshd@11-10.0.0.73:22-10.0.0.1:37590.service: Deactivated successfully. Jun 25 18:31:37.577440 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 18:31:37.582881 systemd-logind[1424]: Session 12 logged out. Waiting for processes to exit. Jun 25 18:31:37.591057 systemd[1]: Started sshd@12-10.0.0.73:22-10.0.0.1:37606.service - OpenSSH per-connection server daemon (10.0.0.1:37606). Jun 25 18:31:37.592830 systemd-logind[1424]: Removed session 12. Jun 25 18:31:37.622549 sshd[3960]: Accepted publickey for core from 10.0.0.1 port 37606 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:31:37.623745 sshd[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:31:37.628550 systemd-logind[1424]: New session 13 of user core. Jun 25 18:31:37.636897 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 18:31:37.750630 sshd[3960]: pam_unix(sshd:session): session closed for user core Jun 25 18:31:37.753820 systemd[1]: sshd@12-10.0.0.73:22-10.0.0.1:37606.service: Deactivated successfully. Jun 25 18:31:37.756443 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 18:31:37.757501 systemd-logind[1424]: Session 13 logged out. Waiting for processes to exit. Jun 25 18:31:37.758644 systemd-logind[1424]: Removed session 13. Jun 25 18:31:42.765571 systemd[1]: Started sshd@13-10.0.0.73:22-10.0.0.1:50400.service - OpenSSH per-connection server daemon (10.0.0.1:50400). Jun 25 18:31:42.797274 sshd[3974]: Accepted publickey for core from 10.0.0.1 port 50400 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:31:42.798373 sshd[3974]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:31:42.802057 systemd-logind[1424]: New session 14 of user core. 
Jun 25 18:31:42.813891 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 18:31:42.920823 sshd[3974]: pam_unix(sshd:session): session closed for user core Jun 25 18:31:42.933043 systemd[1]: sshd@13-10.0.0.73:22-10.0.0.1:50400.service: Deactivated successfully. Jun 25 18:31:42.934617 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 18:31:42.936195 systemd-logind[1424]: Session 14 logged out. Waiting for processes to exit. Jun 25 18:31:42.949075 systemd[1]: Started sshd@14-10.0.0.73:22-10.0.0.1:50402.service - OpenSSH per-connection server daemon (10.0.0.1:50402). Jun 25 18:31:42.950468 systemd-logind[1424]: Removed session 14. Jun 25 18:31:42.977802 sshd[3989]: Accepted publickey for core from 10.0.0.1 port 50402 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:31:42.979019 sshd[3989]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:31:42.982434 systemd-logind[1424]: New session 15 of user core. Jun 25 18:31:42.989942 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 18:31:43.208903 sshd[3989]: pam_unix(sshd:session): session closed for user core Jun 25 18:31:43.217087 systemd[1]: sshd@14-10.0.0.73:22-10.0.0.1:50402.service: Deactivated successfully. Jun 25 18:31:43.220226 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 18:31:43.221669 systemd-logind[1424]: Session 15 logged out. Waiting for processes to exit. Jun 25 18:31:43.222681 systemd[1]: Started sshd@15-10.0.0.73:22-10.0.0.1:50404.service - OpenSSH per-connection server daemon (10.0.0.1:50404). Jun 25 18:31:43.223825 systemd-logind[1424]: Removed session 15. Jun 25 18:31:43.259946 sshd[4001]: Accepted publickey for core from 10.0.0.1 port 50404 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:31:43.261129 sshd[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:31:43.264462 systemd-logind[1424]: New session 16 of user core. Jun 25 18:31:43.269937 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 18:31:44.031178 sshd[4001]: pam_unix(sshd:session): session closed for user core Jun 25 18:31:44.044572 systemd[1]: sshd@15-10.0.0.73:22-10.0.0.1:50404.service: Deactivated successfully. Jun 25 18:31:44.046276 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 18:31:44.049284 systemd-logind[1424]: Session 16 logged out. Waiting for processes to exit. Jun 25 18:31:44.056049 systemd[1]: Started sshd@16-10.0.0.73:22-10.0.0.1:50420.service - OpenSSH per-connection server daemon (10.0.0.1:50420). Jun 25 18:31:44.058496 systemd-logind[1424]: Removed session 16. Jun 25 18:31:44.087405 sshd[4022]: Accepted publickey for core from 10.0.0.1 port 50420 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:31:44.088693 sshd[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:31:44.092643 systemd-logind[1424]: New session 17 of user core. Jun 25 18:31:44.101899 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 18:31:44.384072 sshd[4022]: pam_unix(sshd:session): session closed for user core Jun 25 18:31:44.392519 systemd[1]: sshd@16-10.0.0.73:22-10.0.0.1:50420.service: Deactivated successfully. Jun 25 18:31:44.396463 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 18:31:44.399502 systemd-logind[1424]: Session 17 logged out. Waiting for processes to exit. 
Jun 25 18:31:44.411051 systemd[1]: Started sshd@17-10.0.0.73:22-10.0.0.1:50430.service - OpenSSH per-connection server daemon (10.0.0.1:50430). Jun 25 18:31:44.412165 systemd-logind[1424]: Removed session 17. Jun 25 18:31:44.440139 sshd[4036]: Accepted publickey for core from 10.0.0.1 port 50430 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:31:44.441429 sshd[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:31:44.445080 systemd-logind[1424]: New session 18 of user core. Jun 25 18:31:44.456926 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 18:31:44.565472 sshd[4036]: pam_unix(sshd:session): session closed for user core Jun 25 18:31:44.569029 systemd[1]: sshd@17-10.0.0.73:22-10.0.0.1:50430.service: Deactivated successfully. Jun 25 18:31:44.571910 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 18:31:44.573743 systemd-logind[1424]: Session 18 logged out. Waiting for processes to exit. Jun 25 18:31:44.575219 systemd-logind[1424]: Removed session 18. Jun 25 18:31:49.577809 systemd[1]: Started sshd@18-10.0.0.73:22-10.0.0.1:42206.service - OpenSSH per-connection server daemon (10.0.0.1:42206). Jun 25 18:31:49.609966 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 42206 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:31:49.611288 sshd[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:31:49.614579 systemd-logind[1424]: New session 19 of user core. Jun 25 18:31:49.620911 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 18:31:49.729340 sshd[4055]: pam_unix(sshd:session): session closed for user core Jun 25 18:31:49.732294 systemd[1]: sshd@18-10.0.0.73:22-10.0.0.1:42206.service: Deactivated successfully. Jun 25 18:31:49.734013 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 18:31:49.734658 systemd-logind[1424]: Session 19 logged out. Waiting for processes to exit. Jun 25 18:31:49.735441 systemd-logind[1424]: Removed session 19. Jun 25 18:31:54.740424 systemd[1]: Started sshd@19-10.0.0.73:22-10.0.0.1:42212.service - OpenSSH per-connection server daemon (10.0.0.1:42212). Jun 25 18:31:54.773953 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 42212 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:31:54.775157 sshd[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:31:54.779824 systemd-logind[1424]: New session 20 of user core. Jun 25 18:31:54.790938 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 18:31:54.895655 sshd[4070]: pam_unix(sshd:session): session closed for user core Jun 25 18:31:54.898739 systemd[1]: sshd@19-10.0.0.73:22-10.0.0.1:42212.service: Deactivated successfully. Jun 25 18:31:54.900622 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 18:31:54.901926 systemd-logind[1424]: Session 20 logged out. Waiting for processes to exit. Jun 25 18:31:54.902711 systemd-logind[1424]: Removed session 20. Jun 25 18:31:59.910950 systemd[1]: Started sshd@20-10.0.0.73:22-10.0.0.1:37388.service - OpenSSH per-connection server daemon (10.0.0.1:37388). Jun 25 18:31:59.945842 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 37388 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:31:59.946535 sshd[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:31:59.950716 systemd-logind[1424]: New session 21 of user core. 
Jun 25 18:31:59.958961 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 18:32:00.067106 sshd[4084]: pam_unix(sshd:session): session closed for user core Jun 25 18:32:00.070220 systemd[1]: sshd@20-10.0.0.73:22-10.0.0.1:37388.service: Deactivated successfully. Jun 25 18:32:00.071939 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 18:32:00.072592 systemd-logind[1424]: Session 21 logged out. Waiting for processes to exit. Jun 25 18:32:00.073706 systemd-logind[1424]: Removed session 21. Jun 25 18:32:05.081296 systemd[1]: Started sshd@21-10.0.0.73:22-10.0.0.1:37392.service - OpenSSH per-connection server daemon (10.0.0.1:37392). Jun 25 18:32:05.116391 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 37392 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:32:05.117729 sshd[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:32:05.122962 systemd-logind[1424]: New session 22 of user core. Jun 25 18:32:05.132904 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 18:32:05.239215 sshd[4102]: pam_unix(sshd:session): session closed for user core Jun 25 18:32:05.251263 systemd[1]: sshd@21-10.0.0.73:22-10.0.0.1:37392.service: Deactivated successfully. Jun 25 18:32:05.253615 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 18:32:05.255561 systemd-logind[1424]: Session 22 logged out. Waiting for processes to exit. Jun 25 18:32:05.257300 systemd[1]: Started sshd@22-10.0.0.73:22-10.0.0.1:37404.service - OpenSSH per-connection server daemon (10.0.0.1:37404). Jun 25 18:32:05.258691 systemd-logind[1424]: Removed session 22. Jun 25 18:32:05.289498 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 37404 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:32:05.290666 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:32:05.294179 systemd-logind[1424]: New session 23 of user core. Jun 25 18:32:05.304885 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 18:32:07.408677 containerd[1444]: time="2024-06-25T18:32:07.408454873Z" level=info msg="StopContainer for \"ff957549db3a9155eba1052df1f78dd05c8c95c2d1a2c4e94bee26185dd62c57\" with timeout 30 (s)" Jun 25 18:32:07.419584 containerd[1444]: time="2024-06-25T18:32:07.419551373Z" level=info msg="Stop container \"ff957549db3a9155eba1052df1f78dd05c8c95c2d1a2c4e94bee26185dd62c57\" with signal terminated" Jun 25 18:32:07.428229 systemd[1]: cri-containerd-ff957549db3a9155eba1052df1f78dd05c8c95c2d1a2c4e94bee26185dd62c57.scope: Deactivated successfully. 
Jun 25 18:32:07.438992 containerd[1444]: time="2024-06-25T18:32:07.438876917Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:32:07.445383 containerd[1444]: time="2024-06-25T18:32:07.445196911Z" level=info msg="StopContainer for \"56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53\" with timeout 2 (s)" Jun 25 18:32:07.445727 containerd[1444]: time="2024-06-25T18:32:07.445700314Z" level=info msg="Stop container \"56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53\" with signal terminated" Jun 25 18:32:07.448186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff957549db3a9155eba1052df1f78dd05c8c95c2d1a2c4e94bee26185dd62c57-rootfs.mount: Deactivated successfully. Jun 25 18:32:07.452974 systemd-networkd[1378]: lxc_health: Link DOWN Jun 25 18:32:07.452980 systemd-networkd[1378]: lxc_health: Lost carrier Jun 25 18:32:07.454877 containerd[1444]: time="2024-06-25T18:32:07.454674203Z" level=info msg="shim disconnected" id=ff957549db3a9155eba1052df1f78dd05c8c95c2d1a2c4e94bee26185dd62c57 namespace=k8s.io Jun 25 18:32:07.454877 containerd[1444]: time="2024-06-25T18:32:07.454719483Z" level=warning msg="cleaning up after shim disconnected" id=ff957549db3a9155eba1052df1f78dd05c8c95c2d1a2c4e94bee26185dd62c57 namespace=k8s.io Jun 25 18:32:07.454877 containerd[1444]: time="2024-06-25T18:32:07.454728003Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:32:07.467620 containerd[1444]: time="2024-06-25T18:32:07.467563873Z" level=info msg="StopContainer for \"ff957549db3a9155eba1052df1f78dd05c8c95c2d1a2c4e94bee26185dd62c57\" returns successfully" Jun 25 18:32:07.469285 containerd[1444]: time="2024-06-25T18:32:07.469257082Z" level=info msg="StopPodSandbox for \"591f6bfb8061735b0104666310a96978d6c46f9f4ebc4aaab2cc49ad871a5ac8\"" Jun 25 18:32:07.473942 containerd[1444]: time="2024-06-25T18:32:07.469305242Z" level=info msg="Container to stop \"ff957549db3a9155eba1052df1f78dd05c8c95c2d1a2c4e94bee26185dd62c57\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:32:07.475884 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-591f6bfb8061735b0104666310a96978d6c46f9f4ebc4aaab2cc49ad871a5ac8-shm.mount: Deactivated successfully. Jun 25 18:32:07.476557 systemd[1]: cri-containerd-56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53.scope: Deactivated successfully. Jun 25 18:32:07.479012 systemd[1]: cri-containerd-56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53.scope: Consumed 6.374s CPU time. Jun 25 18:32:07.480473 systemd[1]: cri-containerd-591f6bfb8061735b0104666310a96978d6c46f9f4ebc4aaab2cc49ad871a5ac8.scope: Deactivated successfully. Jun 25 18:32:07.499078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53-rootfs.mount: Deactivated successfully. Jun 25 18:32:07.501689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-591f6bfb8061735b0104666310a96978d6c46f9f4ebc4aaab2cc49ad871a5ac8-rootfs.mount: Deactivated successfully. 
Jun 25 18:32:07.504219 containerd[1444]: time="2024-06-25T18:32:07.504042150Z" level=info msg="shim disconnected" id=56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53 namespace=k8s.io Jun 25 18:32:07.504219 containerd[1444]: time="2024-06-25T18:32:07.504089030Z" level=warning msg="cleaning up after shim disconnected" id=56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53 namespace=k8s.io Jun 25 18:32:07.504219 containerd[1444]: time="2024-06-25T18:32:07.504098150Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:32:07.504219 containerd[1444]: time="2024-06-25T18:32:07.504158351Z" level=info msg="shim disconnected" id=591f6bfb8061735b0104666310a96978d6c46f9f4ebc4aaab2cc49ad871a5ac8 namespace=k8s.io Jun 25 18:32:07.504219 containerd[1444]: time="2024-06-25T18:32:07.504191231Z" level=warning msg="cleaning up after shim disconnected" id=591f6bfb8061735b0104666310a96978d6c46f9f4ebc4aaab2cc49ad871a5ac8 namespace=k8s.io Jun 25 18:32:07.504219 containerd[1444]: time="2024-06-25T18:32:07.504199031Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:32:07.517731 containerd[1444]: time="2024-06-25T18:32:07.517615944Z" level=info msg="StopContainer for \"56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53\" returns successfully" Jun 25 18:32:07.518254 containerd[1444]: time="2024-06-25T18:32:07.518141986Z" level=info msg="StopPodSandbox for \"ad1162466dee1402c9a86db4a8e25865f89a9b883d3b9d33438012cd04e872ba\"" Jun 25 18:32:07.518254 containerd[1444]: time="2024-06-25T18:32:07.518195587Z" level=info msg="Container to stop \"d5f6a5b5d439d547a76635af896ececf752b8bad739f50e2f5c093292a53a916\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:32:07.518427 containerd[1444]: time="2024-06-25T18:32:07.518240747Z" level=info msg="Container to stop \"9a9fbe2fb648e6107141f9eed7ff8e0e9c3251ae5672121bd757046db44d1692\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:32:07.518427 containerd[1444]: time="2024-06-25T18:32:07.518364228Z" level=info msg="Container to stop \"56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:32:07.518427 containerd[1444]: time="2024-06-25T18:32:07.518376468Z" level=info msg="Container to stop \"b04e1f01beedf094f703dc6a15a6e7cc737ff103d36dfcfb4681268b9e92f43b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:32:07.518427 containerd[1444]: time="2024-06-25T18:32:07.518385868Z" level=info msg="Container to stop \"bdbe136d370a6ff37f2b98aa867dd4597c756280ad3e35a90e6e44111a745962\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:32:07.520615 containerd[1444]: time="2024-06-25T18:32:07.520586800Z" level=info msg="TearDown network for sandbox \"591f6bfb8061735b0104666310a96978d6c46f9f4ebc4aaab2cc49ad871a5ac8\" successfully" Jun 25 18:32:07.520615 containerd[1444]: time="2024-06-25T18:32:07.520615320Z" level=info msg="StopPodSandbox for \"591f6bfb8061735b0104666310a96978d6c46f9f4ebc4aaab2cc49ad871a5ac8\" returns successfully" Jun 25 18:32:07.524402 systemd[1]: cri-containerd-ad1162466dee1402c9a86db4a8e25865f89a9b883d3b9d33438012cd04e872ba.scope: Deactivated successfully. 
Jun 25 18:32:07.545130 containerd[1444]: time="2024-06-25T18:32:07.545063292Z" level=info msg="shim disconnected" id=ad1162466dee1402c9a86db4a8e25865f89a9b883d3b9d33438012cd04e872ba namespace=k8s.io Jun 25 18:32:07.545130 containerd[1444]: time="2024-06-25T18:32:07.545127053Z" level=warning msg="cleaning up after shim disconnected" id=ad1162466dee1402c9a86db4a8e25865f89a9b883d3b9d33438012cd04e872ba namespace=k8s.io Jun 25 18:32:07.545130 containerd[1444]: time="2024-06-25T18:32:07.545136013Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:32:07.555409 containerd[1444]: time="2024-06-25T18:32:07.555370188Z" level=info msg="TearDown network for sandbox \"ad1162466dee1402c9a86db4a8e25865f89a9b883d3b9d33438012cd04e872ba\" successfully" Jun 25 18:32:07.555409 containerd[1444]: time="2024-06-25T18:32:07.555405028Z" level=info msg="StopPodSandbox for \"ad1162466dee1402c9a86db4a8e25865f89a9b883d3b9d33438012cd04e872ba\" returns successfully" Jun 25 18:32:07.641937 kubelet[2484]: I0625 18:32:07.641904 2484 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dec21266-f02e-4653-86db-ab1e4352f453-cilium-config-path\") pod \"dec21266-f02e-4653-86db-ab1e4352f453\" (UID: \"dec21266-f02e-4653-86db-ab1e4352f453\") " Jun 25 18:32:07.641937 kubelet[2484]: I0625 18:32:07.641943 2484 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-etc-cni-netd\") pod \"1671219b-d50a-40ca-b58b-ad54be33e035\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " Jun 25 18:32:07.642330 kubelet[2484]: I0625 18:32:07.641960 2484 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-cilium-run\") pod \"1671219b-d50a-40ca-b58b-ad54be33e035\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " Jun 25 18:32:07.642330 kubelet[2484]: I0625 18:32:07.641983 2484 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-cni-path\") pod \"1671219b-d50a-40ca-b58b-ad54be33e035\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " Jun 25 18:32:07.642330 kubelet[2484]: I0625 18:32:07.642001 2484 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-xtables-lock\") pod \"1671219b-d50a-40ca-b58b-ad54be33e035\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " Jun 25 18:32:07.642330 kubelet[2484]: I0625 18:32:07.642021 2484 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1671219b-d50a-40ca-b58b-ad54be33e035-clustermesh-secrets\") pod \"1671219b-d50a-40ca-b58b-ad54be33e035\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " Jun 25 18:32:07.642330 kubelet[2484]: I0625 18:32:07.642019 2484 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1671219b-d50a-40ca-b58b-ad54be33e035" (UID: "1671219b-d50a-40ca-b58b-ad54be33e035"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:32:07.642330 kubelet[2484]: I0625 18:32:07.642042 2484 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45g8j\" (UniqueName: \"kubernetes.io/projected/1671219b-d50a-40ca-b58b-ad54be33e035-kube-api-access-45g8j\") pod \"1671219b-d50a-40ca-b58b-ad54be33e035\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " Jun 25 18:32:07.642468 kubelet[2484]: I0625 18:32:07.642104 2484 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-host-proc-sys-net\") pod \"1671219b-d50a-40ca-b58b-ad54be33e035\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " Jun 25 18:32:07.642468 kubelet[2484]: I0625 18:32:07.642131 2484 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1671219b-d50a-40ca-b58b-ad54be33e035-hubble-tls\") pod \"1671219b-d50a-40ca-b58b-ad54be33e035\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " Jun 25 18:32:07.642468 kubelet[2484]: I0625 18:32:07.642152 2484 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n5zs\" (UniqueName: \"kubernetes.io/projected/dec21266-f02e-4653-86db-ab1e4352f453-kube-api-access-5n5zs\") pod \"dec21266-f02e-4653-86db-ab1e4352f453\" (UID: \"dec21266-f02e-4653-86db-ab1e4352f453\") " Jun 25 18:32:07.642468 kubelet[2484]: I0625 18:32:07.642171 2484 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1671219b-d50a-40ca-b58b-ad54be33e035-cilium-config-path\") pod \"1671219b-d50a-40ca-b58b-ad54be33e035\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " Jun 25 18:32:07.642468 kubelet[2484]: I0625 18:32:07.642189 2484 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-cilium-cgroup\") pod \"1671219b-d50a-40ca-b58b-ad54be33e035\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " Jun 25 18:32:07.642468 kubelet[2484]: I0625 18:32:07.642215 2484 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-hostproc\") pod \"1671219b-d50a-40ca-b58b-ad54be33e035\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " Jun 25 18:32:07.642591 kubelet[2484]: I0625 18:32:07.642235 2484 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-bpf-maps\") pod \"1671219b-d50a-40ca-b58b-ad54be33e035\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " Jun 25 18:32:07.642591 kubelet[2484]: I0625 18:32:07.642259 2484 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-lib-modules\") pod \"1671219b-d50a-40ca-b58b-ad54be33e035\" (UID: \"1671219b-d50a-40ca-b58b-ad54be33e035\") " Jun 25 18:32:07.642591 kubelet[2484]: I0625 18:32:07.642277 2484 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-host-proc-sys-kernel\") pod \"1671219b-d50a-40ca-b58b-ad54be33e035\" (UID: 
\"1671219b-d50a-40ca-b58b-ad54be33e035\") " Jun 25 18:32:07.642591 kubelet[2484]: I0625 18:32:07.642312 2484 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jun 25 18:32:07.642591 kubelet[2484]: I0625 18:32:07.642334 2484 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1671219b-d50a-40ca-b58b-ad54be33e035" (UID: "1671219b-d50a-40ca-b58b-ad54be33e035"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:32:07.642591 kubelet[2484]: I0625 18:32:07.642353 2484 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1671219b-d50a-40ca-b58b-ad54be33e035" (UID: "1671219b-d50a-40ca-b58b-ad54be33e035"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:32:07.642789 kubelet[2484]: I0625 18:32:07.642618 2484 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1671219b-d50a-40ca-b58b-ad54be33e035" (UID: "1671219b-d50a-40ca-b58b-ad54be33e035"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:32:07.643103 kubelet[2484]: I0625 18:32:07.643080 2484 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-hostproc" (OuterVolumeSpecName: "hostproc") pod "1671219b-d50a-40ca-b58b-ad54be33e035" (UID: "1671219b-d50a-40ca-b58b-ad54be33e035"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:32:07.643151 kubelet[2484]: I0625 18:32:07.643114 2484 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1671219b-d50a-40ca-b58b-ad54be33e035" (UID: "1671219b-d50a-40ca-b58b-ad54be33e035"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:32:07.643151 kubelet[2484]: I0625 18:32:07.643130 2484 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1671219b-d50a-40ca-b58b-ad54be33e035" (UID: "1671219b-d50a-40ca-b58b-ad54be33e035"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:32:07.643193 kubelet[2484]: I0625 18:32:07.643150 2484 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-cni-path" (OuterVolumeSpecName: "cni-path") pod "1671219b-d50a-40ca-b58b-ad54be33e035" (UID: "1671219b-d50a-40ca-b58b-ad54be33e035"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:32:07.643193 kubelet[2484]: I0625 18:32:07.643165 2484 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1671219b-d50a-40ca-b58b-ad54be33e035" (UID: "1671219b-d50a-40ca-b58b-ad54be33e035"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:32:07.644892 kubelet[2484]: I0625 18:32:07.643918 2484 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dec21266-f02e-4653-86db-ab1e4352f453-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dec21266-f02e-4653-86db-ab1e4352f453" (UID: "dec21266-f02e-4653-86db-ab1e4352f453"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:32:07.644892 kubelet[2484]: I0625 18:32:07.643976 2484 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1671219b-d50a-40ca-b58b-ad54be33e035" (UID: "1671219b-d50a-40ca-b58b-ad54be33e035"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:32:07.645039 kubelet[2484]: I0625 18:32:07.645005 2484 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1671219b-d50a-40ca-b58b-ad54be33e035-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1671219b-d50a-40ca-b58b-ad54be33e035" (UID: "1671219b-d50a-40ca-b58b-ad54be33e035"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:32:07.646495 kubelet[2484]: I0625 18:32:07.646465 2484 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1671219b-d50a-40ca-b58b-ad54be33e035-kube-api-access-45g8j" (OuterVolumeSpecName: "kube-api-access-45g8j") pod "1671219b-d50a-40ca-b58b-ad54be33e035" (UID: "1671219b-d50a-40ca-b58b-ad54be33e035"). InnerVolumeSpecName "kube-api-access-45g8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:32:07.646576 kubelet[2484]: I0625 18:32:07.646496 2484 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dec21266-f02e-4653-86db-ab1e4352f453-kube-api-access-5n5zs" (OuterVolumeSpecName: "kube-api-access-5n5zs") pod "dec21266-f02e-4653-86db-ab1e4352f453" (UID: "dec21266-f02e-4653-86db-ab1e4352f453"). InnerVolumeSpecName "kube-api-access-5n5zs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:32:07.646576 kubelet[2484]: I0625 18:32:07.646570 2484 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1671219b-d50a-40ca-b58b-ad54be33e035-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1671219b-d50a-40ca-b58b-ad54be33e035" (UID: "1671219b-d50a-40ca-b58b-ad54be33e035"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 18:32:07.646625 kubelet[2484]: I0625 18:32:07.646584 2484 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1671219b-d50a-40ca-b58b-ad54be33e035-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1671219b-d50a-40ca-b58b-ad54be33e035" (UID: "1671219b-d50a-40ca-b58b-ad54be33e035"). 
InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:32:07.743034 kubelet[2484]: I0625 18:32:07.742937 2484 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jun 25 18:32:07.743034 kubelet[2484]: I0625 18:32:07.742968 2484 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jun 25 18:32:07.743034 kubelet[2484]: I0625 18:32:07.742978 2484 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-lib-modules\") on node \"localhost\" DevicePath \"\"" Jun 25 18:32:07.743034 kubelet[2484]: I0625 18:32:07.742990 2484 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-cni-path\") on node \"localhost\" DevicePath \"\"" Jun 25 18:32:07.743034 kubelet[2484]: I0625 18:32:07.742999 2484 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jun 25 18:32:07.743034 kubelet[2484]: I0625 18:32:07.743009 2484 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1671219b-d50a-40ca-b58b-ad54be33e035-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jun 25 18:32:07.743034 kubelet[2484]: I0625 18:32:07.743019 2484 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dec21266-f02e-4653-86db-ab1e4352f453-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jun 25 18:32:07.743250 kubelet[2484]: I0625 18:32:07.743050 2484 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-cilium-run\") on node \"localhost\" DevicePath \"\"" Jun 25 18:32:07.743250 kubelet[2484]: I0625 18:32:07.743061 2484 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-45g8j\" (UniqueName: \"kubernetes.io/projected/1671219b-d50a-40ca-b58b-ad54be33e035-kube-api-access-45g8j\") on node \"localhost\" DevicePath \"\"" Jun 25 18:32:07.743250 kubelet[2484]: I0625 18:32:07.743069 2484 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jun 25 18:32:07.743250 kubelet[2484]: I0625 18:32:07.743078 2484 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1671219b-d50a-40ca-b58b-ad54be33e035-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jun 25 18:32:07.743250 kubelet[2484]: I0625 18:32:07.743088 2484 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5n5zs\" (UniqueName: \"kubernetes.io/projected/dec21266-f02e-4653-86db-ab1e4352f453-kube-api-access-5n5zs\") on node \"localhost\" DevicePath \"\"" Jun 25 18:32:07.743250 kubelet[2484]: I0625 18:32:07.743097 2484 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/1671219b-d50a-40ca-b58b-ad54be33e035-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jun 25 18:32:07.743250 kubelet[2484]: I0625 18:32:07.743105 2484 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jun 25 18:32:07.743250 kubelet[2484]: I0625 18:32:07.743114 2484 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1671219b-d50a-40ca-b58b-ad54be33e035-hostproc\") on node \"localhost\" DevicePath \"\"" Jun 25 18:32:08.426867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad1162466dee1402c9a86db4a8e25865f89a9b883d3b9d33438012cd04e872ba-rootfs.mount: Deactivated successfully. Jun 25 18:32:08.426968 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ad1162466dee1402c9a86db4a8e25865f89a9b883d3b9d33438012cd04e872ba-shm.mount: Deactivated successfully. Jun 25 18:32:08.427027 systemd[1]: var-lib-kubelet-pods-dec21266\x2df02e\x2d4653\x2d86db\x2dab1e4352f453-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5n5zs.mount: Deactivated successfully. Jun 25 18:32:08.427083 systemd[1]: var-lib-kubelet-pods-1671219b\x2dd50a\x2d40ca\x2db58b\x2dad54be33e035-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d45g8j.mount: Deactivated successfully. Jun 25 18:32:08.427130 systemd[1]: var-lib-kubelet-pods-1671219b\x2dd50a\x2d40ca\x2db58b\x2dad54be33e035-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 25 18:32:08.427178 systemd[1]: var-lib-kubelet-pods-1671219b\x2dd50a\x2d40ca\x2db58b\x2dad54be33e035-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 25 18:32:08.508132 kubelet[2484]: I0625 18:32:08.508078 2484 scope.go:117] "RemoveContainer" containerID="ff957549db3a9155eba1052df1f78dd05c8c95c2d1a2c4e94bee26185dd62c57" Jun 25 18:32:08.510636 containerd[1444]: time="2024-06-25T18:32:08.510583853Z" level=info msg="RemoveContainer for \"ff957549db3a9155eba1052df1f78dd05c8c95c2d1a2c4e94bee26185dd62c57\"" Jun 25 18:32:08.512001 systemd[1]: Removed slice kubepods-besteffort-poddec21266_f02e_4653_86db_ab1e4352f453.slice - libcontainer container kubepods-besteffort-poddec21266_f02e_4653_86db_ab1e4352f453.slice. Jun 25 18:32:08.517490 containerd[1444]: time="2024-06-25T18:32:08.517459090Z" level=info msg="RemoveContainer for \"ff957549db3a9155eba1052df1f78dd05c8c95c2d1a2c4e94bee26185dd62c57\" returns successfully" Jun 25 18:32:08.518417 systemd[1]: Removed slice kubepods-burstable-pod1671219b_d50a_40ca_b58b_ad54be33e035.slice - libcontainer container kubepods-burstable-pod1671219b_d50a_40ca_b58b_ad54be33e035.slice. Jun 25 18:32:08.518506 systemd[1]: kubepods-burstable-pod1671219b_d50a_40ca_b58b_ad54be33e035.slice: Consumed 6.519s CPU time. 
Jun 25 18:32:08.519201 kubelet[2484]: I0625 18:32:08.518935 2484 scope.go:117] "RemoveContainer" containerID="56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53" Jun 25 18:32:08.519912 containerd[1444]: time="2024-06-25T18:32:08.519880823Z" level=info msg="RemoveContainer for \"56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53\"" Jun 25 18:32:08.523284 containerd[1444]: time="2024-06-25T18:32:08.523251080Z" level=info msg="RemoveContainer for \"56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53\" returns successfully" Jun 25 18:32:08.523447 kubelet[2484]: I0625 18:32:08.523416 2484 scope.go:117] "RemoveContainer" containerID="bdbe136d370a6ff37f2b98aa867dd4597c756280ad3e35a90e6e44111a745962" Jun 25 18:32:08.526091 containerd[1444]: time="2024-06-25T18:32:08.526057855Z" level=info msg="RemoveContainer for \"bdbe136d370a6ff37f2b98aa867dd4597c756280ad3e35a90e6e44111a745962\"" Jun 25 18:32:08.528357 containerd[1444]: time="2024-06-25T18:32:08.528326187Z" level=info msg="RemoveContainer for \"bdbe136d370a6ff37f2b98aa867dd4597c756280ad3e35a90e6e44111a745962\" returns successfully" Jun 25 18:32:08.528524 kubelet[2484]: I0625 18:32:08.528506 2484 scope.go:117] "RemoveContainer" containerID="9a9fbe2fb648e6107141f9eed7ff8e0e9c3251ae5672121bd757046db44d1692" Jun 25 18:32:08.529228 containerd[1444]: time="2024-06-25T18:32:08.529196592Z" level=info msg="RemoveContainer for \"9a9fbe2fb648e6107141f9eed7ff8e0e9c3251ae5672121bd757046db44d1692\"" Jun 25 18:32:08.531277 containerd[1444]: time="2024-06-25T18:32:08.531240683Z" level=info msg="RemoveContainer for \"9a9fbe2fb648e6107141f9eed7ff8e0e9c3251ae5672121bd757046db44d1692\" returns successfully" Jun 25 18:32:08.531419 kubelet[2484]: I0625 18:32:08.531398 2484 scope.go:117] "RemoveContainer" containerID="b04e1f01beedf094f703dc6a15a6e7cc737ff103d36dfcfb4681268b9e92f43b" Jun 25 18:32:08.532722 containerd[1444]: time="2024-06-25T18:32:08.532690850Z" level=info msg="RemoveContainer for \"b04e1f01beedf094f703dc6a15a6e7cc737ff103d36dfcfb4681268b9e92f43b\"" Jun 25 18:32:08.535675 containerd[1444]: time="2024-06-25T18:32:08.535642466Z" level=info msg="RemoveContainer for \"b04e1f01beedf094f703dc6a15a6e7cc737ff103d36dfcfb4681268b9e92f43b\" returns successfully" Jun 25 18:32:08.537745 kubelet[2484]: I0625 18:32:08.537708 2484 scope.go:117] "RemoveContainer" containerID="d5f6a5b5d439d547a76635af896ececf752b8bad739f50e2f5c093292a53a916" Jun 25 18:32:08.543744 containerd[1444]: time="2024-06-25T18:32:08.543695748Z" level=info msg="RemoveContainer for \"d5f6a5b5d439d547a76635af896ececf752b8bad739f50e2f5c093292a53a916\"" Jun 25 18:32:08.547404 containerd[1444]: time="2024-06-25T18:32:08.547344648Z" level=info msg="RemoveContainer for \"d5f6a5b5d439d547a76635af896ececf752b8bad739f50e2f5c093292a53a916\" returns successfully" Jun 25 18:32:08.549050 kubelet[2484]: I0625 18:32:08.548941 2484 scope.go:117] "RemoveContainer" containerID="56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53" Jun 25 18:32:08.556120 containerd[1444]: time="2024-06-25T18:32:08.549178017Z" level=error msg="ContainerStatus for \"56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53\": not found" Jun 25 18:32:08.558076 kubelet[2484]: E0625 18:32:08.558046 2484 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error 
occurred when try to find container \"56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53\": not found" containerID="56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53" Jun 25 18:32:08.558174 kubelet[2484]: I0625 18:32:08.558149 2484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53"} err="failed to get container status \"56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53\": rpc error: code = NotFound desc = an error occurred when try to find container \"56a2562e680be96c0093a84758771f3e500f18c2f7e5a92617bf2de2d0111b53\": not found" Jun 25 18:32:08.558174 kubelet[2484]: I0625 18:32:08.558164 2484 scope.go:117] "RemoveContainer" containerID="bdbe136d370a6ff37f2b98aa867dd4597c756280ad3e35a90e6e44111a745962" Jun 25 18:32:08.558441 containerd[1444]: time="2024-06-25T18:32:08.558400706Z" level=error msg="ContainerStatus for \"bdbe136d370a6ff37f2b98aa867dd4597c756280ad3e35a90e6e44111a745962\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdbe136d370a6ff37f2b98aa867dd4597c756280ad3e35a90e6e44111a745962\": not found" Jun 25 18:32:08.558696 kubelet[2484]: E0625 18:32:08.558582 2484 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdbe136d370a6ff37f2b98aa867dd4597c756280ad3e35a90e6e44111a745962\": not found" containerID="bdbe136d370a6ff37f2b98aa867dd4597c756280ad3e35a90e6e44111a745962" Jun 25 18:32:08.558696 kubelet[2484]: I0625 18:32:08.558614 2484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bdbe136d370a6ff37f2b98aa867dd4597c756280ad3e35a90e6e44111a745962"} err="failed to get container status \"bdbe136d370a6ff37f2b98aa867dd4597c756280ad3e35a90e6e44111a745962\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdbe136d370a6ff37f2b98aa867dd4597c756280ad3e35a90e6e44111a745962\": not found" Jun 25 18:32:08.558696 kubelet[2484]: I0625 18:32:08.558625 2484 scope.go:117] "RemoveContainer" containerID="9a9fbe2fb648e6107141f9eed7ff8e0e9c3251ae5672121bd757046db44d1692" Jun 25 18:32:08.560386 containerd[1444]: time="2024-06-25T18:32:08.558803428Z" level=error msg="ContainerStatus for \"9a9fbe2fb648e6107141f9eed7ff8e0e9c3251ae5672121bd757046db44d1692\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9a9fbe2fb648e6107141f9eed7ff8e0e9c3251ae5672121bd757046db44d1692\": not found" Jun 25 18:32:08.560454 kubelet[2484]: E0625 18:32:08.558934 2484 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9a9fbe2fb648e6107141f9eed7ff8e0e9c3251ae5672121bd757046db44d1692\": not found" containerID="9a9fbe2fb648e6107141f9eed7ff8e0e9c3251ae5672121bd757046db44d1692" Jun 25 18:32:08.560454 kubelet[2484]: I0625 18:32:08.558961 2484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9a9fbe2fb648e6107141f9eed7ff8e0e9c3251ae5672121bd757046db44d1692"} err="failed to get container status \"9a9fbe2fb648e6107141f9eed7ff8e0e9c3251ae5672121bd757046db44d1692\": rpc error: code = NotFound desc = an error occurred when try to find container \"9a9fbe2fb648e6107141f9eed7ff8e0e9c3251ae5672121bd757046db44d1692\": not found" Jun 25 18:32:08.560454 
kubelet[2484]: I0625 18:32:08.558974 2484 scope.go:117] "RemoveContainer" containerID="b04e1f01beedf094f703dc6a15a6e7cc737ff103d36dfcfb4681268b9e92f43b" Jun 25 18:32:08.560814 containerd[1444]: time="2024-06-25T18:32:08.560774439Z" level=error msg="ContainerStatus for \"b04e1f01beedf094f703dc6a15a6e7cc737ff103d36dfcfb4681268b9e92f43b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b04e1f01beedf094f703dc6a15a6e7cc737ff103d36dfcfb4681268b9e92f43b\": not found" Jun 25 18:32:08.560951 kubelet[2484]: E0625 18:32:08.560927 2484 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b04e1f01beedf094f703dc6a15a6e7cc737ff103d36dfcfb4681268b9e92f43b\": not found" containerID="b04e1f01beedf094f703dc6a15a6e7cc737ff103d36dfcfb4681268b9e92f43b" Jun 25 18:32:08.560987 kubelet[2484]: I0625 18:32:08.560960 2484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b04e1f01beedf094f703dc6a15a6e7cc737ff103d36dfcfb4681268b9e92f43b"} err="failed to get container status \"b04e1f01beedf094f703dc6a15a6e7cc737ff103d36dfcfb4681268b9e92f43b\": rpc error: code = NotFound desc = an error occurred when try to find container \"b04e1f01beedf094f703dc6a15a6e7cc737ff103d36dfcfb4681268b9e92f43b\": not found" Jun 25 18:32:08.560987 kubelet[2484]: I0625 18:32:08.560972 2484 scope.go:117] "RemoveContainer" containerID="d5f6a5b5d439d547a76635af896ececf752b8bad739f50e2f5c093292a53a916" Jun 25 18:32:08.561193 containerd[1444]: time="2024-06-25T18:32:08.561152361Z" level=error msg="ContainerStatus for \"d5f6a5b5d439d547a76635af896ececf752b8bad739f50e2f5c093292a53a916\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5f6a5b5d439d547a76635af896ececf752b8bad739f50e2f5c093292a53a916\": not found" Jun 25 18:32:08.561331 kubelet[2484]: E0625 18:32:08.561314 2484 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5f6a5b5d439d547a76635af896ececf752b8bad739f50e2f5c093292a53a916\": not found" containerID="d5f6a5b5d439d547a76635af896ececf752b8bad739f50e2f5c093292a53a916" Jun 25 18:32:08.561374 kubelet[2484]: I0625 18:32:08.561341 2484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5f6a5b5d439d547a76635af896ececf752b8bad739f50e2f5c093292a53a916"} err="failed to get container status \"d5f6a5b5d439d547a76635af896ececf752b8bad739f50e2f5c093292a53a916\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5f6a5b5d439d547a76635af896ececf752b8bad739f50e2f5c093292a53a916\": not found" Jun 25 18:32:09.286523 kubelet[2484]: E0625 18:32:09.286425 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:09.288480 kubelet[2484]: I0625 18:32:09.288441 2484 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1671219b-d50a-40ca-b58b-ad54be33e035" path="/var/lib/kubelet/pods/1671219b-d50a-40ca-b58b-ad54be33e035/volumes" Jun 25 18:32:09.289000 kubelet[2484]: I0625 18:32:09.288984 2484 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="dec21266-f02e-4653-86db-ab1e4352f453" path="/var/lib/kubelet/pods/dec21266-f02e-4653-86db-ab1e4352f453/volumes" Jun 25 
18:32:09.363173 kubelet[2484]: E0625 18:32:09.363145 2484 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 25 18:32:09.378989 sshd[4116]: pam_unix(sshd:session): session closed for user core Jun 25 18:32:09.386504 systemd[1]: sshd@22-10.0.0.73:22-10.0.0.1:37404.service: Deactivated successfully. Jun 25 18:32:09.388043 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 18:32:09.388229 systemd[1]: session-23.scope: Consumed 1.449s CPU time. Jun 25 18:32:09.389896 systemd-logind[1424]: Session 23 logged out. Waiting for processes to exit. Jun 25 18:32:09.403342 systemd[1]: Started sshd@23-10.0.0.73:22-10.0.0.1:53768.service - OpenSSH per-connection server daemon (10.0.0.1:53768). Jun 25 18:32:09.404256 systemd-logind[1424]: Removed session 23. Jun 25 18:32:09.431130 sshd[4279]: Accepted publickey for core from 10.0.0.1 port 53768 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:32:09.432246 sshd[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:32:09.435851 systemd-logind[1424]: New session 24 of user core. Jun 25 18:32:09.450901 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 25 18:32:10.128066 kubelet[2484]: I0625 18:32:10.128032 2484 setters.go:552] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-25T18:32:10Z","lastTransitionTime":"2024-06-25T18:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 25 18:32:10.296671 sshd[4279]: pam_unix(sshd:session): session closed for user core Jun 25 18:32:10.305141 systemd[1]: sshd@23-10.0.0.73:22-10.0.0.1:53768.service: Deactivated successfully. Jun 25 18:32:10.310472 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 18:32:10.311176 systemd-logind[1424]: Session 24 logged out. Waiting for processes to exit. 
Jun 25 18:32:10.313127 kubelet[2484]: I0625 18:32:10.313085 2484 topology_manager.go:215] "Topology Admit Handler" podUID="0df2743c-4eed-4610-ae27-60feb0ff62a9" podNamespace="kube-system" podName="cilium-gmb8q" Jun 25 18:32:10.313396 kubelet[2484]: E0625 18:32:10.313144 2484 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1671219b-d50a-40ca-b58b-ad54be33e035" containerName="apply-sysctl-overwrites" Jun 25 18:32:10.313396 kubelet[2484]: E0625 18:32:10.313156 2484 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1671219b-d50a-40ca-b58b-ad54be33e035" containerName="clean-cilium-state" Jun 25 18:32:10.313396 kubelet[2484]: E0625 18:32:10.313164 2484 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1671219b-d50a-40ca-b58b-ad54be33e035" containerName="cilium-agent" Jun 25 18:32:10.313396 kubelet[2484]: E0625 18:32:10.313171 2484 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1671219b-d50a-40ca-b58b-ad54be33e035" containerName="mount-cgroup" Jun 25 18:32:10.313396 kubelet[2484]: E0625 18:32:10.313177 2484 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dec21266-f02e-4653-86db-ab1e4352f453" containerName="cilium-operator" Jun 25 18:32:10.313396 kubelet[2484]: E0625 18:32:10.313184 2484 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1671219b-d50a-40ca-b58b-ad54be33e035" containerName="mount-bpf-fs" Jun 25 18:32:10.313396 kubelet[2484]: I0625 18:32:10.313204 2484 memory_manager.go:346] "RemoveStaleState removing state" podUID="dec21266-f02e-4653-86db-ab1e4352f453" containerName="cilium-operator" Jun 25 18:32:10.313396 kubelet[2484]: I0625 18:32:10.313229 2484 memory_manager.go:346] "RemoveStaleState removing state" podUID="1671219b-d50a-40ca-b58b-ad54be33e035" containerName="cilium-agent" Jun 25 18:32:10.320350 systemd[1]: Started sshd@24-10.0.0.73:22-10.0.0.1:53772.service - OpenSSH per-connection server daemon (10.0.0.1:53772). Jun 25 18:32:10.324659 systemd-logind[1424]: Removed session 24. Jun 25 18:32:10.340800 systemd[1]: Created slice kubepods-burstable-pod0df2743c_4eed_4610_ae27_60feb0ff62a9.slice - libcontainer container kubepods-burstable-pod0df2743c_4eed_4610_ae27_60feb0ff62a9.slice. Jun 25 18:32:10.356586 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 53772 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:32:10.358352 sshd[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:32:10.364629 systemd-logind[1424]: New session 25 of user core. Jun 25 18:32:10.374916 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 25 18:32:10.428277 sshd[4292]: pam_unix(sshd:session): session closed for user core Jun 25 18:32:10.438040 systemd[1]: sshd@24-10.0.0.73:22-10.0.0.1:53772.service: Deactivated successfully. Jun 25 18:32:10.439871 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 18:32:10.441463 systemd-logind[1424]: Session 25 logged out. Waiting for processes to exit. Jun 25 18:32:10.446021 systemd[1]: Started sshd@25-10.0.0.73:22-10.0.0.1:53776.service - OpenSSH per-connection server daemon (10.0.0.1:53776). Jun 25 18:32:10.447649 systemd-logind[1424]: Removed session 25. 
Jun 25 18:32:10.456845 kubelet[2484]: I0625 18:32:10.456669 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0df2743c-4eed-4610-ae27-60feb0ff62a9-cilium-run\") pod \"cilium-gmb8q\" (UID: \"0df2743c-4eed-4610-ae27-60feb0ff62a9\") " pod="kube-system/cilium-gmb8q" Jun 25 18:32:10.456845 kubelet[2484]: I0625 18:32:10.456715 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0df2743c-4eed-4610-ae27-60feb0ff62a9-cilium-cgroup\") pod \"cilium-gmb8q\" (UID: \"0df2743c-4eed-4610-ae27-60feb0ff62a9\") " pod="kube-system/cilium-gmb8q" Jun 25 18:32:10.456845 kubelet[2484]: I0625 18:32:10.456735 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0df2743c-4eed-4610-ae27-60feb0ff62a9-cilium-ipsec-secrets\") pod \"cilium-gmb8q\" (UID: \"0df2743c-4eed-4610-ae27-60feb0ff62a9\") " pod="kube-system/cilium-gmb8q" Jun 25 18:32:10.456845 kubelet[2484]: I0625 18:32:10.456768 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0df2743c-4eed-4610-ae27-60feb0ff62a9-hubble-tls\") pod \"cilium-gmb8q\" (UID: \"0df2743c-4eed-4610-ae27-60feb0ff62a9\") " pod="kube-system/cilium-gmb8q" Jun 25 18:32:10.456845 kubelet[2484]: I0625 18:32:10.456795 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8m82\" (UniqueName: \"kubernetes.io/projected/0df2743c-4eed-4610-ae27-60feb0ff62a9-kube-api-access-r8m82\") pod \"cilium-gmb8q\" (UID: \"0df2743c-4eed-4610-ae27-60feb0ff62a9\") " pod="kube-system/cilium-gmb8q" Jun 25 18:32:10.456845 kubelet[2484]: I0625 18:32:10.456822 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0df2743c-4eed-4610-ae27-60feb0ff62a9-bpf-maps\") pod \"cilium-gmb8q\" (UID: \"0df2743c-4eed-4610-ae27-60feb0ff62a9\") " pod="kube-system/cilium-gmb8q" Jun 25 18:32:10.457029 kubelet[2484]: I0625 18:32:10.456880 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0df2743c-4eed-4610-ae27-60feb0ff62a9-etc-cni-netd\") pod \"cilium-gmb8q\" (UID: \"0df2743c-4eed-4610-ae27-60feb0ff62a9\") " pod="kube-system/cilium-gmb8q" Jun 25 18:32:10.457029 kubelet[2484]: I0625 18:32:10.456915 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0df2743c-4eed-4610-ae27-60feb0ff62a9-lib-modules\") pod \"cilium-gmb8q\" (UID: \"0df2743c-4eed-4610-ae27-60feb0ff62a9\") " pod="kube-system/cilium-gmb8q" Jun 25 18:32:10.457029 kubelet[2484]: I0625 18:32:10.456936 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0df2743c-4eed-4610-ae27-60feb0ff62a9-cni-path\") pod \"cilium-gmb8q\" (UID: \"0df2743c-4eed-4610-ae27-60feb0ff62a9\") " pod="kube-system/cilium-gmb8q" Jun 25 18:32:10.457029 kubelet[2484]: I0625 18:32:10.456957 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/0df2743c-4eed-4610-ae27-60feb0ff62a9-host-proc-sys-net\") pod \"cilium-gmb8q\" (UID: \"0df2743c-4eed-4610-ae27-60feb0ff62a9\") " pod="kube-system/cilium-gmb8q" Jun 25 18:32:10.457029 kubelet[2484]: I0625 18:32:10.456995 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0df2743c-4eed-4610-ae27-60feb0ff62a9-host-proc-sys-kernel\") pod \"cilium-gmb8q\" (UID: \"0df2743c-4eed-4610-ae27-60feb0ff62a9\") " pod="kube-system/cilium-gmb8q" Jun 25 18:32:10.457029 kubelet[2484]: I0625 18:32:10.457015 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0df2743c-4eed-4610-ae27-60feb0ff62a9-hostproc\") pod \"cilium-gmb8q\" (UID: \"0df2743c-4eed-4610-ae27-60feb0ff62a9\") " pod="kube-system/cilium-gmb8q" Jun 25 18:32:10.457171 kubelet[2484]: I0625 18:32:10.457045 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0df2743c-4eed-4610-ae27-60feb0ff62a9-xtables-lock\") pod \"cilium-gmb8q\" (UID: \"0df2743c-4eed-4610-ae27-60feb0ff62a9\") " pod="kube-system/cilium-gmb8q" Jun 25 18:32:10.457171 kubelet[2484]: I0625 18:32:10.457064 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0df2743c-4eed-4610-ae27-60feb0ff62a9-clustermesh-secrets\") pod \"cilium-gmb8q\" (UID: \"0df2743c-4eed-4610-ae27-60feb0ff62a9\") " pod="kube-system/cilium-gmb8q" Jun 25 18:32:10.457171 kubelet[2484]: I0625 18:32:10.457082 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0df2743c-4eed-4610-ae27-60feb0ff62a9-cilium-config-path\") pod \"cilium-gmb8q\" (UID: \"0df2743c-4eed-4610-ae27-60feb0ff62a9\") " pod="kube-system/cilium-gmb8q" Jun 25 18:32:10.475090 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 53776 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:32:10.476265 sshd[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:32:10.479777 systemd-logind[1424]: New session 26 of user core. Jun 25 18:32:10.490902 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 25 18:32:10.649252 kubelet[2484]: E0625 18:32:10.649190 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:10.649728 containerd[1444]: time="2024-06-25T18:32:10.649686068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gmb8q,Uid:0df2743c-4eed-4610-ae27-60feb0ff62a9,Namespace:kube-system,Attempt:0,}" Jun 25 18:32:10.666229 containerd[1444]: time="2024-06-25T18:32:10.666133831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:32:10.666229 containerd[1444]: time="2024-06-25T18:32:10.666195231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:32:10.666229 containerd[1444]: time="2024-06-25T18:32:10.666215551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:32:10.666229 containerd[1444]: time="2024-06-25T18:32:10.666226671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:32:10.685931 systemd[1]: Started cri-containerd-6d81027929864ad5d88f87667fbcc0bd232038f2093f7c80b3d388620919de1c.scope - libcontainer container 6d81027929864ad5d88f87667fbcc0bd232038f2093f7c80b3d388620919de1c. Jun 25 18:32:10.708445 containerd[1444]: time="2024-06-25T18:32:10.708403324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gmb8q,Uid:0df2743c-4eed-4610-ae27-60feb0ff62a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d81027929864ad5d88f87667fbcc0bd232038f2093f7c80b3d388620919de1c\"" Jun 25 18:32:10.709800 kubelet[2484]: E0625 18:32:10.709742 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:10.715449 containerd[1444]: time="2024-06-25T18:32:10.715405919Z" level=info msg="CreateContainer within sandbox \"6d81027929864ad5d88f87667fbcc0bd232038f2093f7c80b3d388620919de1c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 25 18:32:10.726432 containerd[1444]: time="2024-06-25T18:32:10.726386734Z" level=info msg="CreateContainer within sandbox \"6d81027929864ad5d88f87667fbcc0bd232038f2093f7c80b3d388620919de1c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1d0c2131fc79863e3cb02a807a5d01009d0ce80d246b16e54a55cb7d4440d1e7\"" Jun 25 18:32:10.726958 containerd[1444]: time="2024-06-25T18:32:10.726921137Z" level=info msg="StartContainer for \"1d0c2131fc79863e3cb02a807a5d01009d0ce80d246b16e54a55cb7d4440d1e7\"" Jun 25 18:32:10.749900 systemd[1]: Started cri-containerd-1d0c2131fc79863e3cb02a807a5d01009d0ce80d246b16e54a55cb7d4440d1e7.scope - libcontainer container 1d0c2131fc79863e3cb02a807a5d01009d0ce80d246b16e54a55cb7d4440d1e7. Jun 25 18:32:10.771869 containerd[1444]: time="2024-06-25T18:32:10.771817843Z" level=info msg="StartContainer for \"1d0c2131fc79863e3cb02a807a5d01009d0ce80d246b16e54a55cb7d4440d1e7\" returns successfully" Jun 25 18:32:10.814220 systemd[1]: cri-containerd-1d0c2131fc79863e3cb02a807a5d01009d0ce80d246b16e54a55cb7d4440d1e7.scope: Deactivated successfully. 
Jun 25 18:32:10.840129 containerd[1444]: time="2024-06-25T18:32:10.840076067Z" level=info msg="shim disconnected" id=1d0c2131fc79863e3cb02a807a5d01009d0ce80d246b16e54a55cb7d4440d1e7 namespace=k8s.io Jun 25 18:32:10.840129 containerd[1444]: time="2024-06-25T18:32:10.840132347Z" level=warning msg="cleaning up after shim disconnected" id=1d0c2131fc79863e3cb02a807a5d01009d0ce80d246b16e54a55cb7d4440d1e7 namespace=k8s.io Jun 25 18:32:10.840360 containerd[1444]: time="2024-06-25T18:32:10.840141627Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:32:11.520520 kubelet[2484]: E0625 18:32:11.520430 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:11.523982 containerd[1444]: time="2024-06-25T18:32:11.523833727Z" level=info msg="CreateContainer within sandbox \"6d81027929864ad5d88f87667fbcc0bd232038f2093f7c80b3d388620919de1c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 25 18:32:11.537088 containerd[1444]: time="2024-06-25T18:32:11.537035592Z" level=info msg="CreateContainer within sandbox \"6d81027929864ad5d88f87667fbcc0bd232038f2093f7c80b3d388620919de1c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ae75d33c55134940b83f0bbcbef90650c622490ef835adccbec65ed58b5702dd\"" Jun 25 18:32:11.537563 containerd[1444]: time="2024-06-25T18:32:11.537537354Z" level=info msg="StartContainer for \"ae75d33c55134940b83f0bbcbef90650c622490ef835adccbec65ed58b5702dd\"" Jun 25 18:32:11.568910 systemd[1]: Started cri-containerd-ae75d33c55134940b83f0bbcbef90650c622490ef835adccbec65ed58b5702dd.scope - libcontainer container ae75d33c55134940b83f0bbcbef90650c622490ef835adccbec65ed58b5702dd. Jun 25 18:32:11.602589 containerd[1444]: time="2024-06-25T18:32:11.602400153Z" level=info msg="StartContainer for \"ae75d33c55134940b83f0bbcbef90650c622490ef835adccbec65ed58b5702dd\" returns successfully" Jun 25 18:32:11.607941 systemd[1]: cri-containerd-ae75d33c55134940b83f0bbcbef90650c622490ef835adccbec65ed58b5702dd.scope: Deactivated successfully. Jun 25 18:32:11.624333 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae75d33c55134940b83f0bbcbef90650c622490ef835adccbec65ed58b5702dd-rootfs.mount: Deactivated successfully. 
Jun 25 18:32:11.628797 containerd[1444]: time="2024-06-25T18:32:11.628673122Z" level=info msg="shim disconnected" id=ae75d33c55134940b83f0bbcbef90650c622490ef835adccbec65ed58b5702dd namespace=k8s.io Jun 25 18:32:11.628797 containerd[1444]: time="2024-06-25T18:32:11.628724883Z" level=warning msg="cleaning up after shim disconnected" id=ae75d33c55134940b83f0bbcbef90650c622490ef835adccbec65ed58b5702dd namespace=k8s.io Jun 25 18:32:11.628797 containerd[1444]: time="2024-06-25T18:32:11.628733683Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:32:12.525318 kubelet[2484]: E0625 18:32:12.524857 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:12.528560 containerd[1444]: time="2024-06-25T18:32:12.528472565Z" level=info msg="CreateContainer within sandbox \"6d81027929864ad5d88f87667fbcc0bd232038f2093f7c80b3d388620919de1c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 25 18:32:12.544484 containerd[1444]: time="2024-06-25T18:32:12.544426641Z" level=info msg="CreateContainer within sandbox \"6d81027929864ad5d88f87667fbcc0bd232038f2093f7c80b3d388620919de1c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"33801b963005f1145a9b640275c500b48659d26126ce1fd67a3e809402232fd8\"" Jun 25 18:32:12.545056 containerd[1444]: time="2024-06-25T18:32:12.545020604Z" level=info msg="StartContainer for \"33801b963005f1145a9b640275c500b48659d26126ce1fd67a3e809402232fd8\"" Jun 25 18:32:12.574011 systemd[1]: Started cri-containerd-33801b963005f1145a9b640275c500b48659d26126ce1fd67a3e809402232fd8.scope - libcontainer container 33801b963005f1145a9b640275c500b48659d26126ce1fd67a3e809402232fd8. Jun 25 18:32:12.598922 systemd[1]: cri-containerd-33801b963005f1145a9b640275c500b48659d26126ce1fd67a3e809402232fd8.scope: Deactivated successfully. Jun 25 18:32:12.607480 containerd[1444]: time="2024-06-25T18:32:12.606830141Z" level=info msg="StartContainer for \"33801b963005f1145a9b640275c500b48659d26126ce1fd67a3e809402232fd8\" returns successfully" Jun 25 18:32:12.639622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33801b963005f1145a9b640275c500b48659d26126ce1fd67a3e809402232fd8-rootfs.mount: Deactivated successfully. 
Jun 25 18:32:12.644908 containerd[1444]: time="2024-06-25T18:32:12.644843084Z" level=info msg="shim disconnected" id=33801b963005f1145a9b640275c500b48659d26126ce1fd67a3e809402232fd8 namespace=k8s.io Jun 25 18:32:12.644908 containerd[1444]: time="2024-06-25T18:32:12.644901324Z" level=warning msg="cleaning up after shim disconnected" id=33801b963005f1145a9b640275c500b48659d26126ce1fd67a3e809402232fd8 namespace=k8s.io Jun 25 18:32:12.644908 containerd[1444]: time="2024-06-25T18:32:12.644911004Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:32:13.528326 kubelet[2484]: E0625 18:32:13.528293 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:13.532387 containerd[1444]: time="2024-06-25T18:32:13.532043964Z" level=info msg="CreateContainer within sandbox \"6d81027929864ad5d88f87667fbcc0bd232038f2093f7c80b3d388620919de1c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 25 18:32:13.574095 containerd[1444]: time="2024-06-25T18:32:13.574048841Z" level=info msg="CreateContainer within sandbox \"6d81027929864ad5d88f87667fbcc0bd232038f2093f7c80b3d388620919de1c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9a358066001534fe5aebcdf2da557df8e125978d140e04b71821f98b1355baf1\"" Jun 25 18:32:13.574815 containerd[1444]: time="2024-06-25T18:32:13.574737324Z" level=info msg="StartContainer for \"9a358066001534fe5aebcdf2da557df8e125978d140e04b71821f98b1355baf1\"" Jun 25 18:32:13.603255 systemd[1]: Started cri-containerd-9a358066001534fe5aebcdf2da557df8e125978d140e04b71821f98b1355baf1.scope - libcontainer container 9a358066001534fe5aebcdf2da557df8e125978d140e04b71821f98b1355baf1. Jun 25 18:32:13.625661 systemd[1]: cri-containerd-9a358066001534fe5aebcdf2da557df8e125978d140e04b71821f98b1355baf1.scope: Deactivated successfully. Jun 25 18:32:13.632056 containerd[1444]: time="2024-06-25T18:32:13.631986232Z" level=info msg="StartContainer for \"9a358066001534fe5aebcdf2da557df8e125978d140e04b71821f98b1355baf1\" returns successfully" Jun 25 18:32:13.647637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a358066001534fe5aebcdf2da557df8e125978d140e04b71821f98b1355baf1-rootfs.mount: Deactivated successfully. 
Jun 25 18:32:13.653694 containerd[1444]: time="2024-06-25T18:32:13.653640734Z" level=info msg="shim disconnected" id=9a358066001534fe5aebcdf2da557df8e125978d140e04b71821f98b1355baf1 namespace=k8s.io Jun 25 18:32:13.654250 containerd[1444]: time="2024-06-25T18:32:13.654041736Z" level=warning msg="cleaning up after shim disconnected" id=9a358066001534fe5aebcdf2da557df8e125978d140e04b71821f98b1355baf1 namespace=k8s.io Jun 25 18:32:13.654250 containerd[1444]: time="2024-06-25T18:32:13.654083736Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:32:14.364333 kubelet[2484]: E0625 18:32:14.364300 2484 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 25 18:32:14.533781 kubelet[2484]: E0625 18:32:14.533506 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:14.535923 containerd[1444]: time="2024-06-25T18:32:14.535815773Z" level=info msg="CreateContainer within sandbox \"6d81027929864ad5d88f87667fbcc0bd232038f2093f7c80b3d388620919de1c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 25 18:32:14.552325 containerd[1444]: time="2024-06-25T18:32:14.552280928Z" level=info msg="CreateContainer within sandbox \"6d81027929864ad5d88f87667fbcc0bd232038f2093f7c80b3d388620919de1c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ec30b90b02a3dfb1e12c7493a88048d05b397bc093eacbb3e2d75a7a072c9729\"" Jun 25 18:32:14.554002 containerd[1444]: time="2024-06-25T18:32:14.553957616Z" level=info msg="StartContainer for \"ec30b90b02a3dfb1e12c7493a88048d05b397bc093eacbb3e2d75a7a072c9729\"" Jun 25 18:32:14.582904 systemd[1]: Started cri-containerd-ec30b90b02a3dfb1e12c7493a88048d05b397bc093eacbb3e2d75a7a072c9729.scope - libcontainer container ec30b90b02a3dfb1e12c7493a88048d05b397bc093eacbb3e2d75a7a072c9729. Jun 25 18:32:14.614792 containerd[1444]: time="2024-06-25T18:32:14.614168292Z" level=info msg="StartContainer for \"ec30b90b02a3dfb1e12c7493a88048d05b397bc093eacbb3e2d75a7a072c9729\" returns successfully" Jun 25 18:32:14.907877 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jun 25 18:32:15.090521 kernel: hrtimer: interrupt took 4240099 ns Jun 25 18:32:15.542298 kubelet[2484]: E0625 18:32:15.542213 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:16.650565 kubelet[2484]: E0625 18:32:16.650520 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:16.798107 systemd[1]: run-containerd-runc-k8s.io-ec30b90b02a3dfb1e12c7493a88048d05b397bc093eacbb3e2d75a7a072c9729-runc.hKDN7d.mount: Deactivated successfully. 
Jun 25 18:32:17.773747 systemd-networkd[1378]: lxc_health: Link UP Jun 25 18:32:17.779077 systemd-networkd[1378]: lxc_health: Gained carrier Jun 25 18:32:18.652174 kubelet[2484]: E0625 18:32:18.652143 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:18.667348 kubelet[2484]: I0625 18:32:18.667316 2484 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-gmb8q" podStartSLOduration=8.667278139 podCreationTimestamp="2024-06-25 18:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:32:15.561794214 +0000 UTC m=+86.373156764" watchObservedRunningTime="2024-06-25 18:32:18.667278139 +0000 UTC m=+89.478640689" Jun 25 18:32:18.916500 systemd[1]: run-containerd-runc-k8s.io-ec30b90b02a3dfb1e12c7493a88048d05b397bc093eacbb3e2d75a7a072c9729-runc.w98RPR.mount: Deactivated successfully. Jun 25 18:32:19.514032 systemd-networkd[1378]: lxc_health: Gained IPv6LL Jun 25 18:32:19.550452 kubelet[2484]: E0625 18:32:19.550417 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:20.552025 kubelet[2484]: E0625 18:32:20.551666 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:21.038937 systemd[1]: run-containerd-runc-k8s.io-ec30b90b02a3dfb1e12c7493a88048d05b397bc093eacbb3e2d75a7a072c9729-runc.2sAQF1.mount: Deactivated successfully. Jun 25 18:32:21.287647 kubelet[2484]: E0625 18:32:21.285845 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:32:23.234057 sshd[4300]: pam_unix(sshd:session): session closed for user core Jun 25 18:32:23.236521 systemd[1]: sshd@25-10.0.0.73:22-10.0.0.1:53776.service: Deactivated successfully. Jun 25 18:32:23.238387 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 18:32:23.240124 systemd-logind[1424]: Session 26 logged out. Waiting for processes to exit. Jun 25 18:32:23.241049 systemd-logind[1424]: Removed session 26.