Jun 25 18:33:44.922501 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jun 25 18:33:44.922521 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Tue Jun 25 17:19:03 -00 2024
Jun 25 18:33:44.922530 kernel: KASLR enabled
Jun 25 18:33:44.922536 kernel: efi: EFI v2.7 by EDK II
Jun 25 18:33:44.922541 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb8fd018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jun 25 18:33:44.922547 kernel: random: crng init done
Jun 25 18:33:44.922554 kernel: ACPI: Early table checksum verification disabled
Jun 25 18:33:44.922560 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jun 25 18:33:44.922566 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jun 25 18:33:44.922573 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:33:44.922593 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:33:44.922599 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:33:44.922605 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:33:44.922611 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:33:44.922618 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:33:44.922627 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:33:44.922633 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:33:44.922639 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:33:44.922646 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jun 25 18:33:44.922652 kernel: NUMA: Failed to initialise from firmware
Jun 25 18:33:44.922658 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jun 25 18:33:44.922665 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jun 25 18:33:44.922671 kernel: Zone ranges:
Jun 25 18:33:44.922677 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jun 25 18:33:44.922683 kernel: DMA32 empty
Jun 25 18:33:44.922691 kernel: Normal empty
Jun 25 18:33:44.922697 kernel: Movable zone start for each node
Jun 25 18:33:44.922703 kernel: Early memory node ranges
Jun 25 18:33:44.922709 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jun 25 18:33:44.922716 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jun 25 18:33:44.922727 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jun 25 18:33:44.922734 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jun 25 18:33:44.922740 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jun 25 18:33:44.922747 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jun 25 18:33:44.922753 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jun 25 18:33:44.922760 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jun 25 18:33:44.922766 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jun 25 18:33:44.922773 kernel: psci: probing for conduit method from ACPI.
Jun 25 18:33:44.922780 kernel: psci: PSCIv1.1 detected in firmware.
Jun 25 18:33:44.922786 kernel: psci: Using standard PSCI v0.2 function IDs
Jun 25 18:33:44.922795 kernel: psci: Trusted OS migration not required
Jun 25 18:33:44.922801 kernel: psci: SMC Calling Convention v1.1
Jun 25 18:33:44.922808 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jun 25 18:33:44.922816 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Jun 25 18:33:44.922823 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Jun 25 18:33:44.922830 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jun 25 18:33:44.922836 kernel: Detected PIPT I-cache on CPU0
Jun 25 18:33:44.922843 kernel: CPU features: detected: GIC system register CPU interface
Jun 25 18:33:44.922850 kernel: CPU features: detected: Hardware dirty bit management
Jun 25 18:33:44.922857 kernel: CPU features: detected: Spectre-v4
Jun 25 18:33:44.922863 kernel: CPU features: detected: Spectre-BHB
Jun 25 18:33:44.922870 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jun 25 18:33:44.922877 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jun 25 18:33:44.922885 kernel: CPU features: detected: ARM erratum 1418040
Jun 25 18:33:44.922892 kernel: alternatives: applying boot alternatives
Jun 25 18:33:44.922899 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6069a8408a0ca7e7bc40a0bde7fe3ef89df2f98c4bdd2e7e7f9f8f3f8ad207f
Jun 25 18:33:44.922906 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 25 18:33:44.922913 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 25 18:33:44.922920 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 25 18:33:44.922930 kernel: Fallback order for Node 0: 0
Jun 25 18:33:44.922939 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jun 25 18:33:44.922947 kernel: Policy zone: DMA
Jun 25 18:33:44.922954 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 25 18:33:44.922960 kernel: software IO TLB: area num 4.
Jun 25 18:33:44.922968 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jun 25 18:33:44.922975 kernel: Memory: 2386852K/2572288K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 185436K reserved, 0K cma-reserved)
Jun 25 18:33:44.922982 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jun 25 18:33:44.922989 kernel: trace event string verifier disabled
Jun 25 18:33:44.922995 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 25 18:33:44.923002 kernel: rcu: RCU event tracing is enabled.
Jun 25 18:33:44.923009 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jun 25 18:33:44.923016 kernel: Trampoline variant of Tasks RCU enabled.
Jun 25 18:33:44.923023 kernel: Tracing variant of Tasks RCU enabled.
Jun 25 18:33:44.923029 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 25 18:33:44.923036 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jun 25 18:33:44.923043 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jun 25 18:33:44.923051 kernel: GICv3: 256 SPIs implemented
Jun 25 18:33:44.923058 kernel: GICv3: 0 Extended SPIs implemented
Jun 25 18:33:44.923064 kernel: Root IRQ handler: gic_handle_irq
Jun 25 18:33:44.923071 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jun 25 18:33:44.923078 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jun 25 18:33:44.923084 kernel: ITS [mem 0x08080000-0x0809ffff]
Jun 25 18:33:44.923091 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Jun 25 18:33:44.923098 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Jun 25 18:33:44.923104 kernel: GICv3: using LPI property table @0x00000000400f0000
Jun 25 18:33:44.923112 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jun 25 18:33:44.923118 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 25 18:33:44.923127 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 25 18:33:44.923134 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jun 25 18:33:44.923140 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jun 25 18:33:44.923147 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jun 25 18:33:44.923154 kernel: arm-pv: using stolen time PV
Jun 25 18:33:44.923161 kernel: Console: colour dummy device 80x25
Jun 25 18:33:44.923168 kernel: ACPI: Core revision 20230628
Jun 25 18:33:44.923175 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jun 25 18:33:44.923182 kernel: pid_max: default: 32768 minimum: 301
Jun 25 18:33:44.923189 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jun 25 18:33:44.923197 kernel: SELinux: Initializing.
Jun 25 18:33:44.923204 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 25 18:33:44.923211 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 25 18:33:44.923218 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 18:33:44.923225 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 18:33:44.923232 kernel: rcu: Hierarchical SRCU implementation.
Jun 25 18:33:44.923239 kernel: rcu: Max phase no-delay instances is 400.
Jun 25 18:33:44.923246 kernel: Platform MSI: ITS@0x8080000 domain created
Jun 25 18:33:44.923253 kernel: PCI/MSI: ITS@0x8080000 domain created
Jun 25 18:33:44.923261 kernel: Remapping and enabling EFI services.
Jun 25 18:33:44.923268 kernel: smp: Bringing up secondary CPUs ...
Jun 25 18:33:44.923274 kernel: Detected PIPT I-cache on CPU1
Jun 25 18:33:44.923281 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jun 25 18:33:44.923288 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jun 25 18:33:44.923295 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 25 18:33:44.923302 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jun 25 18:33:44.923309 kernel: Detected PIPT I-cache on CPU2
Jun 25 18:33:44.923316 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jun 25 18:33:44.923323 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jun 25 18:33:44.923332 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 25 18:33:44.923339 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jun 25 18:33:44.923350 kernel: Detected PIPT I-cache on CPU3
Jun 25 18:33:44.923359 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jun 25 18:33:44.923366 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jun 25 18:33:44.923374 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 25 18:33:44.923381 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jun 25 18:33:44.923388 kernel: smp: Brought up 1 node, 4 CPUs
Jun 25 18:33:44.923395 kernel: SMP: Total of 4 processors activated.
Jun 25 18:33:44.923404 kernel: CPU features: detected: 32-bit EL0 Support
Jun 25 18:33:44.923411 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jun 25 18:33:44.923418 kernel: CPU features: detected: Common not Private translations
Jun 25 18:33:44.923425 kernel: CPU features: detected: CRC32 instructions
Jun 25 18:33:44.923433 kernel: CPU features: detected: Enhanced Virtualization Traps
Jun 25 18:33:44.923440 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jun 25 18:33:44.923447 kernel: CPU features: detected: LSE atomic instructions
Jun 25 18:33:44.923454 kernel: CPU features: detected: Privileged Access Never
Jun 25 18:33:44.923463 kernel: CPU features: detected: RAS Extension Support
Jun 25 18:33:44.923470 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jun 25 18:33:44.923477 kernel: CPU: All CPU(s) started at EL1
Jun 25 18:33:44.923485 kernel: alternatives: applying system-wide alternatives
Jun 25 18:33:44.923492 kernel: devtmpfs: initialized
Jun 25 18:33:44.923499 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 25 18:33:44.923506 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jun 25 18:33:44.923514 kernel: pinctrl core: initialized pinctrl subsystem
Jun 25 18:33:44.923521 kernel: SMBIOS 3.0.0 present.
Jun 25 18:33:44.923530 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jun 25 18:33:44.923537 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 25 18:33:44.923544 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jun 25 18:33:44.923551 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jun 25 18:33:44.923559 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jun 25 18:33:44.923566 kernel: audit: initializing netlink subsys (disabled)
Jun 25 18:33:44.923573 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jun 25 18:33:44.923586 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 25 18:33:44.923593 kernel: cpuidle: using governor menu
Jun 25 18:33:44.923602 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jun 25 18:33:44.923609 kernel: ASID allocator initialised with 32768 entries
Jun 25 18:33:44.923616 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 25 18:33:44.923624 kernel: Serial: AMBA PL011 UART driver
Jun 25 18:33:44.923631 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jun 25 18:33:44.923638 kernel: Modules: 0 pages in range for non-PLT usage
Jun 25 18:33:44.923645 kernel: Modules: 509120 pages in range for PLT usage
Jun 25 18:33:44.923652 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 25 18:33:44.923660 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jun 25 18:33:44.923669 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jun 25 18:33:44.923676 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jun 25 18:33:44.923683 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 25 18:33:44.923690 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jun 25 18:33:44.923697 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jun 25 18:33:44.923705 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jun 25 18:33:44.923712 kernel: ACPI: Added _OSI(Module Device)
Jun 25 18:33:44.923719 kernel: ACPI: Added _OSI(Processor Device)
Jun 25 18:33:44.923730 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jun 25 18:33:44.923739 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 25 18:33:44.923746 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 25 18:33:44.923753 kernel: ACPI: Interpreter enabled
Jun 25 18:33:44.923761 kernel: ACPI: Using GIC for interrupt routing
Jun 25 18:33:44.923768 kernel: ACPI: MCFG table detected, 1 entries
Jun 25 18:33:44.923776 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jun 25 18:33:44.923783 kernel: printk: console [ttyAMA0] enabled
Jun 25 18:33:44.923790 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 25 18:33:44.923926 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jun 25 18:33:44.924002 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jun 25 18:33:44.924068 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jun 25 18:33:44.924131 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jun 25 18:33:44.924194 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jun 25 18:33:44.924204 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jun 25 18:33:44.924211 kernel: PCI host bridge to bus 0000:00
Jun 25 18:33:44.924282 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jun 25 18:33:44.924343 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jun 25 18:33:44.924401 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jun 25 18:33:44.924458 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 25 18:33:44.924536 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jun 25 18:33:44.924696 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jun 25 18:33:44.924776 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jun 25 18:33:44.924847 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jun 25 18:33:44.924911 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jun 25 18:33:44.924977 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jun 25 18:33:44.925043 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jun 25 18:33:44.925109 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jun 25 18:33:44.925171 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jun 25 18:33:44.925229 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jun 25 18:33:44.925290 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jun 25 18:33:44.925299 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jun 25 18:33:44.925307 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jun 25 18:33:44.925314 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jun 25 18:33:44.925322 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jun 25 18:33:44.925329 kernel: iommu: Default domain type: Translated
Jun 25 18:33:44.925336 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jun 25 18:33:44.925344 kernel: efivars: Registered efivars operations
Jun 25 18:33:44.925351 kernel: vgaarb: loaded
Jun 25 18:33:44.925360 kernel: clocksource: Switched to clocksource arch_sys_counter
Jun 25 18:33:44.925367 kernel: VFS: Disk quotas dquot_6.6.0
Jun 25 18:33:44.925375 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 25 18:33:44.925382 kernel: pnp: PnP ACPI init
Jun 25 18:33:44.925458 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jun 25 18:33:44.925469 kernel: pnp: PnP ACPI: found 1 devices
Jun 25 18:33:44.925476 kernel: NET: Registered PF_INET protocol family
Jun 25 18:33:44.925483 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 25 18:33:44.925492 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jun 25 18:33:44.925500 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 25 18:33:44.925507 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 25 18:33:44.925514 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jun 25 18:33:44.925522 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jun 25 18:33:44.925529 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 25 18:33:44.925537 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 25 18:33:44.925544 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 25 18:33:44.925551 kernel: PCI: CLS 0 bytes, default 64
Jun 25 18:33:44.925560 kernel: kvm [1]: HYP mode not available
Jun 25 18:33:44.925567 kernel: Initialise system trusted keyrings
Jun 25 18:33:44.925574 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jun 25 18:33:44.925590 kernel: Key type asymmetric registered
Jun 25 18:33:44.925598 kernel: Asymmetric key parser 'x509' registered
Jun 25 18:33:44.925605 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jun 25 18:33:44.925612 kernel: io scheduler mq-deadline registered
Jun 25 18:33:44.925620 kernel: io scheduler kyber registered
Jun 25 18:33:44.925628 kernel: io scheduler bfq registered
Jun 25 18:33:44.925637 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jun 25 18:33:44.925644 kernel: ACPI: button: Power Button [PWRB]
Jun 25 18:33:44.925652 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jun 25 18:33:44.925720 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jun 25 18:33:44.925737 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 25 18:33:44.925744 kernel: thunder_xcv, ver 1.0
Jun 25 18:33:44.925751 kernel: thunder_bgx, ver 1.0
Jun 25 18:33:44.925759 kernel: nicpf, ver 1.0
Jun 25 18:33:44.925766 kernel: nicvf, ver 1.0
Jun 25 18:33:44.925846 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jun 25 18:33:44.925910 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-06-25T18:33:44 UTC (1719340424)
Jun 25 18:33:44.925919 kernel: hid: raw HID events driver (C) Jiri Kosina
Jun 25 18:33:44.925927 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jun 25 18:33:44.925934 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jun 25 18:33:44.925942 kernel: watchdog: Hard watchdog permanently disabled
Jun 25 18:33:44.925949 kernel: NET: Registered PF_INET6 protocol family
Jun 25 18:33:44.925956 kernel: Segment Routing with IPv6
Jun 25 18:33:44.925966 kernel: In-situ OAM (IOAM) with IPv6
Jun 25 18:33:44.925973 kernel: NET: Registered PF_PACKET protocol family
Jun 25 18:33:44.925980 kernel: Key type dns_resolver registered
Jun 25 18:33:44.925987 kernel: registered taskstats version 1
Jun 25 18:33:44.925994 kernel: Loading compiled-in X.509 certificates
Jun 25 18:33:44.926002 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 751918e575d02f96b0daadd44b8f442a8c39ecd3'
Jun 25 18:33:44.926009 kernel: Key type .fscrypt registered
Jun 25 18:33:44.926016 kernel: Key type fscrypt-provisioning registered
Jun 25 18:33:44.926023 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 25 18:33:44.926032 kernel: ima: Allocated hash algorithm: sha1
Jun 25 18:33:44.926039 kernel: ima: No architecture policies found
Jun 25 18:33:44.926046 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jun 25 18:33:44.926054 kernel: clk: Disabling unused clocks
Jun 25 18:33:44.926061 kernel: Freeing unused kernel memory: 39040K
Jun 25 18:33:44.926068 kernel: Run /init as init process
Jun 25 18:33:44.926075 kernel: with arguments:
Jun 25 18:33:44.926083 kernel: /init
Jun 25 18:33:44.926090 kernel: with environment:
Jun 25 18:33:44.926098 kernel: HOME=/
Jun 25 18:33:44.926105 kernel: TERM=linux
Jun 25 18:33:44.926113 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 25 18:33:44.926121 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jun 25 18:33:44.926131 systemd[1]: Detected virtualization kvm.
Jun 25 18:33:44.926139 systemd[1]: Detected architecture arm64.
Jun 25 18:33:44.926146 systemd[1]: Running in initrd.
Jun 25 18:33:44.926154 systemd[1]: No hostname configured, using default hostname.
Jun 25 18:33:44.926163 systemd[1]: Hostname set to .
Jun 25 18:33:44.926171 systemd[1]: Initializing machine ID from VM UUID.
Jun 25 18:33:44.926179 systemd[1]: Queued start job for default target initrd.target.
Jun 25 18:33:44.926187 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 25 18:33:44.926194 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 18:33:44.926203 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 25 18:33:44.926211 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 25 18:33:44.926220 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 25 18:33:44.926228 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 25 18:33:44.926238 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 25 18:33:44.926246 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 25 18:33:44.926253 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 25 18:33:44.926261 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 25 18:33:44.926269 systemd[1]: Reached target paths.target - Path Units.
Jun 25 18:33:44.926278 systemd[1]: Reached target slices.target - Slice Units.
Jun 25 18:33:44.926286 systemd[1]: Reached target swap.target - Swaps.
Jun 25 18:33:44.926293 systemd[1]: Reached target timers.target - Timer Units.
Jun 25 18:33:44.926301 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 25 18:33:44.926309 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 25 18:33:44.926317 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 25 18:33:44.926325 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jun 25 18:33:44.926332 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 18:33:44.926340 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 25 18:33:44.926350 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 18:33:44.926358 systemd[1]: Reached target sockets.target - Socket Units.
Jun 25 18:33:44.926366 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 25 18:33:44.926373 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 25 18:33:44.926381 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 25 18:33:44.926389 systemd[1]: Starting systemd-fsck-usr.service...
Jun 25 18:33:44.926397 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 25 18:33:44.926405 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 25 18:33:44.926412 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:33:44.926422 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 25 18:33:44.926430 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 18:33:44.926438 systemd[1]: Finished systemd-fsck-usr.service.
Jun 25 18:33:44.926446 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 25 18:33:44.926456 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:33:44.926480 systemd-journald[238]: Collecting audit messages is disabled.
Jun 25 18:33:44.926500 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 25 18:33:44.926509 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 25 18:33:44.926520 systemd-journald[238]: Journal started
Jun 25 18:33:44.926538 systemd-journald[238]: Runtime Journal (/run/log/journal/295b72adbf83439aacc08c063674f64e) is 5.9M, max 47.3M, 41.4M free.
Jun 25 18:33:44.917868 systemd-modules-load[239]: Inserted module 'overlay'
Jun 25 18:33:44.929609 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 25 18:33:44.932599 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 25 18:33:44.932638 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 25 18:33:44.935039 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jun 25 18:33:44.935813 kernel: Bridge firewalling registered
Jun 25 18:33:44.936182 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 25 18:33:44.938468 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 25 18:33:44.940344 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jun 25 18:33:44.942709 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 25 18:33:44.946429 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 18:33:44.949406 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 25 18:33:44.950413 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 25 18:33:44.952022 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 18:33:44.956139 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 25 18:33:44.967976 dracut-cmdline[269]: dracut-dracut-053
Jun 25 18:33:44.972962 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6069a8408a0ca7e7bc40a0bde7fe3ef89df2f98c4bdd2e7e7f9f8f3f8ad207f
Jun 25 18:33:44.981089 systemd-resolved[274]: Positive Trust Anchors:
Jun 25 18:33:44.981108 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 25 18:33:44.981138 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jun 25 18:33:44.985808 systemd-resolved[274]: Defaulting to hostname 'linux'.
Jun 25 18:33:44.987809 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 25 18:33:44.988731 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 25 18:33:45.044617 kernel: SCSI subsystem initialized
Jun 25 18:33:45.048605 kernel: Loading iSCSI transport class v2.0-870.
Jun 25 18:33:45.056614 kernel: iscsi: registered transport (tcp)
Jun 25 18:33:45.068828 kernel: iscsi: registered transport (qla4xxx)
Jun 25 18:33:45.068853 kernel: QLogic iSCSI HBA Driver
Jun 25 18:33:45.109169 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 25 18:33:45.118743 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 25 18:33:45.138525 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 25 18:33:45.138586 kernel: device-mapper: uevent: version 1.0.3
Jun 25 18:33:45.139606 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jun 25 18:33:45.194973 kernel: raid6: neonx8 gen() 15773 MB/s
Jun 25 18:33:45.212449 kernel: raid6: neonx4 gen() 15282 MB/s
Jun 25 18:33:45.228620 kernel: raid6: neonx2 gen() 13204 MB/s
Jun 25 18:33:45.245616 kernel: raid6: neonx1 gen() 10461 MB/s
Jun 25 18:33:45.262605 kernel: raid6: int64x8 gen() 6953 MB/s
Jun 25 18:33:45.279611 kernel: raid6: int64x4 gen() 7331 MB/s
Jun 25 18:33:45.296613 kernel: raid6: int64x2 gen() 6125 MB/s
Jun 25 18:33:45.313622 kernel: raid6: int64x1 gen() 5055 MB/s
Jun 25 18:33:45.313678 kernel: raid6: using algorithm neonx8 gen() 15773 MB/s
Jun 25 18:33:45.330605 kernel: raid6: .... xor() 11912 MB/s, rmw enabled
Jun 25 18:33:45.330619 kernel: raid6: using neon recovery algorithm
Jun 25 18:33:45.335601 kernel: xor: measuring software checksum speed
Jun 25 18:33:45.336597 kernel: 8regs : 19730 MB/sec
Jun 25 18:33:45.337764 kernel: 32regs : 19697 MB/sec
Jun 25 18:33:45.337783 kernel: arm64_neon : 27179 MB/sec
Jun 25 18:33:45.337799 kernel: xor: using function: arm64_neon (27179 MB/sec)
Jun 25 18:33:45.397944 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 25 18:33:45.411298 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 25 18:33:45.422727 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 18:33:45.443933 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Jun 25 18:33:45.447498 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 18:33:45.451147 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 25 18:33:45.467504 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Jun 25 18:33:45.497101 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 25 18:33:45.509614 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 25 18:33:45.550068 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 25 18:33:45.563779 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 25 18:33:45.578883 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 25 18:33:45.580459 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 25 18:33:45.582701 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 25 18:33:45.585661 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 25 18:33:45.591740 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 25 18:33:45.598617 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jun 25 18:33:45.606592 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jun 25 18:33:45.606899 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jun 25 18:33:45.606912 kernel: GPT:9289727 != 19775487
Jun 25 18:33:45.606930 kernel: GPT:Alternate GPT header not at the end of the disk.
Jun 25 18:33:45.606940 kernel: GPT:9289727 != 19775487
Jun 25 18:33:45.606949 kernel: GPT: Use GNU Parted to correct GPT errors.
Jun 25 18:33:45.606960 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 25 18:33:45.600882 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 25 18:33:45.601018 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 18:33:45.605135 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 25 18:33:45.608835 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 25 18:33:45.608970 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:33:45.610541 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:33:45.622868 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:33:45.624098 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 25 18:33:45.635789 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:33:45.645118 kernel: BTRFS: device fsid c80091a6-4bf3-4ad3-8e1c-e6eb918765f9 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (523)
Jun 25 18:33:45.646350 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 25 18:33:45.649602 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (507)
Jun 25 18:33:45.660429 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jun 25 18:33:45.664687 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jun 25 18:33:45.668134 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jun 25 18:33:45.669055 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jun 25 18:33:45.672344 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 18:33:45.678137 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 25 18:33:45.690752 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 25 18:33:45.697780 disk-uuid[563]: Primary Header is updated.
Jun 25 18:33:45.697780 disk-uuid[563]: Secondary Entries is updated.
Jun 25 18:33:45.697780 disk-uuid[563]: Secondary Header is updated.
Jun 25 18:33:45.701606 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 25 18:33:46.716613 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 25 18:33:46.717297 disk-uuid[564]: The operation has completed successfully.
Jun 25 18:33:46.760201 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 25 18:33:46.760291 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 25 18:33:46.776792 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 25 18:33:46.780442 sh[579]: Success
Jun 25 18:33:46.799108 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jun 25 18:33:46.843937 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 25 18:33:46.856831 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 25 18:33:46.861779 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 25 18:33:46.875448 kernel: BTRFS info (device dm-0): first mount of filesystem c80091a6-4bf3-4ad3-8e1c-e6eb918765f9
Jun 25 18:33:46.875501 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jun 25 18:33:46.875521 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jun 25 18:33:46.875531 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jun 25 18:33:46.876572 kernel: BTRFS info (device dm-0): using free space tree
Jun 25 18:33:46.890606 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 25 18:33:46.891503 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 25 18:33:46.902289 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 25 18:33:46.903771 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 25 18:33:46.912183 kernel: BTRFS info (device vda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704
Jun 25 18:33:46.912226 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jun 25 18:33:46.912237 kernel: BTRFS info (device vda6): using free space tree
Jun 25 18:33:46.917747 kernel: BTRFS info (device vda6): auto enabling async discard
Jun 25 18:33:46.925340 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jun 25 18:33:46.926643 kernel: BTRFS info (device vda6): last unmount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704
Jun 25 18:33:46.933838 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 25 18:33:46.940758 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 25 18:33:47.011840 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 25 18:33:47.023772 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 25 18:33:47.043014 ignition[674]: Ignition 2.19.0
Jun 25 18:33:47.043104 ignition[674]: Stage: fetch-offline
Jun 25 18:33:47.043161 ignition[674]: no configs at "/usr/lib/ignition/base.d"
Jun 25 18:33:47.043171 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 25 18:33:47.043294 ignition[674]: parsed url from cmdline: ""
Jun 25 18:33:47.043298 ignition[674]: no config URL provided
Jun 25 18:33:47.043303 ignition[674]: reading system config file "/usr/lib/ignition/user.ign"
Jun 25 18:33:47.043311 ignition[674]: no config at "/usr/lib/ignition/user.ign"
Jun 25 18:33:47.043339 ignition[674]: op(1): [started] loading QEMU firmware config module
Jun 25 18:33:47.043344 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jun 25 18:33:47.050990 systemd-networkd[772]: lo: Link UP
Jun 25 18:33:47.051001 systemd-networkd[772]: lo: Gained carrier
Jun 25 18:33:47.051885 systemd-networkd[772]: Enumeration completed
Jun 25 18:33:47.052042 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 25 18:33:47.052354 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:33:47.052357 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 25 18:33:47.053451 systemd-networkd[772]: eth0: Link UP
Jun 25 18:33:47.053454 systemd-networkd[772]: eth0: Gained carrier
Jun 25 18:33:47.053460 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:33:47.053900 systemd[1]: Reached target network.target - Network.
Jun 25 18:33:47.061128 ignition[674]: op(1): [finished] loading QEMU firmware config module
Jun 25 18:33:47.074619 systemd-networkd[772]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jun 25 18:33:47.101204 ignition[674]: parsing config with SHA512: d14e11ae68a191d32f88b04ce07fcd9bf0a52865621bffb78c6bed91b38a3fa3c0eb3faa001b7dd794985f6391e67955aa3e4c7022a8a4ae1e58e49f803f36e4
Jun 25 18:33:47.105657 unknown[674]: fetched base config from "system"
Jun 25 18:33:47.105669 unknown[674]: fetched user config from "qemu"
Jun 25 18:33:47.106122 ignition[674]: fetch-offline: fetch-offline passed
Jun 25 18:33:47.106179 ignition[674]: Ignition finished successfully
Jun 25 18:33:47.108972 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 25 18:33:47.110414 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jun 25 18:33:47.117870 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 25 18:33:47.128798 ignition[778]: Ignition 2.19.0
Jun 25 18:33:47.128808 ignition[778]: Stage: kargs
Jun 25 18:33:47.128958 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Jun 25 18:33:47.128968 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 25 18:33:47.129834 ignition[778]: kargs: kargs passed
Jun 25 18:33:47.129875 ignition[778]: Ignition finished successfully
Jun 25 18:33:47.132555 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 25 18:33:47.135281 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 25 18:33:47.147409 ignition[786]: Ignition 2.19.0
Jun 25 18:33:47.147418 ignition[786]: Stage: disks
Jun 25 18:33:47.147568 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jun 25 18:33:47.147577 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 25 18:33:47.149660 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 25 18:33:47.148408 ignition[786]: disks: disks passed
Jun 25 18:33:47.151222 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 25 18:33:47.148449 ignition[786]: Ignition finished successfully
Jun 25 18:33:47.152315 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 25 18:33:47.153515 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 25 18:33:47.155037 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 25 18:33:47.156164 systemd[1]: Reached target basic.target - Basic System.
Jun 25 18:33:47.171809 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 25 18:33:47.183930 systemd-fsck[797]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jun 25 18:33:47.188899 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 25 18:33:47.199686 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 25 18:33:47.243228 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 25 18:33:47.244672 kernel: EXT4-fs (vda9): mounted filesystem 91548e21-ce72-437e-94b9-d3fed380163a r/w with ordered data mode. Quota mode: none.
Jun 25 18:33:47.244334 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 25 18:33:47.253742 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 25 18:33:47.255966 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 25 18:33:47.256836 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jun 25 18:33:47.256872 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 25 18:33:47.256893 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 25 18:33:47.262059 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 25 18:33:47.263677 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 25 18:33:47.272270 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (805)
Jun 25 18:33:47.272306 kernel: BTRFS info (device vda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704
Jun 25 18:33:47.272318 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jun 25 18:33:47.273599 kernel: BTRFS info (device vda6): using free space tree
Jun 25 18:33:47.282599 kernel: BTRFS info (device vda6): auto enabling async discard
Jun 25 18:33:47.283710 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 25 18:33:47.324294 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory
Jun 25 18:33:47.327970 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory
Jun 25 18:33:47.332842 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory
Jun 25 18:33:47.336616 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 25 18:33:47.420269 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 25 18:33:47.429732 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 25 18:33:47.432113 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 25 18:33:47.437620 kernel: BTRFS info (device vda6): last unmount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704
Jun 25 18:33:47.455934 ignition[919]: INFO : Ignition 2.19.0
Jun 25 18:33:47.455934 ignition[919]: INFO : Stage: mount
Jun 25 18:33:47.457283 ignition[919]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 25 18:33:47.457283 ignition[919]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 25 18:33:47.457283 ignition[919]: INFO : mount: mount passed
Jun 25 18:33:47.457283 ignition[919]: INFO : Ignition finished successfully
Jun 25 18:33:47.458796 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 25 18:33:47.473749 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 25 18:33:47.474679 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jun 25 18:33:47.873127 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 25 18:33:47.881769 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 25 18:33:47.890381 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (931)
Jun 25 18:33:47.893040 kernel: BTRFS info (device vda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704
Jun 25 18:33:47.893077 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jun 25 18:33:47.893088 kernel: BTRFS info (device vda6): using free space tree
Jun 25 18:33:47.897612 kernel: BTRFS info (device vda6): auto enabling async discard
Jun 25 18:33:47.898689 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 25 18:33:47.916152 ignition[948]: INFO : Ignition 2.19.0
Jun 25 18:33:47.916152 ignition[948]: INFO : Stage: files
Jun 25 18:33:47.917490 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 25 18:33:47.917490 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 25 18:33:47.917490 ignition[948]: DEBUG : files: compiled without relabeling support, skipping
Jun 25 18:33:47.920446 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jun 25 18:33:47.920446 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jun 25 18:33:47.922457 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jun 25 18:33:47.922457 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jun 25 18:33:47.922457 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jun 25 18:33:47.921816 unknown[948]: wrote ssh authorized keys file for user: core
Jun 25 18:33:47.926499 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jun 25 18:33:47.926499 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jun 25 18:33:48.180941 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jun 25 18:33:48.199058 systemd-networkd[772]: eth0: Gained IPv6LL
Jun 25 18:33:48.244066 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jun 25 18:33:48.245725 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 25 18:33:48.245725 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jun 25 18:33:48.606927 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jun 25 18:33:48.655642 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 25 18:33:48.657163 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jun 25 18:33:48.657163 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jun 25 18:33:48.657163 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 25 18:33:48.657163 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 25 18:33:48.657163 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 25 18:33:48.657163 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 25 18:33:48.657163 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 25 18:33:48.657163 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
"/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:33:48.657163 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:33:48.657163 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:33:48.657163 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jun 25 18:33:48.657163 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jun 25 18:33:48.657163 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jun 25 18:33:48.657163 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jun 25 18:33:48.889162 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 25 18:33:49.134396 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jun 25 18:33:49.134396 ignition[948]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 25 18:33:49.137767 ignition[948]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:33:49.137767 ignition[948]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:33:49.137767 ignition[948]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 25 18:33:49.137767 ignition[948]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jun 25 18:33:49.137767 ignition[948]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 18:33:49.137767 ignition[948]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 18:33:49.137767 ignition[948]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jun 25 18:33:49.137767 ignition[948]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jun 25 18:33:49.156313 ignition[948]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 18:33:49.160345 ignition[948]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 18:33:49.161721 ignition[948]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jun 25 18:33:49.161721 ignition[948]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jun 25 18:33:49.161721 ignition[948]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:33:49.161721 ignition[948]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:33:49.161721 
Jun 25 18:33:49.161721 ignition[948]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 25 18:33:49.161721 ignition[948]: INFO : files: files passed
Jun 25 18:33:49.161721 ignition[948]: INFO : Ignition finished successfully
Jun 25 18:33:49.162654 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 25 18:33:49.172723 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 25 18:33:49.175934 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 25 18:33:49.178853 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 25 18:33:49.178954 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 25 18:33:49.181884 initrd-setup-root-after-ignition[977]: grep: /sysroot/oem/oem-release: No such file or directory
Jun 25 18:33:49.185190 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 25 18:33:49.185190 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 25 18:33:49.188059 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 25 18:33:49.187183 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 25 18:33:49.189046 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 25 18:33:49.191690 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 25 18:33:49.212784 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 25 18:33:49.212873 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 25 18:33:49.214974 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 25 18:33:49.216194 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 25 18:33:49.217698 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 25 18:33:49.219757 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 25 18:33:49.233057 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 25 18:33:49.243783 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 25 18:33:49.251145 systemd[1]: Stopped target network.target - Network.
Jun 25 18:33:49.251993 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 25 18:33:49.253347 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 25 18:33:49.254281 systemd[1]: Stopped target timers.target - Timer Units.
Jun 25 18:33:49.255755 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 25 18:33:49.255860 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 25 18:33:49.258209 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 25 18:33:49.259118 systemd[1]: Stopped target basic.target - Basic System.
Jun 25 18:33:49.260549 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 25 18:33:49.262020 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 25 18:33:49.263540 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 25 18:33:49.264993 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 18:33:49.266292 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:33:49.268074 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 18:33:49.269524 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 18:33:49.271083 systemd[1]: Stopped target swap.target - Swaps. Jun 25 18:33:49.272446 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 18:33:49.272549 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:33:49.274546 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:33:49.276022 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:33:49.277370 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 25 18:33:49.281651 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:33:49.282812 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 18:33:49.282918 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 18:33:49.285377 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 18:33:49.285498 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:33:49.287054 systemd[1]: Stopped target paths.target - Path Units. Jun 25 18:33:49.288294 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 18:33:49.292638 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:33:49.293910 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 18:33:49.295904 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 18:33:49.297209 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 18:33:49.297292 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:33:49.298471 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 18:33:49.298547 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:33:49.299936 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 18:33:49.300039 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:33:49.301518 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 18:33:49.301636 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 18:33:49.312752 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 18:33:49.313536 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 18:33:49.313688 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:33:49.316393 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 18:33:49.317828 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 18:33:49.319344 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 18:33:49.321291 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jun 25 18:33:49.324649 ignition[1003]: INFO : Ignition 2.19.0 Jun 25 18:33:49.324649 ignition[1003]: INFO : Stage: umount Jun 25 18:33:49.324649 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:33:49.324649 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:33:49.324649 ignition[1003]: INFO : umount: umount passed Jun 25 18:33:49.324649 ignition[1003]: INFO : Ignition finished successfully Jun 25 18:33:49.321671 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:33:49.324845 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 18:33:49.324962 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:33:49.332613 systemd-networkd[772]: eth0: DHCPv6 lease lost Jun 25 18:33:49.333770 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 18:33:49.333893 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 18:33:49.336299 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 18:33:49.336416 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 18:33:49.338662 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 18:33:49.338824 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 18:33:49.341808 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 18:33:49.343267 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 18:33:49.343319 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:33:49.344421 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 18:33:49.344470 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 18:33:49.346205 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 18:33:49.346249 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 18:33:49.347660 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 18:33:49.347704 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 18:33:49.349353 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 25 18:33:49.349402 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 18:33:49.361689 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 18:33:49.362956 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 18:33:49.363012 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:33:49.364761 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:33:49.364806 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:33:49.366336 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 18:33:49.366380 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 18:33:49.367791 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 18:33:49.367828 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:33:49.369679 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:33:49.371993 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 18:33:49.372079 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jun 25 18:33:49.381517 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 18:33:49.381642 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 18:33:49.385186 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 18:33:49.385321 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:33:49.387282 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 18:33:49.387319 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 18:33:49.388958 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 18:33:49.388989 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:33:49.390463 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 18:33:49.390509 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:33:49.392988 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 18:33:49.393035 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 18:33:49.395546 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:33:49.395613 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:33:49.401736 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 18:33:49.402704 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 18:33:49.402763 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:33:49.404438 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:33:49.404477 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:33:49.408388 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 18:33:49.408484 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 18:33:49.411760 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 18:33:49.411848 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 18:33:49.413543 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 18:33:49.414523 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 18:33:49.414577 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 18:33:49.416775 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 18:33:49.426256 systemd[1]: Switching root. Jun 25 18:33:49.454430 systemd-journald[238]: Journal stopped Jun 25 18:33:50.188606 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
Jun 25 18:33:50.188661 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 18:33:50.188674 kernel: SELinux: policy capability open_perms=1 Jun 25 18:33:50.188686 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 18:33:50.188695 kernel: SELinux: policy capability always_check_network=0 Jun 25 18:33:50.188715 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 18:33:50.188733 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 18:33:50.188743 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 18:33:50.188753 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 18:33:50.188762 kernel: audit: type=1403 audit(1719340429.612:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 18:33:50.188778 systemd[1]: Successfully loaded SELinux policy in 31.863ms. Jun 25 18:33:50.188791 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.597ms. Jun 25 18:33:50.188803 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:33:50.188815 systemd[1]: Detected virtualization kvm. Jun 25 18:33:50.188825 systemd[1]: Detected architecture arm64. Jun 25 18:33:50.188835 systemd[1]: Detected first boot. Jun 25 18:33:50.188845 systemd[1]: Initializing machine ID from VM UUID. Jun 25 18:33:50.188856 zram_generator::config[1047]: No configuration found. Jun 25 18:33:50.188870 systemd[1]: Populated /etc with preset unit settings. Jun 25 18:33:50.188881 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 18:33:50.188892 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 18:33:50.188915 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 18:33:50.188927 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 18:33:50.188938 kernel: hrtimer: interrupt took 5756240 ns Jun 25 18:33:50.188950 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 18:33:50.188960 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 18:33:50.188972 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 18:33:50.188984 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 18:33:50.188999 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 18:33:50.189009 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 18:33:50.189021 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 18:33:50.189032 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:33:50.189042 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:33:50.189053 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 18:33:50.189064 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 18:33:50.189074 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jun 25 18:33:50.189087 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:33:50.189098 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jun 25 18:33:50.189112 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:33:50.189123 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 18:33:50.189133 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 18:33:50.189144 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 18:33:50.189154 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 18:33:50.189166 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:33:50.189178 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:33:50.189190 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:33:50.189200 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:33:50.189211 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 18:33:50.189222 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 18:33:50.189233 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:33:50.189244 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:33:50.189255 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:33:50.189265 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 18:33:50.189278 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 18:33:50.189288 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 18:33:50.189299 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 18:33:50.189310 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 18:33:50.189321 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 18:33:50.189331 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 18:33:50.189342 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 18:33:50.189353 systemd[1]: Reached target machines.target - Containers. Jun 25 18:33:50.189365 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 18:33:50.189376 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:33:50.189387 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:33:50.189397 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 18:33:50.189408 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:33:50.189419 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:33:50.189430 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:33:50.189440 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 18:33:50.189451 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jun 25 18:33:50.189463 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 18:33:50.189473 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 18:33:50.189484 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 18:33:50.189495 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 18:33:50.189505 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 18:33:50.189515 kernel: fuse: init (API version 7.39) Jun 25 18:33:50.189525 kernel: loop: module loaded Jun 25 18:33:50.189535 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:33:50.189547 kernel: ACPI: bus type drm_connector registered Jun 25 18:33:50.189557 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:33:50.189567 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 18:33:50.189585 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 18:33:50.189597 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:33:50.189623 systemd-journald[1117]: Collecting audit messages is disabled. Jun 25 18:33:50.189645 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 18:33:50.189656 systemd[1]: Stopped verity-setup.service. Jun 25 18:33:50.189669 systemd-journald[1117]: Journal started Jun 25 18:33:50.189692 systemd-journald[1117]: Runtime Journal (/run/log/journal/295b72adbf83439aacc08c063674f64e) is 5.9M, max 47.3M, 41.4M free. Jun 25 18:33:49.993281 systemd[1]: Queued start job for default target multi-user.target. Jun 25 18:33:50.013618 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 25 18:33:50.013980 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 18:33:50.191611 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:33:50.192304 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 18:33:50.193312 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 18:33:50.194305 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 18:33:50.195250 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 18:33:50.196544 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 18:33:50.197632 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 18:33:50.199648 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 18:33:50.201043 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:33:50.202460 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 18:33:50.202630 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 18:33:50.203967 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:33:50.204101 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:33:50.205450 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:33:50.205691 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:33:50.206932 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:33:50.207066 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jun 25 18:33:50.208529 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 18:33:50.208720 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 18:33:50.209993 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:33:50.210122 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:33:50.211450 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:33:50.212794 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 18:33:50.214237 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 18:33:50.226790 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 18:33:50.242731 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 18:33:50.244759 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 18:33:50.245844 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 18:33:50.245886 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:33:50.247776 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jun 25 18:33:50.249806 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 25 18:33:50.251794 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 18:33:50.252848 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:33:50.254227 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 18:33:50.256064 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 18:33:50.257157 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:33:50.260745 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 25 18:33:50.261953 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:33:50.264792 systemd-journald[1117]: Time spent on flushing to /var/log/journal/295b72adbf83439aacc08c063674f64e is 32.245ms for 855 entries. Jun 25 18:33:50.264792 systemd-journald[1117]: System Journal (/var/log/journal/295b72adbf83439aacc08c063674f64e) is 8.0M, max 195.6M, 187.6M free. Jun 25 18:33:50.313511 systemd-journald[1117]: Received client request to flush runtime journal. Jun 25 18:33:50.313561 kernel: loop0: detected capacity change from 0 to 59688 Jun 25 18:33:50.313594 kernel: block loop0: the capability attribute has been deprecated. Jun 25 18:33:50.313790 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 18:33:50.265764 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:33:50.267553 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 18:33:50.270499 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 18:33:50.275051 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:33:50.276473 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Jun 25 18:33:50.277873 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 18:33:50.279126 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 25 18:33:50.293828 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 18:33:50.295266 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 25 18:33:50.298293 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 18:33:50.302906 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 18:33:50.314321 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:33:50.320625 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 18:33:50.327133 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jun 25 18:33:50.331284 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 18:33:50.332013 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 18:33:50.344518 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 18:33:50.347615 kernel: loop1: detected capacity change from 0 to 194096 Jun 25 18:33:50.355683 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:33:50.376091 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jun 25 18:33:50.376421 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jun 25 18:33:50.376603 kernel: loop2: detected capacity change from 0 to 113712 Jun 25 18:33:50.380839 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:33:50.420621 kernel: loop3: detected capacity change from 0 to 59688 Jun 25 18:33:50.427606 kernel: loop4: detected capacity change from 0 to 194096 Jun 25 18:33:50.440621 kernel: loop5: detected capacity change from 0 to 113712 Jun 25 18:33:50.443895 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jun 25 18:33:50.444276 (sd-merge)[1182]: Merged extensions into '/usr'. Jun 25 18:33:50.448883 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)... Jun 25 18:33:50.448900 systemd[1]: Reloading... Jun 25 18:33:50.498607 zram_generator::config[1207]: No configuration found. Jun 25 18:33:50.542385 ldconfig[1152]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 18:33:50.595036 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:33:50.633278 systemd[1]: Reloading finished in 184 ms. Jun 25 18:33:50.660374 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 18:33:50.661559 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 18:33:50.672883 systemd[1]: Starting ensure-sysext.service... Jun 25 18:33:50.674445 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:33:50.680794 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... 
Jun 25 18:33:50.680809 systemd[1]: Reloading... Jun 25 18:33:50.690470 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 18:33:50.690757 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 18:33:50.691364 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 18:33:50.691560 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Jun 25 18:33:50.691618 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Jun 25 18:33:50.694076 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:33:50.694175 systemd-tmpfiles[1241]: Skipping /boot Jun 25 18:33:50.700666 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:33:50.700782 systemd-tmpfiles[1241]: Skipping /boot Jun 25 18:33:50.725600 zram_generator::config[1266]: No configuration found. Jun 25 18:33:50.811890 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:33:50.850248 systemd[1]: Reloading finished in 169 ms. Jun 25 18:33:50.867717 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 18:33:50.882031 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:33:50.888947 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:33:50.891262 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 18:33:50.893499 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 18:33:50.896857 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:33:50.899851 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:33:50.902722 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 18:33:50.905370 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:33:50.908914 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:33:50.910989 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:33:50.914260 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:33:50.915328 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:33:50.918823 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:33:50.919020 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:33:50.920947 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 18:33:50.922382 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:33:50.923605 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:33:50.925114 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jun 25 18:33:50.925237 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:33:50.929695 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 18:33:50.931155 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 18:33:50.932828 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:33:50.932980 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:33:50.939786 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:33:50.949904 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:33:50.952867 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:33:50.957936 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:33:50.959737 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:33:50.960781 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:33:50.961755 systemd-udevd[1308]: Using default interface naming scheme 'v255'. Jun 25 18:33:50.964842 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 18:33:50.967695 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 18:33:50.969022 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:33:50.969180 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:33:50.970480 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:33:50.970637 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:33:50.971881 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 18:33:50.973200 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:33:50.973326 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:33:50.974842 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:33:50.974956 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:33:50.978063 systemd[1]: Finished ensure-sysext.service. Jun 25 18:33:50.982990 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:33:50.992641 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 18:33:51.006781 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:33:51.007516 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:33:51.007610 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:33:51.009889 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 18:33:51.010665 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 18:33:51.016354 augenrules[1368]: No rules Jun 25 18:33:51.017230 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jun 25 18:33:51.033699 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jun 25 18:33:51.036608 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1358) Jun 25 18:33:51.053077 systemd-resolved[1307]: Positive Trust Anchors: Jun 25 18:33:51.055777 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1347) Jun 25 18:33:51.056197 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:33:51.056233 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:33:51.068397 systemd-resolved[1307]: Defaulting to hostname 'linux'. Jun 25 18:33:51.074279 systemd-networkd[1363]: lo: Link UP Jun 25 18:33:51.074287 systemd-networkd[1363]: lo: Gained carrier Jun 25 18:33:51.074996 systemd-networkd[1363]: Enumeration completed Jun 25 18:33:51.075093 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:33:51.075491 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:33:51.075495 systemd-networkd[1363]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:33:51.076057 systemd-networkd[1363]: eth0: Link UP Jun 25 18:33:51.076062 systemd-networkd[1363]: eth0: Gained carrier Jun 25 18:33:51.076075 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:33:51.076214 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:33:51.077197 systemd[1]: Reached target network.target - Network. Jun 25 18:33:51.079722 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:33:51.086807 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 18:33:51.093656 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 18:33:51.095653 systemd-networkd[1363]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 18:33:51.095776 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 18:33:51.109487 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 18:33:51.110687 systemd-timesyncd[1369]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jun 25 18:33:51.110803 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 18:33:51.111013 systemd-timesyncd[1369]: Initial clock synchronization to Tue 2024-06-25 18:33:50.808189 UTC. Jun 25 18:33:51.111498 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jun 25 18:33:51.121361 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 18:33:51.145816 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:33:51.152613 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 18:33:51.156669 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 18:33:51.171109 lvm[1395]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:33:51.186564 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:33:51.204458 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 18:33:51.206057 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:33:51.207007 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:33:51.207910 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 18:33:51.208964 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 18:33:51.210000 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 18:33:51.211012 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 18:33:51.211911 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 18:33:51.213009 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 18:33:51.213042 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:33:51.213810 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:33:51.215354 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 18:33:51.217360 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 18:33:51.229543 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 18:33:51.231375 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 18:33:51.232669 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 18:33:51.233488 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:33:51.234224 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:33:51.234925 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:33:51.234956 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:33:51.235809 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 18:33:51.237491 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 18:33:51.238645 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:33:51.240481 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 18:33:51.243847 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 18:33:51.247730 jq[1405]: false Jun 25 18:33:51.247016 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jun 25 18:33:51.247987 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 18:33:51.249593 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 18:33:51.251671 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 18:33:51.253793 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 18:33:51.259620 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 18:33:51.261096 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 18:33:51.261464 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 18:33:51.267176 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 18:33:51.268991 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 18:33:51.269472 dbus-daemon[1404]: [system] SELinux support is enabled Jun 25 18:33:51.270404 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 18:33:51.275675 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 18:33:51.278428 jq[1417]: true Jun 25 18:33:51.279908 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 18:33:51.280055 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 18:33:51.281006 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 18:33:51.281150 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 18:33:51.283014 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 18:33:51.283650 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 18:33:51.285541 extend-filesystems[1406]: Found loop3 Jun 25 18:33:51.289691 extend-filesystems[1406]: Found loop4 Jun 25 18:33:51.289691 extend-filesystems[1406]: Found loop5 Jun 25 18:33:51.289691 extend-filesystems[1406]: Found vda Jun 25 18:33:51.289691 extend-filesystems[1406]: Found vda1 Jun 25 18:33:51.289691 extend-filesystems[1406]: Found vda2 Jun 25 18:33:51.289691 extend-filesystems[1406]: Found vda3 Jun 25 18:33:51.289691 extend-filesystems[1406]: Found usr Jun 25 18:33:51.289691 extend-filesystems[1406]: Found vda4 Jun 25 18:33:51.289691 extend-filesystems[1406]: Found vda6 Jun 25 18:33:51.289691 extend-filesystems[1406]: Found vda7 Jun 25 18:33:51.289691 extend-filesystems[1406]: Found vda9 Jun 25 18:33:51.289691 extend-filesystems[1406]: Checking size of /dev/vda9 Jun 25 18:33:51.293014 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 18:33:51.293057 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 18:33:51.304268 jq[1424]: true Jun 25 18:33:51.295427 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 18:33:51.295447 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jun 25 18:33:51.309824 (ntainerd)[1425]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 25 18:33:51.318284 extend-filesystems[1406]: Resized partition /dev/vda9 Jun 25 18:33:51.319218 tar[1423]: linux-arm64/helm Jun 25 18:33:51.323629 extend-filesystems[1442]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 18:33:51.328900 update_engine[1415]: I0625 18:33:51.328701 1415 main.cc:92] Flatcar Update Engine starting Jun 25 18:33:51.332139 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1345) Jun 25 18:33:51.332184 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jun 25 18:33:51.335894 update_engine[1415]: I0625 18:33:51.335858 1415 update_check_scheduler.cc:74] Next update check in 2m5s Jun 25 18:33:51.336077 systemd[1]: Started update-engine.service - Update Engine. Jun 25 18:33:51.344815 systemd-logind[1414]: Watching system buttons on /dev/input/event0 (Power Button) Jun 25 18:33:51.350851 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 18:33:51.353526 systemd-logind[1414]: New seat seat0. Jun 25 18:33:51.357136 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jun 25 18:33:51.359292 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 18:33:51.372526 extend-filesystems[1442]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 25 18:33:51.372526 extend-filesystems[1442]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 25 18:33:51.372526 extend-filesystems[1442]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jun 25 18:33:51.377901 extend-filesystems[1406]: Resized filesystem in /dev/vda9 Jun 25 18:33:51.373328 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 18:33:51.375242 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 18:33:51.400342 bash[1458]: Updated "/home/core/.ssh/authorized_keys" Jun 25 18:33:51.403545 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 18:33:51.405125 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 25 18:33:51.423266 locksmithd[1450]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 18:33:51.519643 containerd[1425]: time="2024-06-25T18:33:51.519505640Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Jun 25 18:33:51.546269 containerd[1425]: time="2024-06-25T18:33:51.546231760Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 18:33:51.546269 containerd[1425]: time="2024-06-25T18:33:51.546269720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:33:51.548399 containerd[1425]: time="2024-06-25T18:33:51.547555160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:33:51.548399 containerd[1425]: time="2024-06-25T18:33:51.547604960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jun 25 18:33:51.548399 containerd[1425]: time="2024-06-25T18:33:51.547787880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:33:51.548399 containerd[1425]: time="2024-06-25T18:33:51.547804680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 18:33:51.548399 containerd[1425]: time="2024-06-25T18:33:51.547875840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 18:33:51.548399 containerd[1425]: time="2024-06-25T18:33:51.547918840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:33:51.548399 containerd[1425]: time="2024-06-25T18:33:51.547929800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 18:33:51.548399 containerd[1425]: time="2024-06-25T18:33:51.547985000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:33:51.548399 containerd[1425]: time="2024-06-25T18:33:51.548154040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 18:33:51.548399 containerd[1425]: time="2024-06-25T18:33:51.548170600Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 18:33:51.548399 containerd[1425]: time="2024-06-25T18:33:51.548180040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:33:51.548646 containerd[1425]: time="2024-06-25T18:33:51.548261880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:33:51.548646 containerd[1425]: time="2024-06-25T18:33:51.548274840Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 18:33:51.548646 containerd[1425]: time="2024-06-25T18:33:51.548320360Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 18:33:51.548646 containerd[1425]: time="2024-06-25T18:33:51.548332520Z" level=info msg="metadata content store policy set" policy=shared Jun 25 18:33:51.551736 containerd[1425]: time="2024-06-25T18:33:51.551708960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 18:33:51.551886 containerd[1425]: time="2024-06-25T18:33:51.551859520Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 18:33:51.551963 containerd[1425]: time="2024-06-25T18:33:51.551948000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 18:33:51.552117 containerd[1425]: time="2024-06-25T18:33:51.552102640Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jun 25 18:33:51.552242 containerd[1425]: time="2024-06-25T18:33:51.552226480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 18:33:51.552402 containerd[1425]: time="2024-06-25T18:33:51.552386000Z" level=info msg="NRI interface is disabled by configuration." Jun 25 18:33:51.552467 containerd[1425]: time="2024-06-25T18:33:51.552454200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 18:33:51.552770 containerd[1425]: time="2024-06-25T18:33:51.552748560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 18:33:51.552860 containerd[1425]: time="2024-06-25T18:33:51.552844480Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 18:33:51.552974 containerd[1425]: time="2024-06-25T18:33:51.552958000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 18:33:51.553125 containerd[1425]: time="2024-06-25T18:33:51.553110680Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 18:33:51.553271 containerd[1425]: time="2024-06-25T18:33:51.553210280Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 18:33:51.553334 containerd[1425]: time="2024-06-25T18:33:51.553322040Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 18:33:51.553386 containerd[1425]: time="2024-06-25T18:33:51.553374080Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 18:33:51.553445 containerd[1425]: time="2024-06-25T18:33:51.553432200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 18:33:51.553767 containerd[1425]: time="2024-06-25T18:33:51.553530680Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 18:33:51.553767 containerd[1425]: time="2024-06-25T18:33:51.553560680Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 18:33:51.553767 containerd[1425]: time="2024-06-25T18:33:51.553573440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 18:33:51.553767 containerd[1425]: time="2024-06-25T18:33:51.553598120Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 18:33:51.553767 containerd[1425]: time="2024-06-25T18:33:51.553717720Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 18:33:51.554327 containerd[1425]: time="2024-06-25T18:33:51.554306280Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 18:33:51.554644 containerd[1425]: time="2024-06-25T18:33:51.554445720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 18:33:51.554644 containerd[1425]: time="2024-06-25T18:33:51.554467800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jun 25 18:33:51.554644 containerd[1425]: time="2024-06-25T18:33:51.554489200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 18:33:51.555015 containerd[1425]: time="2024-06-25T18:33:51.554992880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 18:33:51.555410 containerd[1425]: time="2024-06-25T18:33:51.555144680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 18:33:51.555410 containerd[1425]: time="2024-06-25T18:33:51.555171800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 18:33:51.555410 containerd[1425]: time="2024-06-25T18:33:51.555183640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 18:33:51.555410 containerd[1425]: time="2024-06-25T18:33:51.555196760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 18:33:51.555410 containerd[1425]: time="2024-06-25T18:33:51.555215680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 18:33:51.555410 containerd[1425]: time="2024-06-25T18:33:51.555227520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 18:33:51.555410 containerd[1425]: time="2024-06-25T18:33:51.555239120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 18:33:51.555410 containerd[1425]: time="2024-06-25T18:33:51.555251920Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 18:33:51.556825 containerd[1425]: time="2024-06-25T18:33:51.556788800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 18:33:51.556928 containerd[1425]: time="2024-06-25T18:33:51.556830120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 18:33:51.556928 containerd[1425]: time="2024-06-25T18:33:51.556847880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 18:33:51.556928 containerd[1425]: time="2024-06-25T18:33:51.556864560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 18:33:51.556928 containerd[1425]: time="2024-06-25T18:33:51.556880200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 18:33:51.556928 containerd[1425]: time="2024-06-25T18:33:51.556898880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 18:33:51.556928 containerd[1425]: time="2024-06-25T18:33:51.556915200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 18:33:51.557139 containerd[1425]: time="2024-06-25T18:33:51.556930760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 25 18:33:51.559454 containerd[1425]: time="2024-06-25T18:33:51.557393800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 18:33:51.559454 containerd[1425]: time="2024-06-25T18:33:51.557521720Z" level=info msg="Connect containerd service" Jun 25 18:33:51.559454 containerd[1425]: time="2024-06-25T18:33:51.557564760Z" level=info msg="using legacy CRI server" Jun 25 18:33:51.559454 containerd[1425]: time="2024-06-25T18:33:51.557573320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 18:33:51.559454 containerd[1425]: time="2024-06-25T18:33:51.557749680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 18:33:51.559454 containerd[1425]: time="2024-06-25T18:33:51.558655160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:33:51.559454 
containerd[1425]: time="2024-06-25T18:33:51.558712760Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 18:33:51.559454 containerd[1425]: time="2024-06-25T18:33:51.558734880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 25 18:33:51.559454 containerd[1425]: time="2024-06-25T18:33:51.558748720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 18:33:51.559454 containerd[1425]: time="2024-06-25T18:33:51.558780880Z" level=info msg="Start subscribing containerd event" Jun 25 18:33:51.559454 containerd[1425]: time="2024-06-25T18:33:51.558876080Z" level=info msg="Start recovering state" Jun 25 18:33:51.559454 containerd[1425]: time="2024-06-25T18:33:51.558938280Z" level=info msg="Start event monitor" Jun 25 18:33:51.559454 containerd[1425]: time="2024-06-25T18:33:51.558948320Z" level=info msg="Start snapshots syncer" Jun 25 18:33:51.559454 containerd[1425]: time="2024-06-25T18:33:51.558960040Z" level=info msg="Start cni network conf syncer for default" Jun 25 18:33:51.559454 containerd[1425]: time="2024-06-25T18:33:51.558968520Z" level=info msg="Start streaming server" Jun 25 18:33:51.559454 containerd[1425]: time="2024-06-25T18:33:51.559190640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 25 18:33:51.560501 containerd[1425]: time="2024-06-25T18:33:51.560467520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 18:33:51.560742 containerd[1425]: time="2024-06-25T18:33:51.560665440Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 18:33:51.560933 containerd[1425]: time="2024-06-25T18:33:51.560916920Z" level=info msg="containerd successfully booted in 0.042891s" Jun 25 18:33:51.560999 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 18:33:51.700048 tar[1423]: linux-arm64/LICENSE Jun 25 18:33:51.700048 tar[1423]: linux-arm64/README.md Jun 25 18:33:51.712374 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 18:33:52.230795 systemd-networkd[1363]: eth0: Gained IPv6LL Jun 25 18:33:52.236600 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 18:33:52.238461 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 18:33:52.251819 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jun 25 18:33:52.253913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:33:52.255770 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 18:33:52.271923 systemd[1]: coreos-metadata.service: Deactivated successfully. Jun 25 18:33:52.272118 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jun 25 18:33:52.274617 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 18:33:52.276450 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 18:33:52.490370 sshd_keygen[1426]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 18:33:52.509630 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jun 25 18:33:52.521153 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 18:33:52.525617 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 18:33:52.525781 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 18:33:52.528609 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 18:33:52.540978 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 18:33:52.546869 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 18:33:52.548754 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jun 25 18:33:52.549772 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 18:33:52.745492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:33:52.746809 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 18:33:52.747846 systemd[1]: Startup finished in 550ms (kernel) + 4.892s (initrd) + 3.169s (userspace) = 8.612s. Jun 25 18:33:52.749080 (kubelet)[1517]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:33:53.179847 kubelet[1517]: E0625 18:33:53.179813 1517 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:33:53.182670 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:33:53.182803 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:33:57.827409 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 18:33:57.828517 systemd[1]: Started sshd@0-10.0.0.91:22-10.0.0.1:49132.service - OpenSSH per-connection server daemon (10.0.0.1:49132). Jun 25 18:33:57.885057 sshd[1531]: Accepted publickey for core from 10.0.0.1 port 49132 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:33:57.887008 sshd[1531]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:57.895529 systemd-logind[1414]: New session 1 of user core. Jun 25 18:33:57.896612 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 18:33:57.915843 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 18:33:57.924642 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 18:33:57.927145 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 18:33:57.933073 (systemd)[1535]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:58.007018 systemd[1535]: Queued start job for default target default.target. Jun 25 18:33:58.016470 systemd[1535]: Created slice app.slice - User Application Slice. Jun 25 18:33:58.016499 systemd[1535]: Reached target paths.target - Paths. Jun 25 18:33:58.016511 systemd[1535]: Reached target timers.target - Timers. Jun 25 18:33:58.017746 systemd[1535]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 25 18:33:58.027190 systemd[1535]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 25 18:33:58.027256 systemd[1535]: Reached target sockets.target - Sockets. Jun 25 18:33:58.027268 systemd[1535]: Reached target basic.target - Basic System. 
Jun 25 18:33:58.027300 systemd[1535]: Reached target default.target - Main User Target. Jun 25 18:33:58.027323 systemd[1535]: Startup finished in 89ms. Jun 25 18:33:58.027649 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 18:33:58.028960 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 18:33:58.091087 systemd[1]: Started sshd@1-10.0.0.91:22-10.0.0.1:49148.service - OpenSSH per-connection server daemon (10.0.0.1:49148). Jun 25 18:33:58.137656 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 49148 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:33:58.138820 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:58.143090 systemd-logind[1414]: New session 2 of user core. Jun 25 18:33:58.152726 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 18:33:58.203800 sshd[1546]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:58.211937 systemd[1]: sshd@1-10.0.0.91:22-10.0.0.1:49148.service: Deactivated successfully. Jun 25 18:33:58.213328 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 18:33:58.215626 systemd-logind[1414]: Session 2 logged out. Waiting for processes to exit. Jun 25 18:33:58.216801 systemd[1]: Started sshd@2-10.0.0.91:22-10.0.0.1:49150.service - OpenSSH per-connection server daemon (10.0.0.1:49150). Jun 25 18:33:58.217545 systemd-logind[1414]: Removed session 2. Jun 25 18:33:58.254900 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 49150 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:33:58.256133 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:58.259639 systemd-logind[1414]: New session 3 of user core. Jun 25 18:33:58.265716 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 18:33:58.312220 sshd[1553]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:58.321946 systemd[1]: sshd@2-10.0.0.91:22-10.0.0.1:49150.service: Deactivated successfully. Jun 25 18:33:58.323273 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 18:33:58.325300 systemd-logind[1414]: Session 3 logged out. Waiting for processes to exit. Jun 25 18:33:58.326042 systemd[1]: Started sshd@3-10.0.0.91:22-10.0.0.1:49166.service - OpenSSH per-connection server daemon (10.0.0.1:49166). Jun 25 18:33:58.326889 systemd-logind[1414]: Removed session 3. Jun 25 18:33:58.363524 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 49166 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:33:58.364690 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:58.368931 systemd-logind[1414]: New session 4 of user core. Jun 25 18:33:58.378712 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 18:33:58.428834 sshd[1560]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:58.438893 systemd[1]: sshd@3-10.0.0.91:22-10.0.0.1:49166.service: Deactivated successfully. Jun 25 18:33:58.440280 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 18:33:58.442641 systemd-logind[1414]: Session 4 logged out. Waiting for processes to exit. Jun 25 18:33:58.443717 systemd[1]: Started sshd@4-10.0.0.91:22-10.0.0.1:49180.service - OpenSSH per-connection server daemon (10.0.0.1:49180). Jun 25 18:33:58.444331 systemd-logind[1414]: Removed session 4. 
Jun 25 18:33:58.481379 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 49180 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:33:58.482497 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:58.486051 systemd-logind[1414]: New session 5 of user core. Jun 25 18:33:58.502735 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 18:33:58.569178 sudo[1570]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 18:33:58.569767 sudo[1570]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:33:58.585133 sudo[1570]: pam_unix(sudo:session): session closed for user root Jun 25 18:33:58.586987 sshd[1567]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:58.594949 systemd[1]: sshd@4-10.0.0.91:22-10.0.0.1:49180.service: Deactivated successfully. Jun 25 18:33:58.597857 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 18:33:58.599058 systemd-logind[1414]: Session 5 logged out. Waiting for processes to exit. Jun 25 18:33:58.610851 systemd[1]: Started sshd@5-10.0.0.91:22-10.0.0.1:49184.service - OpenSSH per-connection server daemon (10.0.0.1:49184). Jun 25 18:33:58.611616 systemd-logind[1414]: Removed session 5. Jun 25 18:33:58.645990 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 49184 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:33:58.647184 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:58.650554 systemd-logind[1414]: New session 6 of user core. Jun 25 18:33:58.656700 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 18:33:58.706020 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 18:33:58.706266 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:33:58.709263 sudo[1579]: pam_unix(sudo:session): session closed for user root Jun 25 18:33:58.713814 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 18:33:58.714052 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:33:58.728876 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 18:33:58.730085 auditctl[1582]: No rules Jun 25 18:33:58.730387 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 18:33:58.730546 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 18:33:58.733076 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:33:58.754873 augenrules[1600]: No rules Jun 25 18:33:58.756012 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:33:58.757093 sudo[1578]: pam_unix(sudo:session): session closed for user root Jun 25 18:33:58.758729 sshd[1575]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:58.769096 systemd[1]: sshd@5-10.0.0.91:22-10.0.0.1:49184.service: Deactivated successfully. Jun 25 18:33:58.770745 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 18:33:58.772010 systemd-logind[1414]: Session 6 logged out. Waiting for processes to exit. Jun 25 18:33:58.773692 systemd[1]: Started sshd@6-10.0.0.91:22-10.0.0.1:49190.service - OpenSSH per-connection server daemon (10.0.0.1:49190). Jun 25 18:33:58.774524 systemd-logind[1414]: Removed session 6. 
Jun 25 18:33:58.811488 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 49190 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:33:58.812594 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:33:58.816273 systemd-logind[1414]: New session 7 of user core. Jun 25 18:33:58.825714 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 18:33:58.874939 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 18:33:58.875162 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:33:58.986832 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 18:33:58.987000 (dockerd)[1621]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 25 18:33:59.216448 dockerd[1621]: time="2024-06-25T18:33:59.216096966Z" level=info msg="Starting up" Jun 25 18:33:59.291478 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4201074763-merged.mount: Deactivated successfully. Jun 25 18:33:59.307411 dockerd[1621]: time="2024-06-25T18:33:59.307360546Z" level=info msg="Loading containers: start." Jun 25 18:33:59.394271 kernel: Initializing XFRM netlink socket Jun 25 18:33:59.457450 systemd-networkd[1363]: docker0: Link UP Jun 25 18:33:59.474953 dockerd[1621]: time="2024-06-25T18:33:59.474897503Z" level=info msg="Loading containers: done." Jun 25 18:33:59.529483 dockerd[1621]: time="2024-06-25T18:33:59.529419244Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 18:33:59.529663 dockerd[1621]: time="2024-06-25T18:33:59.529645358Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 18:33:59.529793 dockerd[1621]: time="2024-06-25T18:33:59.529764562Z" level=info msg="Daemon has completed initialization" Jun 25 18:33:59.553011 dockerd[1621]: time="2024-06-25T18:33:59.552892034Z" level=info msg="API listen on /run/docker.sock" Jun 25 18:33:59.553099 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 18:34:00.040344 containerd[1425]: time="2024-06-25T18:34:00.040296930Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jun 25 18:34:00.289385 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1249759286-merged.mount: Deactivated successfully. Jun 25 18:34:00.672683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1801575811.mount: Deactivated successfully. 
Jun 25 18:34:01.636261 containerd[1425]: time="2024-06-25T18:34:01.636207109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:01.637409 containerd[1425]: time="2024-06-25T18:34:01.637373782Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=29940432" Jun 25 18:34:01.638424 containerd[1425]: time="2024-06-25T18:34:01.638393820Z" level=info msg="ImageCreate event name:\"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:01.643545 containerd[1425]: time="2024-06-25T18:34:01.641634300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:01.643545 containerd[1425]: time="2024-06-25T18:34:01.643025868Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"29937230\" in 1.602687036s" Jun 25 18:34:01.643545 containerd[1425]: time="2024-06-25T18:34:01.643056872Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\"" Jun 25 18:34:01.663837 containerd[1425]: time="2024-06-25T18:34:01.663809977Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jun 25 18:34:03.419219 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 18:34:03.428819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:34:03.517288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:34:03.520680 (kubelet)[1832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:34:03.560493 kubelet[1832]: E0625 18:34:03.560451 1832 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:34:03.564296 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:34:03.564445 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jun 25 18:34:04.547461 containerd[1425]: time="2024-06-25T18:34:04.547387175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:04.547945 containerd[1425]: time="2024-06-25T18:34:04.547913062Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=26881373" Jun 25 18:34:04.548862 containerd[1425]: time="2024-06-25T18:34:04.548837254Z" level=info msg="ImageCreate event name:\"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:04.552315 containerd[1425]: time="2024-06-25T18:34:04.552278778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:04.553418 containerd[1425]: time="2024-06-25T18:34:04.553197810Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"28368865\" in 2.889246742s" Jun 25 18:34:04.553418 containerd[1425]: time="2024-06-25T18:34:04.553230320Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\"" Jun 25 18:34:04.573023 containerd[1425]: time="2024-06-25T18:34:04.572988303Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jun 25 18:34:05.464364 containerd[1425]: time="2024-06-25T18:34:05.464307067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:05.465524 containerd[1425]: time="2024-06-25T18:34:05.465488188Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=16155690" Jun 25 18:34:05.466399 containerd[1425]: time="2024-06-25T18:34:05.466378380Z" level=info msg="ImageCreate event name:\"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:05.469065 containerd[1425]: time="2024-06-25T18:34:05.469030042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:05.470333 containerd[1425]: time="2024-06-25T18:34:05.470221693Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"17643200\" in 897.191936ms" Jun 25 18:34:05.470333 containerd[1425]: time="2024-06-25T18:34:05.470258208Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\"" Jun 25 18:34:05.488446 
containerd[1425]: time="2024-06-25T18:34:05.488416187Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jun 25 18:34:06.564520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4246972854.mount: Deactivated successfully. Jun 25 18:34:06.756507 containerd[1425]: time="2024-06-25T18:34:06.756444501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:06.757697 containerd[1425]: time="2024-06-25T18:34:06.757665981Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=25634094" Jun 25 18:34:06.758537 containerd[1425]: time="2024-06-25T18:34:06.758515362Z" level=info msg="ImageCreate event name:\"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:06.760540 containerd[1425]: time="2024-06-25T18:34:06.760505497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:06.761385 containerd[1425]: time="2024-06-25T18:34:06.761338852Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"25633111\" in 1.272885068s" Jun 25 18:34:06.761385 containerd[1425]: time="2024-06-25T18:34:06.761376631Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\"" Jun 25 18:34:06.779059 containerd[1425]: time="2024-06-25T18:34:06.779020331Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jun 25 18:34:07.294889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3417806816.mount: Deactivated successfully. 
Jun 25 18:34:07.880762 containerd[1425]: time="2024-06-25T18:34:07.880709007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:07.882299 containerd[1425]: time="2024-06-25T18:34:07.882016705Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jun 25 18:34:07.889629 containerd[1425]: time="2024-06-25T18:34:07.889559885Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:07.904845 containerd[1425]: time="2024-06-25T18:34:07.904800137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:07.905823 containerd[1425]: time="2024-06-25T18:34:07.905787358Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.126731705s" Jun 25 18:34:07.905882 containerd[1425]: time="2024-06-25T18:34:07.905825403Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jun 25 18:34:07.925275 containerd[1425]: time="2024-06-25T18:34:07.925236240Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 18:34:08.435712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1160999964.mount: Deactivated successfully. 
Jun 25 18:34:08.440595 containerd[1425]: time="2024-06-25T18:34:08.440530785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:08.441091 containerd[1425]: time="2024-06-25T18:34:08.441062602Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jun 25 18:34:08.441913 containerd[1425]: time="2024-06-25T18:34:08.441886111Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:08.444134 containerd[1425]: time="2024-06-25T18:34:08.444102100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:08.445030 containerd[1425]: time="2024-06-25T18:34:08.445004416Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 519.729648ms" Jun 25 18:34:08.445071 containerd[1425]: time="2024-06-25T18:34:08.445036313Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jun 25 18:34:08.463752 containerd[1425]: time="2024-06-25T18:34:08.463717512Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jun 25 18:34:08.981237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount291254189.mount: Deactivated successfully. Jun 25 18:34:10.874820 containerd[1425]: time="2024-06-25T18:34:10.874772151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:10.875998 containerd[1425]: time="2024-06-25T18:34:10.875734014Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Jun 25 18:34:10.876673 containerd[1425]: time="2024-06-25T18:34:10.876644214Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:10.879786 containerd[1425]: time="2024-06-25T18:34:10.879751922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:10.879786 containerd[1425]: time="2024-06-25T18:34:10.881468120Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.417714561s" Jun 25 18:34:10.879786 containerd[1425]: time="2024-06-25T18:34:10.881499253Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jun 25 18:34:13.670104 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jun 25 18:34:13.682786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:34:13.783035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:34:13.788111 (kubelet)[2062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:34:13.826706 kubelet[2062]: E0625 18:34:13.826632 2062 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:34:13.829244 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:34:13.829392 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:34:15.462880 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:34:15.473889 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:34:15.488250 systemd[1]: Reloading requested from client PID 2077 ('systemctl') (unit session-7.scope)... Jun 25 18:34:15.488267 systemd[1]: Reloading... Jun 25 18:34:15.551617 zram_generator::config[2114]: No configuration found. Jun 25 18:34:15.646282 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:34:15.701575 systemd[1]: Reloading finished in 213 ms. Jun 25 18:34:15.737917 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 18:34:15.737989 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 18:34:15.738227 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:34:15.741742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:34:15.836883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:34:15.841548 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:34:15.885370 kubelet[2160]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:34:15.885370 kubelet[2160]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:34:15.885370 kubelet[2160]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 25 18:34:15.885719 kubelet[2160]: I0625 18:34:15.885523 2160 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:34:17.143110 kubelet[2160]: I0625 18:34:17.143066 2160 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 18:34:17.143110 kubelet[2160]: I0625 18:34:17.143097 2160 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:34:17.143522 kubelet[2160]: I0625 18:34:17.143302 2160 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 18:34:17.176940 kubelet[2160]: E0625 18:34:17.176903 2160 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.91:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.91:6443: connect: connection refused Jun 25 18:34:17.177077 kubelet[2160]: I0625 18:34:17.177061 2160 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:34:17.185969 kubelet[2160]: I0625 18:34:17.185932 2160 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 18:34:17.187080 kubelet[2160]: I0625 18:34:17.187026 2160 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:34:17.187252 kubelet[2160]: I0625 18:34:17.187073 2160 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:34:17.187333 kubelet[2160]: I0625 18:34:17.187322 2160 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:34:17.187333 kubelet[2160]: I0625 18:34:17.187332 2160 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:34:17.187618 kubelet[2160]: I0625 18:34:17.187595 2160 state_mem.go:36] "Initialized new in-memory state store" Jun 25 
18:34:17.188459 kubelet[2160]: I0625 18:34:17.188437 2160 kubelet.go:400] "Attempting to sync node with API server" Jun 25 18:34:17.188459 kubelet[2160]: I0625 18:34:17.188459 2160 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:34:17.188742 kubelet[2160]: I0625 18:34:17.188721 2160 kubelet.go:312] "Adding apiserver pod source" Jun 25 18:34:17.188900 kubelet[2160]: I0625 18:34:17.188887 2160 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:34:17.190341 kubelet[2160]: I0625 18:34:17.189967 2160 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:34:17.190341 kubelet[2160]: W0625 18:34:17.190234 2160 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jun 25 18:34:17.190341 kubelet[2160]: E0625 18:34:17.190312 2160 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jun 25 18:34:17.190341 kubelet[2160]: I0625 18:34:17.190337 2160 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 18:34:17.190488 kubelet[2160]: W0625 18:34:17.190316 2160 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jun 25 18:34:17.190488 kubelet[2160]: W0625 18:34:17.190440 2160 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jun 25 18:34:17.190488 kubelet[2160]: E0625 18:34:17.190452 2160 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jun 25 18:34:17.191383 kubelet[2160]: I0625 18:34:17.191365 2160 server.go:1264] "Started kubelet" Jun 25 18:34:17.192364 kubelet[2160]: I0625 18:34:17.191612 2160 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:34:17.193878 kubelet[2160]: I0625 18:34:17.193813 2160 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 18:34:17.194772 kubelet[2160]: I0625 18:34:17.194743 2160 server.go:455] "Adding debug handlers to kubelet server" Jun 25 18:34:17.195389 kubelet[2160]: I0625 18:34:17.195211 2160 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:34:17.195389 kubelet[2160]: I0625 18:34:17.195262 2160 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:34:17.195529 kubelet[2160]: E0625 18:34:17.195364 2160 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.91:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.91:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17dc53094ed8cfac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-06-25 18:34:17.191329708 +0000 UTC m=+1.346813483,LastTimestamp:2024-06-25 18:34:17.191329708 +0000 UTC m=+1.346813483,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jun 25 18:34:17.196790 kubelet[2160]: E0625 18:34:17.196765 2160 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:34:17.196874 kubelet[2160]: I0625 18:34:17.196864 2160 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:34:17.196972 kubelet[2160]: I0625 18:34:17.196954 2160 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 18:34:17.198271 kubelet[2160]: I0625 18:34:17.198241 2160 reconciler.go:26] "Reconciler: start to sync state" Jun 25 18:34:17.198478 kubelet[2160]: E0625 18:34:17.198433 2160 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="200ms" Jun 25 18:34:17.198821 kubelet[2160]: W0625 18:34:17.198575 2160 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jun 25 18:34:17.198821 kubelet[2160]: E0625 18:34:17.198660 2160 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jun 25 18:34:17.199361 
kubelet[2160]: I0625 18:34:17.199336 2160 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 18:34:17.199563 kubelet[2160]: E0625 18:34:17.199528 2160 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:34:17.200479 kubelet[2160]: I0625 18:34:17.200457 2160 factory.go:221] Registration of the containerd container factory successfully Jun 25 18:34:17.200479 kubelet[2160]: I0625 18:34:17.200476 2160 factory.go:221] Registration of the systemd container factory successfully Jun 25 18:34:17.212147 kubelet[2160]: I0625 18:34:17.212089 2160 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:34:17.213512 kubelet[2160]: I0625 18:34:17.213405 2160 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 18:34:17.213594 kubelet[2160]: I0625 18:34:17.213572 2160 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:34:17.213618 kubelet[2160]: I0625 18:34:17.213602 2160 kubelet.go:2337] "Starting kubelet main sync loop" Jun 25 18:34:17.214320 kubelet[2160]: E0625 18:34:17.213642 2160 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:34:17.214409 kubelet[2160]: W0625 18:34:17.214372 2160 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jun 25 18:34:17.214494 kubelet[2160]: E0625 18:34:17.214420 2160 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jun 25 18:34:17.215936 kubelet[2160]: I0625 18:34:17.215834 2160 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:34:17.215936 kubelet[2160]: I0625 18:34:17.215849 2160 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:34:17.215936 kubelet[2160]: I0625 18:34:17.215864 2160 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:34:17.219402 kubelet[2160]: I0625 18:34:17.219370 2160 policy_none.go:49] "None policy: Start" Jun 25 18:34:17.219998 kubelet[2160]: I0625 18:34:17.219956 2160 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 18:34:17.220088 kubelet[2160]: I0625 18:34:17.220039 2160 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:34:17.225072 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 18:34:17.237556 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 18:34:17.240828 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jun 25 18:34:17.254392 kubelet[2160]: I0625 18:34:17.254349 2160 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:34:17.254643 kubelet[2160]: I0625 18:34:17.254575 2160 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 25 18:34:17.255184 kubelet[2160]: I0625 18:34:17.254726 2160 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:34:17.257090 kubelet[2160]: E0625 18:34:17.257061 2160 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 25 18:34:17.298672 kubelet[2160]: I0625 18:34:17.298631 2160 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:34:17.299239 kubelet[2160]: E0625 18:34:17.299208 2160 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Jun 25 18:34:17.314429 kubelet[2160]: I0625 18:34:17.314379 2160 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 18:34:17.316111 kubelet[2160]: I0625 18:34:17.315496 2160 topology_manager.go:215] "Topology Admit Handler" podUID="6ea61e1c47c170bc245b808611d96dbe" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 18:34:17.316598 kubelet[2160]: I0625 18:34:17.316562 2160 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 18:34:17.321828 systemd[1]: Created slice kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice - libcontainer container kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice. Jun 25 18:34:17.344956 systemd[1]: Created slice kubepods-burstable-pod6ea61e1c47c170bc245b808611d96dbe.slice - libcontainer container kubepods-burstable-pod6ea61e1c47c170bc245b808611d96dbe.slice. Jun 25 18:34:17.360276 systemd[1]: Created slice kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice - libcontainer container kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice. 
Jun 25 18:34:17.399464 kubelet[2160]: I0625 18:34:17.399351 2160 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:34:17.399464 kubelet[2160]: I0625 18:34:17.399395 2160 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:34:17.399464 kubelet[2160]: I0625 18:34:17.399416 2160 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:34:17.399464 kubelet[2160]: I0625 18:34:17.399443 2160 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ea61e1c47c170bc245b808611d96dbe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6ea61e1c47c170bc245b808611d96dbe\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:34:17.399829 kubelet[2160]: I0625 18:34:17.399496 2160 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:34:17.399829 kubelet[2160]: I0625 18:34:17.399528 2160 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:34:17.399829 kubelet[2160]: I0625 18:34:17.399555 2160 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost" Jun 25 18:34:17.399829 kubelet[2160]: I0625 18:34:17.399596 2160 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ea61e1c47c170bc245b808611d96dbe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ea61e1c47c170bc245b808611d96dbe\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:34:17.399829 kubelet[2160]: I0625 18:34:17.399623 2160 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ea61e1c47c170bc245b808611d96dbe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ea61e1c47c170bc245b808611d96dbe\") " 
pod="kube-system/kube-apiserver-localhost" Jun 25 18:34:17.399937 kubelet[2160]: E0625 18:34:17.399638 2160 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="400ms" Jun 25 18:34:17.501128 kubelet[2160]: I0625 18:34:17.501101 2160 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:34:17.501460 kubelet[2160]: E0625 18:34:17.501436 2160 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Jun 25 18:34:17.643242 kubelet[2160]: E0625 18:34:17.643203 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:17.643888 containerd[1425]: time="2024-06-25T18:34:17.643839530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,}" Jun 25 18:34:17.648138 kubelet[2160]: E0625 18:34:17.648116 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:17.648530 containerd[1425]: time="2024-06-25T18:34:17.648498192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6ea61e1c47c170bc245b808611d96dbe,Namespace:kube-system,Attempt:0,}" Jun 25 18:34:17.663196 kubelet[2160]: E0625 18:34:17.662926 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:17.663453 containerd[1425]: time="2024-06-25T18:34:17.663338419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,}" Jun 25 18:34:17.800980 kubelet[2160]: E0625 18:34:17.800930 2160 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="800ms" Jun 25 18:34:17.903267 kubelet[2160]: I0625 18:34:17.903222 2160 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:34:17.903557 kubelet[2160]: E0625 18:34:17.903535 2160 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Jun 25 18:34:18.196095 kubelet[2160]: W0625 18:34:18.196015 2160 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jun 25 18:34:18.196095 kubelet[2160]: E0625 18:34:18.196089 2160 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jun 25 18:34:18.458413 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1227735023.mount: Deactivated successfully. Jun 25 18:34:18.476777 kubelet[2160]: W0625 18:34:18.476743 2160 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jun 25 18:34:18.476777 kubelet[2160]: E0625 18:34:18.476784 2160 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jun 25 18:34:18.531481 containerd[1425]: time="2024-06-25T18:34:18.531425783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:34:18.534263 containerd[1425]: time="2024-06-25T18:34:18.534149742Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:34:18.535078 containerd[1425]: time="2024-06-25T18:34:18.535044890Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:34:18.535680 containerd[1425]: time="2024-06-25T18:34:18.535651657Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:34:18.539560 kubelet[2160]: W0625 18:34:18.539488 2160 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jun 25 18:34:18.539560 kubelet[2160]: E0625 18:34:18.539553 2160 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jun 25 18:34:18.564087 kubelet[2160]: W0625 18:34:18.564020 2160 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jun 25 18:34:18.564087 kubelet[2160]: E0625 18:34:18.564079 2160 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jun 25 18:34:18.566555 containerd[1425]: time="2024-06-25T18:34:18.566469882Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jun 25 18:34:18.584867 containerd[1425]: time="2024-06-25T18:34:18.584819799Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:34:18.601842 kubelet[2160]: E0625 18:34:18.601783 2160 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="1.6s" Jun 25 18:34:18.609912 containerd[1425]: time="2024-06-25T18:34:18.609862531Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:34:18.633691 containerd[1425]: time="2024-06-25T18:34:18.633628963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:34:18.634558 containerd[1425]: time="2024-06-25T18:34:18.634458668Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 985.810914ms" Jun 25 18:34:18.648380 containerd[1425]: time="2024-06-25T18:34:18.648338637Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.004389813s" Jun 25 18:34:18.649060 containerd[1425]: time="2024-06-25T18:34:18.649034340Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 985.609636ms" Jun 25 18:34:18.705446 kubelet[2160]: I0625 18:34:18.705416 2160 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:34:18.706023 kubelet[2160]: E0625 18:34:18.705994 2160 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Jun 25 18:34:18.846260 containerd[1425]: time="2024-06-25T18:34:18.846049944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:34:18.846260 containerd[1425]: time="2024-06-25T18:34:18.846150226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:18.846260 containerd[1425]: time="2024-06-25T18:34:18.846164968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:34:18.846260 containerd[1425]: time="2024-06-25T18:34:18.846177474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:18.848788 containerd[1425]: time="2024-06-25T18:34:18.848688483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:34:18.848788 containerd[1425]: time="2024-06-25T18:34:18.848770507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:18.848907 containerd[1425]: time="2024-06-25T18:34:18.848790164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:34:18.848907 containerd[1425]: time="2024-06-25T18:34:18.848807104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:18.849170 containerd[1425]: time="2024-06-25T18:34:18.849103475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:34:18.849259 containerd[1425]: time="2024-06-25T18:34:18.849155534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:18.850706 containerd[1425]: time="2024-06-25T18:34:18.850477501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:34:18.850706 containerd[1425]: time="2024-06-25T18:34:18.850521689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:18.870786 systemd[1]: Started cri-containerd-a4985ed332ceb212c17302676289529ee8be16307d3aeadc91735056e9844775.scope - libcontainer container a4985ed332ceb212c17302676289529ee8be16307d3aeadc91735056e9844775. Jun 25 18:34:18.872297 systemd[1]: Started cri-containerd-d0d6d24ca18a1639639087efdfce0bfea666c4581dc1de52182def486a66f45f.scope - libcontainer container d0d6d24ca18a1639639087efdfce0bfea666c4581dc1de52182def486a66f45f. Jun 25 18:34:18.875846 systemd[1]: Started cri-containerd-3321dc24a1b357d79e57e8e7c436d064b3e4f240c672ace142a18b4561105271.scope - libcontainer container 3321dc24a1b357d79e57e8e7c436d064b3e4f240c672ace142a18b4561105271. 
Jun 25 18:34:18.904278 containerd[1425]: time="2024-06-25T18:34:18.903848064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6ea61e1c47c170bc245b808611d96dbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4985ed332ceb212c17302676289529ee8be16307d3aeadc91735056e9844775\"" Jun 25 18:34:18.906233 kubelet[2160]: E0625 18:34:18.905694 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:18.910221 containerd[1425]: time="2024-06-25T18:34:18.909381841Z" level=info msg="CreateContainer within sandbox \"a4985ed332ceb212c17302676289529ee8be16307d3aeadc91735056e9844775\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 18:34:18.916334 containerd[1425]: time="2024-06-25T18:34:18.916286128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0d6d24ca18a1639639087efdfce0bfea666c4581dc1de52182def486a66f45f\"" Jun 25 18:34:18.916891 containerd[1425]: time="2024-06-25T18:34:18.916860733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3321dc24a1b357d79e57e8e7c436d064b3e4f240c672ace142a18b4561105271\"" Jun 25 18:34:18.917475 kubelet[2160]: E0625 18:34:18.917450 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:18.918273 kubelet[2160]: E0625 18:34:18.918256 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:18.920309 containerd[1425]: time="2024-06-25T18:34:18.920264413Z" level=info msg="CreateContainer within sandbox \"3321dc24a1b357d79e57e8e7c436d064b3e4f240c672ace142a18b4561105271\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 18:34:18.921793 containerd[1425]: time="2024-06-25T18:34:18.921620859Z" level=info msg="CreateContainer within sandbox \"d0d6d24ca18a1639639087efdfce0bfea666c4581dc1de52182def486a66f45f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 18:34:18.923392 containerd[1425]: time="2024-06-25T18:34:18.923260852Z" level=info msg="CreateContainer within sandbox \"a4985ed332ceb212c17302676289529ee8be16307d3aeadc91735056e9844775\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2c66cad5bb61c41468f4674edd69a67f02a42f2f17356daeeebc895a0a86992a\"" Jun 25 18:34:18.923881 containerd[1425]: time="2024-06-25T18:34:18.923838733Z" level=info msg="StartContainer for \"2c66cad5bb61c41468f4674edd69a67f02a42f2f17356daeeebc895a0a86992a\"" Jun 25 18:34:18.934170 containerd[1425]: time="2024-06-25T18:34:18.934127163Z" level=info msg="CreateContainer within sandbox \"d0d6d24ca18a1639639087efdfce0bfea666c4581dc1de52182def486a66f45f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1ecd6c66f756a3b6ff4d06837cabfbf7aaac88db9c591d9ec3bd27912c6652ed\"" Jun 25 18:34:18.937597 containerd[1425]: time="2024-06-25T18:34:18.934697093Z" level=info msg="StartContainer for \"1ecd6c66f756a3b6ff4d06837cabfbf7aaac88db9c591d9ec3bd27912c6652ed\"" Jun 25 18:34:18.940330 
containerd[1425]: time="2024-06-25T18:34:18.940287364Z" level=info msg="CreateContainer within sandbox \"3321dc24a1b357d79e57e8e7c436d064b3e4f240c672ace142a18b4561105271\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"de6e18351fed18a02b3bf1883b34fc54a2cefebdee540a9d82d8a6247730daee\"" Jun 25 18:34:18.940784 containerd[1425]: time="2024-06-25T18:34:18.940756572Z" level=info msg="StartContainer for \"de6e18351fed18a02b3bf1883b34fc54a2cefebdee540a9d82d8a6247730daee\"" Jun 25 18:34:18.952760 systemd[1]: Started cri-containerd-2c66cad5bb61c41468f4674edd69a67f02a42f2f17356daeeebc895a0a86992a.scope - libcontainer container 2c66cad5bb61c41468f4674edd69a67f02a42f2f17356daeeebc895a0a86992a. Jun 25 18:34:18.963848 systemd[1]: Started cri-containerd-1ecd6c66f756a3b6ff4d06837cabfbf7aaac88db9c591d9ec3bd27912c6652ed.scope - libcontainer container 1ecd6c66f756a3b6ff4d06837cabfbf7aaac88db9c591d9ec3bd27912c6652ed. Jun 25 18:34:18.967530 systemd[1]: Started cri-containerd-de6e18351fed18a02b3bf1883b34fc54a2cefebdee540a9d82d8a6247730daee.scope - libcontainer container de6e18351fed18a02b3bf1883b34fc54a2cefebdee540a9d82d8a6247730daee. Jun 25 18:34:19.006373 containerd[1425]: time="2024-06-25T18:34:19.003620547Z" level=info msg="StartContainer for \"2c66cad5bb61c41468f4674edd69a67f02a42f2f17356daeeebc895a0a86992a\" returns successfully" Jun 25 18:34:19.006373 containerd[1425]: time="2024-06-25T18:34:19.003626701Z" level=info msg="StartContainer for \"1ecd6c66f756a3b6ff4d06837cabfbf7aaac88db9c591d9ec3bd27912c6652ed\" returns successfully" Jun 25 18:34:19.026624 containerd[1425]: time="2024-06-25T18:34:19.026124931Z" level=info msg="StartContainer for \"de6e18351fed18a02b3bf1883b34fc54a2cefebdee540a9d82d8a6247730daee\" returns successfully" Jun 25 18:34:19.223965 kubelet[2160]: E0625 18:34:19.223929 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:19.225466 kubelet[2160]: E0625 18:34:19.225440 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:19.225976 kubelet[2160]: E0625 18:34:19.225958 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:20.234377 kubelet[2160]: E0625 18:34:20.234341 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:20.308097 kubelet[2160]: I0625 18:34:20.308067 2160 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:34:20.743096 kubelet[2160]: E0625 18:34:20.743066 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:20.841808 kubelet[2160]: E0625 18:34:20.841777 2160 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 25 18:34:20.995586 kubelet[2160]: I0625 18:34:20.995461 2160 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jun 25 18:34:21.191274 kubelet[2160]: I0625 18:34:21.191213 2160 apiserver.go:52] "Watching 
apiserver" Jun 25 18:34:21.197694 kubelet[2160]: I0625 18:34:21.197646 2160 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jun 25 18:34:22.939086 systemd[1]: Reloading requested from client PID 2442 ('systemctl') (unit session-7.scope)... Jun 25 18:34:22.939103 systemd[1]: Reloading... Jun 25 18:34:23.010710 zram_generator::config[2479]: No configuration found. Jun 25 18:34:23.155500 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:34:23.222230 systemd[1]: Reloading finished in 282 ms. Jun 25 18:34:23.230468 kubelet[2160]: E0625 18:34:23.230300 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:23.256718 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:34:23.256871 kubelet[2160]: I0625 18:34:23.256615 2160 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:34:23.270501 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:34:23.270759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:34:23.270883 systemd[1]: kubelet.service: Consumed 1.727s CPU time, 114.1M memory peak, 0B memory swap peak. Jun 25 18:34:23.280976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:34:23.370423 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:34:23.374900 (kubelet)[2521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:34:23.444137 kubelet[2521]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:34:23.444137 kubelet[2521]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:34:23.444137 kubelet[2521]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:34:23.444137 kubelet[2521]: I0625 18:34:23.442947 2521 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:34:23.449162 kubelet[2521]: I0625 18:34:23.449114 2521 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 18:34:23.449162 kubelet[2521]: I0625 18:34:23.449144 2521 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:34:23.449337 kubelet[2521]: I0625 18:34:23.449321 2521 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 18:34:23.450661 kubelet[2521]: I0625 18:34:23.450635 2521 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jun 25 18:34:23.451778 kubelet[2521]: I0625 18:34:23.451754 2521 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:34:23.457290 kubelet[2521]: I0625 18:34:23.457270 2521 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 18:34:23.457520 kubelet[2521]: I0625 18:34:23.457482 2521 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:34:23.457733 kubelet[2521]: I0625 18:34:23.457541 2521 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:34:23.457733 kubelet[2521]: I0625 18:34:23.457734 2521 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:34:23.457842 kubelet[2521]: I0625 18:34:23.457744 2521 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:34:23.457842 kubelet[2521]: I0625 18:34:23.457782 2521 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:34:23.457895 kubelet[2521]: I0625 18:34:23.457876 2521 kubelet.go:400] "Attempting to sync node with API server" Jun 25 18:34:23.457895 kubelet[2521]: I0625 18:34:23.457886 2521 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:34:23.457933 kubelet[2521]: I0625 18:34:23.457910 2521 kubelet.go:312] "Adding apiserver pod source" Jun 25 18:34:23.457954 kubelet[2521]: I0625 18:34:23.457939 2521 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:34:23.460601 kubelet[2521]: I0625 18:34:23.459974 2521 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:34:23.460601 kubelet[2521]: I0625 18:34:23.460183 2521 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 18:34:23.460601 kubelet[2521]: I0625 18:34:23.460548 2521 server.go:1264] "Started kubelet" Jun 25 18:34:23.461390 kubelet[2521]: I0625 
18:34:23.461236 2521 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 18:34:23.461558 kubelet[2521]: I0625 18:34:23.461526 2521 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:34:23.461944 kubelet[2521]: I0625 18:34:23.461899 2521 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:34:23.462022 kubelet[2521]: I0625 18:34:23.461983 2521 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:34:23.464823 kubelet[2521]: I0625 18:34:23.464797 2521 server.go:455] "Adding debug handlers to kubelet server" Jun 25 18:34:23.469026 kubelet[2521]: E0625 18:34:23.468691 2521 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:34:23.469109 kubelet[2521]: I0625 18:34:23.468793 2521 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:34:23.469287 kubelet[2521]: I0625 18:34:23.468961 2521 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 18:34:23.469619 kubelet[2521]: I0625 18:34:23.469593 2521 reconciler.go:26] "Reconciler: start to sync state" Jun 25 18:34:23.471824 kubelet[2521]: E0625 18:34:23.471788 2521 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:34:23.480518 kubelet[2521]: I0625 18:34:23.479610 2521 factory.go:221] Registration of the systemd container factory successfully Jun 25 18:34:23.480715 kubelet[2521]: I0625 18:34:23.480688 2521 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 18:34:23.486368 kubelet[2521]: I0625 18:34:23.486342 2521 factory.go:221] Registration of the containerd container factory successfully Jun 25 18:34:23.487621 kubelet[2521]: I0625 18:34:23.487574 2521 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:34:23.489730 kubelet[2521]: I0625 18:34:23.489699 2521 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 18:34:23.489730 kubelet[2521]: I0625 18:34:23.489734 2521 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:34:23.489821 kubelet[2521]: I0625 18:34:23.489752 2521 kubelet.go:2337] "Starting kubelet main sync loop" Jun 25 18:34:23.489821 kubelet[2521]: E0625 18:34:23.489791 2521 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:34:23.518086 kubelet[2521]: I0625 18:34:23.518058 2521 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:34:23.518507 kubelet[2521]: I0625 18:34:23.518249 2521 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:34:23.518507 kubelet[2521]: I0625 18:34:23.518274 2521 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:34:23.518507 kubelet[2521]: I0625 18:34:23.518417 2521 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 18:34:23.518507 kubelet[2521]: I0625 18:34:23.518426 2521 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 18:34:23.518507 kubelet[2521]: I0625 18:34:23.518441 2521 policy_none.go:49] "None policy: Start" Jun 25 18:34:23.519155 kubelet[2521]: I0625 18:34:23.519139 2521 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 18:34:23.519194 kubelet[2521]: I0625 18:34:23.519164 2521 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:34:23.519333 kubelet[2521]: I0625 18:34:23.519319 2521 state_mem.go:75] "Updated machine memory state" Jun 25 18:34:23.522845 kubelet[2521]: I0625 18:34:23.522819 2521 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:34:23.523182 kubelet[2521]: I0625 18:34:23.522987 2521 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 25 18:34:23.523977 kubelet[2521]: I0625 18:34:23.523954 2521 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:34:23.572914 kubelet[2521]: I0625 18:34:23.572874 2521 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:34:23.578023 kubelet[2521]: I0625 18:34:23.577992 2521 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jun 25 18:34:23.578135 kubelet[2521]: I0625 18:34:23.578068 2521 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jun 25 18:34:23.590133 kubelet[2521]: I0625 18:34:23.590093 2521 topology_manager.go:215] "Topology Admit Handler" podUID="6ea61e1c47c170bc245b808611d96dbe" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 18:34:23.590211 kubelet[2521]: I0625 18:34:23.590196 2521 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 18:34:23.590252 kubelet[2521]: I0625 18:34:23.590231 2521 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 18:34:23.594979 kubelet[2521]: E0625 18:34:23.594945 2521 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 25 18:34:23.771619 kubelet[2521]: I0625 18:34:23.771094 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost" Jun 25 18:34:23.771619 kubelet[2521]: I0625 18:34:23.771135 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ea61e1c47c170bc245b808611d96dbe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ea61e1c47c170bc245b808611d96dbe\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:34:23.771619 kubelet[2521]: I0625 18:34:23.771164 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:34:23.771619 kubelet[2521]: I0625 18:34:23.771195 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:34:23.771619 kubelet[2521]: I0625 18:34:23.771230 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:34:23.771809 kubelet[2521]: I0625 18:34:23.771257 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ea61e1c47c170bc245b808611d96dbe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6ea61e1c47c170bc245b808611d96dbe\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:34:23.771809 kubelet[2521]: I0625 18:34:23.771285 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:34:23.771809 kubelet[2521]: I0625 18:34:23.771312 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:34:23.771809 kubelet[2521]: I0625 18:34:23.771338 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ea61e1c47c170bc245b808611d96dbe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ea61e1c47c170bc245b808611d96dbe\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:34:23.895118 kubelet[2521]: E0625 18:34:23.895080 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:23.896123 kubelet[2521]: E0625 18:34:23.895640 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:23.896123 kubelet[2521]: E0625 18:34:23.896026 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:23.940424 sudo[2555]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 25 18:34:23.940740 sudo[2555]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jun 25 18:34:24.375755 sudo[2555]: pam_unix(sudo:session): session closed for user root Jun 25 18:34:24.458762 kubelet[2521]: I0625 18:34:24.458715 2521 apiserver.go:52] "Watching apiserver" Jun 25 18:34:24.469748 kubelet[2521]: I0625 18:34:24.469713 2521 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jun 25 18:34:24.508342 kubelet[2521]: E0625 18:34:24.508299 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:24.511833 kubelet[2521]: E0625 18:34:24.510143 2521 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jun 25 18:34:24.511833 kubelet[2521]: E0625 18:34:24.510412 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:24.512243 kubelet[2521]: E0625 18:34:24.512122 2521 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jun 25 18:34:24.512522 kubelet[2521]: E0625 18:34:24.512489 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:24.527978 kubelet[2521]: I0625 18:34:24.527925 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.527869424 podStartE2EDuration="1.527869424s" podCreationTimestamp="2024-06-25 18:34:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:34:24.527860983 +0000 UTC m=+1.147521833" watchObservedRunningTime="2024-06-25 18:34:24.527869424 +0000 UTC m=+1.147530274" Jun 25 18:34:24.548916 kubelet[2521]: I0625 18:34:24.548848 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5488154600000001 podStartE2EDuration="1.54881546s" podCreationTimestamp="2024-06-25 18:34:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:34:24.548656397 +0000 UTC m=+1.168317247" watchObservedRunningTime="2024-06-25 18:34:24.54881546 +0000 UTC m=+1.168476310" Jun 25 18:34:24.549096 kubelet[2521]: I0625 18:34:24.548964 2521 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.54895764 podStartE2EDuration="1.54895764s" podCreationTimestamp="2024-06-25 18:34:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:34:24.534687252 +0000 UTC m=+1.154348102" watchObservedRunningTime="2024-06-25 18:34:24.54895764 +0000 UTC m=+1.168618490" Jun 25 18:34:25.506919 kubelet[2521]: E0625 18:34:25.506685 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:25.506919 kubelet[2521]: E0625 18:34:25.506827 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:25.507273 kubelet[2521]: E0625 18:34:25.506954 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:25.877623 sudo[1611]: pam_unix(sudo:session): session closed for user root Jun 25 18:34:25.879332 sshd[1608]: pam_unix(sshd:session): session closed for user core Jun 25 18:34:25.881854 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 18:34:25.882164 systemd[1]: session-7.scope: Consumed 6.908s CPU time, 142.0M memory peak, 0B memory swap peak. Jun 25 18:34:25.882703 systemd[1]: sshd@6-10.0.0.91:22-10.0.0.1:49190.service: Deactivated successfully. Jun 25 18:34:25.885446 systemd-logind[1414]: Session 7 logged out. Waiting for processes to exit. Jun 25 18:34:25.886328 systemd-logind[1414]: Removed session 7. 
Jun 25 18:34:29.074988 kubelet[2521]: E0625 18:34:29.074892 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:29.516424 kubelet[2521]: E0625 18:34:29.516381 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:31.447620 kubelet[2521]: E0625 18:34:31.447542 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:31.517495 kubelet[2521]: E0625 18:34:31.517466 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:31.640464 kubelet[2521]: E0625 18:34:31.640390 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:32.519265 kubelet[2521]: E0625 18:34:32.519230 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:33.520226 kubelet[2521]: E0625 18:34:33.520052 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:36.138258 update_engine[1415]: I0625 18:34:36.138190 1415 update_attempter.cc:509] Updating boot flags... Jun 25 18:34:36.156650 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2603) Jun 25 18:34:36.183654 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2605) Jun 25 18:34:38.696608 kubelet[2521]: I0625 18:34:38.696557 2521 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 18:34:38.696935 containerd[1425]: time="2024-06-25T18:34:38.696865065Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 18:34:38.697103 kubelet[2521]: I0625 18:34:38.697028 2521 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 18:34:39.031906 kubelet[2521]: I0625 18:34:39.031781 2521 topology_manager.go:215] "Topology Admit Handler" podUID="59865495-df12-47ad-bd04-dc5d4830934f" podNamespace="kube-system" podName="cilium-n8gfc" Jun 25 18:34:39.035154 kubelet[2521]: I0625 18:34:39.034289 2521 topology_manager.go:215] "Topology Admit Handler" podUID="73566322-694f-47ed-bfec-3d1687ed0b65" podNamespace="kube-system" podName="kube-proxy-kh6sb" Jun 25 18:34:39.042955 systemd[1]: Created slice kubepods-burstable-pod59865495_df12_47ad_bd04_dc5d4830934f.slice - libcontainer container kubepods-burstable-pod59865495_df12_47ad_bd04_dc5d4830934f.slice. Jun 25 18:34:39.050257 systemd[1]: Created slice kubepods-besteffort-pod73566322_694f_47ed_bfec_3d1687ed0b65.slice - libcontainer container kubepods-besteffort-pod73566322_694f_47ed_bfec_3d1687ed0b65.slice. 
Jun 25 18:34:39.084048 kubelet[2521]: I0625 18:34:39.084015 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-cni-path\") pod \"cilium-n8gfc\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " pod="kube-system/cilium-n8gfc" Jun 25 18:34:39.084233 kubelet[2521]: I0625 18:34:39.084059 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-hostproc\") pod \"cilium-n8gfc\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " pod="kube-system/cilium-n8gfc" Jun 25 18:34:39.084233 kubelet[2521]: I0625 18:34:39.084080 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-host-proc-sys-kernel\") pod \"cilium-n8gfc\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " pod="kube-system/cilium-n8gfc" Jun 25 18:34:39.084233 kubelet[2521]: I0625 18:34:39.084098 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73566322-694f-47ed-bfec-3d1687ed0b65-xtables-lock\") pod \"kube-proxy-kh6sb\" (UID: \"73566322-694f-47ed-bfec-3d1687ed0b65\") " pod="kube-system/kube-proxy-kh6sb" Jun 25 18:34:39.084233 kubelet[2521]: I0625 18:34:39.084113 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-bpf-maps\") pod \"cilium-n8gfc\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " pod="kube-system/cilium-n8gfc" Jun 25 18:34:39.084233 kubelet[2521]: I0625 18:34:39.084127 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-xtables-lock\") pod \"cilium-n8gfc\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " pod="kube-system/cilium-n8gfc" Jun 25 18:34:39.084233 kubelet[2521]: I0625 18:34:39.084141 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59865495-df12-47ad-bd04-dc5d4830934f-cilium-config-path\") pod \"cilium-n8gfc\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " pod="kube-system/cilium-n8gfc" Jun 25 18:34:39.084440 kubelet[2521]: I0625 18:34:39.084155 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-host-proc-sys-net\") pod \"cilium-n8gfc\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " pod="kube-system/cilium-n8gfc" Jun 25 18:34:39.084440 kubelet[2521]: I0625 18:34:39.084170 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/73566322-694f-47ed-bfec-3d1687ed0b65-kube-proxy\") pod \"kube-proxy-kh6sb\" (UID: \"73566322-694f-47ed-bfec-3d1687ed0b65\") " pod="kube-system/kube-proxy-kh6sb" Jun 25 18:34:39.084440 kubelet[2521]: I0625 18:34:39.084187 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/59865495-df12-47ad-bd04-dc5d4830934f-clustermesh-secrets\") pod \"cilium-n8gfc\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " pod="kube-system/cilium-n8gfc" Jun 25 18:34:39.084440 kubelet[2521]: I0625 18:34:39.084202 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-cilium-cgroup\") pod \"cilium-n8gfc\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " pod="kube-system/cilium-n8gfc" Jun 25 18:34:39.084440 kubelet[2521]: I0625 18:34:39.084217 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-cilium-run\") pod \"cilium-n8gfc\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " pod="kube-system/cilium-n8gfc" Jun 25 18:34:39.084440 kubelet[2521]: I0625 18:34:39.084231 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-etc-cni-netd\") pod \"cilium-n8gfc\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " pod="kube-system/cilium-n8gfc" Jun 25 18:34:39.084639 kubelet[2521]: I0625 18:34:39.084245 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-lib-modules\") pod \"cilium-n8gfc\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " pod="kube-system/cilium-n8gfc" Jun 25 18:34:39.084639 kubelet[2521]: I0625 18:34:39.084261 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/59865495-df12-47ad-bd04-dc5d4830934f-hubble-tls\") pod \"cilium-n8gfc\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " pod="kube-system/cilium-n8gfc" Jun 25 18:34:39.084639 kubelet[2521]: I0625 18:34:39.084275 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5gjz\" (UniqueName: \"kubernetes.io/projected/59865495-df12-47ad-bd04-dc5d4830934f-kube-api-access-r5gjz\") pod \"cilium-n8gfc\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " pod="kube-system/cilium-n8gfc" Jun 25 18:34:39.084639 kubelet[2521]: I0625 18:34:39.084290 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgqj9\" (UniqueName: \"kubernetes.io/projected/73566322-694f-47ed-bfec-3d1687ed0b65-kube-api-access-hgqj9\") pod \"kube-proxy-kh6sb\" (UID: \"73566322-694f-47ed-bfec-3d1687ed0b65\") " pod="kube-system/kube-proxy-kh6sb" Jun 25 18:34:39.084639 kubelet[2521]: I0625 18:34:39.084317 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73566322-694f-47ed-bfec-3d1687ed0b65-lib-modules\") pod \"kube-proxy-kh6sb\" (UID: \"73566322-694f-47ed-bfec-3d1687ed0b65\") " pod="kube-system/kube-proxy-kh6sb" Jun 25 18:34:39.194761 kubelet[2521]: E0625 18:34:39.194698 2521 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 25 18:34:39.194761 kubelet[2521]: E0625 18:34:39.194727 2521 projected.go:200] Error preparing data for projected volume kube-api-access-r5gjz for pod 
kube-system/cilium-n8gfc: configmap "kube-root-ca.crt" not found Jun 25 18:34:39.194887 kubelet[2521]: E0625 18:34:39.194773 2521 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59865495-df12-47ad-bd04-dc5d4830934f-kube-api-access-r5gjz podName:59865495-df12-47ad-bd04-dc5d4830934f nodeName:}" failed. No retries permitted until 2024-06-25 18:34:39.694755596 +0000 UTC m=+16.314416446 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-r5gjz" (UniqueName: "kubernetes.io/projected/59865495-df12-47ad-bd04-dc5d4830934f-kube-api-access-r5gjz") pod "cilium-n8gfc" (UID: "59865495-df12-47ad-bd04-dc5d4830934f") : configmap "kube-root-ca.crt" not found Jun 25 18:34:39.195021 kubelet[2521]: E0625 18:34:39.194708 2521 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 25 18:34:39.195021 kubelet[2521]: E0625 18:34:39.194925 2521 projected.go:200] Error preparing data for projected volume kube-api-access-hgqj9 for pod kube-system/kube-proxy-kh6sb: configmap "kube-root-ca.crt" not found Jun 25 18:34:39.195021 kubelet[2521]: E0625 18:34:39.194965 2521 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/73566322-694f-47ed-bfec-3d1687ed0b65-kube-api-access-hgqj9 podName:73566322-694f-47ed-bfec-3d1687ed0b65 nodeName:}" failed. No retries permitted until 2024-06-25 18:34:39.694956569 +0000 UTC m=+16.314617419 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hgqj9" (UniqueName: "kubernetes.io/projected/73566322-694f-47ed-bfec-3d1687ed0b65-kube-api-access-hgqj9") pod "kube-proxy-kh6sb" (UID: "73566322-694f-47ed-bfec-3d1687ed0b65") : configmap "kube-root-ca.crt" not found Jun 25 18:34:39.828512 kubelet[2521]: I0625 18:34:39.827640 2521 topology_manager.go:215] "Topology Admit Handler" podUID="fbcff9f6-408a-4d43-8e97-2002aa791158" podNamespace="kube-system" podName="cilium-operator-599987898-vx8m8" Jun 25 18:34:39.840473 systemd[1]: Created slice kubepods-besteffort-podfbcff9f6_408a_4d43_8e97_2002aa791158.slice - libcontainer container kubepods-besteffort-podfbcff9f6_408a_4d43_8e97_2002aa791158.slice. 
Jun 25 18:34:39.891179 kubelet[2521]: I0625 18:34:39.891134 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgwlg\" (UniqueName: \"kubernetes.io/projected/fbcff9f6-408a-4d43-8e97-2002aa791158-kube-api-access-hgwlg\") pod \"cilium-operator-599987898-vx8m8\" (UID: \"fbcff9f6-408a-4d43-8e97-2002aa791158\") " pod="kube-system/cilium-operator-599987898-vx8m8" Jun 25 18:34:39.891179 kubelet[2521]: I0625 18:34:39.891183 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fbcff9f6-408a-4d43-8e97-2002aa791158-cilium-config-path\") pod \"cilium-operator-599987898-vx8m8\" (UID: \"fbcff9f6-408a-4d43-8e97-2002aa791158\") " pod="kube-system/cilium-operator-599987898-vx8m8" Jun 25 18:34:39.948238 kubelet[2521]: E0625 18:34:39.948208 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:39.948751 containerd[1425]: time="2024-06-25T18:34:39.948718229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n8gfc,Uid:59865495-df12-47ad-bd04-dc5d4830934f,Namespace:kube-system,Attempt:0,}" Jun 25 18:34:39.959332 kubelet[2521]: E0625 18:34:39.958914 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:39.960141 containerd[1425]: time="2024-06-25T18:34:39.959996202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kh6sb,Uid:73566322-694f-47ed-bfec-3d1687ed0b65,Namespace:kube-system,Attempt:0,}" Jun 25 18:34:39.976244 containerd[1425]: time="2024-06-25T18:34:39.976157374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:34:39.976244 containerd[1425]: time="2024-06-25T18:34:39.976216697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:39.976244 containerd[1425]: time="2024-06-25T18:34:39.976245619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:34:39.976244 containerd[1425]: time="2024-06-25T18:34:39.976261220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:39.989884 containerd[1425]: time="2024-06-25T18:34:39.989773899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:34:39.989884 containerd[1425]: time="2024-06-25T18:34:39.989855064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:39.990188 containerd[1425]: time="2024-06-25T18:34:39.990029356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:34:39.990188 containerd[1425]: time="2024-06-25T18:34:39.990051277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:39.997107 systemd[1]: Started cri-containerd-7257f01b71ccb6a307d97a30fc70af367ea89b5821bf723efc240fb597d5a74b.scope - libcontainer container 7257f01b71ccb6a307d97a30fc70af367ea89b5821bf723efc240fb597d5a74b. Jun 25 18:34:40.016810 systemd[1]: Started cri-containerd-3a3956a42714d07470576e212bf3aaa7158f694bd3db4fe7468612136394113e.scope - libcontainer container 3a3956a42714d07470576e212bf3aaa7158f694bd3db4fe7468612136394113e. Jun 25 18:34:40.036102 containerd[1425]: time="2024-06-25T18:34:40.035930516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n8gfc,Uid:59865495-df12-47ad-bd04-dc5d4830934f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7257f01b71ccb6a307d97a30fc70af367ea89b5821bf723efc240fb597d5a74b\"" Jun 25 18:34:40.036906 kubelet[2521]: E0625 18:34:40.036509 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:40.037872 containerd[1425]: time="2024-06-25T18:34:40.037813272Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 25 18:34:40.045311 containerd[1425]: time="2024-06-25T18:34:40.045280255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kh6sb,Uid:73566322-694f-47ed-bfec-3d1687ed0b65,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a3956a42714d07470576e212bf3aaa7158f694bd3db4fe7468612136394113e\"" Jun 25 18:34:40.046224 kubelet[2521]: E0625 18:34:40.046183 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:40.049145 containerd[1425]: time="2024-06-25T18:34:40.049112653Z" level=info msg="CreateContainer within sandbox \"3a3956a42714d07470576e212bf3aaa7158f694bd3db4fe7468612136394113e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 18:34:40.060810 containerd[1425]: time="2024-06-25T18:34:40.060770615Z" level=info msg="CreateContainer within sandbox \"3a3956a42714d07470576e212bf3aaa7158f694bd3db4fe7468612136394113e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2cc4bc5840ea0da7d8bdff3fac6095a2d6f7ff2f317c80872f7ff941c962d44d\"" Jun 25 18:34:40.061532 containerd[1425]: time="2024-06-25T18:34:40.061493260Z" level=info msg="StartContainer for \"2cc4bc5840ea0da7d8bdff3fac6095a2d6f7ff2f317c80872f7ff941c962d44d\"" Jun 25 18:34:40.084762 systemd[1]: Started cri-containerd-2cc4bc5840ea0da7d8bdff3fac6095a2d6f7ff2f317c80872f7ff941c962d44d.scope - libcontainer container 2cc4bc5840ea0da7d8bdff3fac6095a2d6f7ff2f317c80872f7ff941c962d44d. 
Jun 25 18:34:40.110517 containerd[1425]: time="2024-06-25T18:34:40.110478216Z" level=info msg="StartContainer for \"2cc4bc5840ea0da7d8bdff3fac6095a2d6f7ff2f317c80872f7ff941c962d44d\" returns successfully" Jun 25 18:34:40.144576 kubelet[2521]: E0625 18:34:40.144546 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:40.146762 containerd[1425]: time="2024-06-25T18:34:40.146720462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-vx8m8,Uid:fbcff9f6-408a-4d43-8e97-2002aa791158,Namespace:kube-system,Attempt:0,}" Jun 25 18:34:40.168917 containerd[1425]: time="2024-06-25T18:34:40.168804151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:34:40.168917 containerd[1425]: time="2024-06-25T18:34:40.168873715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:40.168917 containerd[1425]: time="2024-06-25T18:34:40.168898637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:34:40.168917 containerd[1425]: time="2024-06-25T18:34:40.168915678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:40.191793 systemd[1]: Started cri-containerd-5f91226ad3beb2027528cd4b4eaff690d7e67d147f6c99917e268c56b803da0a.scope - libcontainer container 5f91226ad3beb2027528cd4b4eaff690d7e67d147f6c99917e268c56b803da0a. Jun 25 18:34:40.237553 containerd[1425]: time="2024-06-25T18:34:40.236543509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-vx8m8,Uid:fbcff9f6-408a-4d43-8e97-2002aa791158,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f91226ad3beb2027528cd4b4eaff690d7e67d147f6c99917e268c56b803da0a\"" Jun 25 18:34:40.238033 kubelet[2521]: E0625 18:34:40.237468 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:40.533000 kubelet[2521]: E0625 18:34:40.532960 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:40.541501 kubelet[2521]: I0625 18:34:40.541450 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kh6sb" podStartSLOduration=1.541435605 podStartE2EDuration="1.541435605s" podCreationTimestamp="2024-06-25 18:34:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:34:40.541162908 +0000 UTC m=+17.160823758" watchObservedRunningTime="2024-06-25 18:34:40.541435605 +0000 UTC m=+17.161096455" Jun 25 18:34:46.861141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1717110304.mount: Deactivated successfully. 
Jun 25 18:34:48.104299 containerd[1425]: time="2024-06-25T18:34:48.104242461Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:48.106681 containerd[1425]: time="2024-06-25T18:34:48.105291547Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651478" Jun 25 18:34:48.108142 containerd[1425]: time="2024-06-25T18:34:48.108084988Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:48.111000 containerd[1425]: time="2024-06-25T18:34:48.110310485Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.072415128s" Jun 25 18:34:48.111000 containerd[1425]: time="2024-06-25T18:34:48.110345007Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jun 25 18:34:48.120186 containerd[1425]: time="2024-06-25T18:34:48.120154553Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 25 18:34:48.122887 containerd[1425]: time="2024-06-25T18:34:48.122855110Z" level=info msg="CreateContainer within sandbox \"7257f01b71ccb6a307d97a30fc70af367ea89b5821bf723efc240fb597d5a74b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 25 18:34:48.135075 containerd[1425]: time="2024-06-25T18:34:48.135032800Z" level=info msg="CreateContainer within sandbox \"7257f01b71ccb6a307d97a30fc70af367ea89b5821bf723efc240fb597d5a74b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cfe0e7aa08de9a95c3968b41e7e3bd288a5fef74957b8cb8d456c86d9e4b24a4\"" Jun 25 18:34:48.135432 containerd[1425]: time="2024-06-25T18:34:48.135403096Z" level=info msg="StartContainer for \"cfe0e7aa08de9a95c3968b41e7e3bd288a5fef74957b8cb8d456c86d9e4b24a4\"" Jun 25 18:34:48.163802 systemd[1]: Started cri-containerd-cfe0e7aa08de9a95c3968b41e7e3bd288a5fef74957b8cb8d456c86d9e4b24a4.scope - libcontainer container cfe0e7aa08de9a95c3968b41e7e3bd288a5fef74957b8cb8d456c86d9e4b24a4. Jun 25 18:34:48.182291 containerd[1425]: time="2024-06-25T18:34:48.182243093Z" level=info msg="StartContainer for \"cfe0e7aa08de9a95c3968b41e7e3bd288a5fef74957b8cb8d456c86d9e4b24a4\" returns successfully" Jun 25 18:34:48.255871 systemd[1]: cri-containerd-cfe0e7aa08de9a95c3968b41e7e3bd288a5fef74957b8cb8d456c86d9e4b24a4.scope: Deactivated successfully. 
Jun 25 18:34:48.388572 containerd[1425]: time="2024-06-25T18:34:48.388459458Z" level=info msg="shim disconnected" id=cfe0e7aa08de9a95c3968b41e7e3bd288a5fef74957b8cb8d456c86d9e4b24a4 namespace=k8s.io Jun 25 18:34:48.388572 containerd[1425]: time="2024-06-25T18:34:48.388512220Z" level=warning msg="cleaning up after shim disconnected" id=cfe0e7aa08de9a95c3968b41e7e3bd288a5fef74957b8cb8d456c86d9e4b24a4 namespace=k8s.io Jun 25 18:34:48.388572 containerd[1425]: time="2024-06-25T18:34:48.388520661Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:34:48.555636 kubelet[2521]: E0625 18:34:48.555348 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:48.558847 containerd[1425]: time="2024-06-25T18:34:48.558809384Z" level=info msg="CreateContainer within sandbox \"7257f01b71ccb6a307d97a30fc70af367ea89b5821bf723efc240fb597d5a74b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 25 18:34:48.574340 containerd[1425]: time="2024-06-25T18:34:48.574185893Z" level=info msg="CreateContainer within sandbox \"7257f01b71ccb6a307d97a30fc70af367ea89b5821bf723efc240fb597d5a74b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"22a4d65d149207418ccacfea9bfe93534c2a586d257e8a97cf59e12f916ba598\"" Jun 25 18:34:48.577223 containerd[1425]: time="2024-06-25T18:34:48.574749438Z" level=info msg="StartContainer for \"22a4d65d149207418ccacfea9bfe93534c2a586d257e8a97cf59e12f916ba598\"" Jun 25 18:34:48.598764 systemd[1]: Started cri-containerd-22a4d65d149207418ccacfea9bfe93534c2a586d257e8a97cf59e12f916ba598.scope - libcontainer container 22a4d65d149207418ccacfea9bfe93534c2a586d257e8a97cf59e12f916ba598. Jun 25 18:34:48.641048 containerd[1425]: time="2024-06-25T18:34:48.640878593Z" level=info msg="StartContainer for \"22a4d65d149207418ccacfea9bfe93534c2a586d257e8a97cf59e12f916ba598\" returns successfully" Jun 25 18:34:48.640999 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:34:48.642024 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:34:48.643056 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:34:48.652437 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:34:48.652635 systemd[1]: cri-containerd-22a4d65d149207418ccacfea9bfe93534c2a586d257e8a97cf59e12f916ba598.scope: Deactivated successfully. Jun 25 18:34:48.703681 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jun 25 18:34:48.706926 containerd[1425]: time="2024-06-25T18:34:48.706870102Z" level=info msg="shim disconnected" id=22a4d65d149207418ccacfea9bfe93534c2a586d257e8a97cf59e12f916ba598 namespace=k8s.io Jun 25 18:34:48.707035 containerd[1425]: time="2024-06-25T18:34:48.706926584Z" level=warning msg="cleaning up after shim disconnected" id=22a4d65d149207418ccacfea9bfe93534c2a586d257e8a97cf59e12f916ba598 namespace=k8s.io Jun 25 18:34:48.707035 containerd[1425]: time="2024-06-25T18:34:48.706955945Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:34:48.716425 containerd[1425]: time="2024-06-25T18:34:48.716359314Z" level=warning msg="cleanup warnings time=\"2024-06-25T18:34:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 25 18:34:49.134416 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfe0e7aa08de9a95c3968b41e7e3bd288a5fef74957b8cb8d456c86d9e4b24a4-rootfs.mount: Deactivated successfully. Jun 25 18:34:49.325677 containerd[1425]: time="2024-06-25T18:34:49.325571445Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:49.326213 containerd[1425]: time="2024-06-25T18:34:49.326178990Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138302" Jun 25 18:34:49.326990 containerd[1425]: time="2024-06-25T18:34:49.326936822Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:49.328453 containerd[1425]: time="2024-06-25T18:34:49.328417524Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.20822605s" Jun 25 18:34:49.328669 containerd[1425]: time="2024-06-25T18:34:49.328546009Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jun 25 18:34:49.330727 containerd[1425]: time="2024-06-25T18:34:49.330700259Z" level=info msg="CreateContainer within sandbox \"5f91226ad3beb2027528cd4b4eaff690d7e67d147f6c99917e268c56b803da0a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 25 18:34:49.340319 containerd[1425]: time="2024-06-25T18:34:49.340215457Z" level=info msg="CreateContainer within sandbox \"5f91226ad3beb2027528cd4b4eaff690d7e67d147f6c99917e268c56b803da0a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6afe7b1273dd5e3637e28687c5beac5e25e4f52ace60cc78caa8265e20af291e\"" Jun 25 18:34:49.341631 containerd[1425]: time="2024-06-25T18:34:49.340887805Z" level=info msg="StartContainer for \"6afe7b1273dd5e3637e28687c5beac5e25e4f52ace60cc78caa8265e20af291e\"" Jun 25 18:34:49.371749 systemd[1]: Started 
cri-containerd-6afe7b1273dd5e3637e28687c5beac5e25e4f52ace60cc78caa8265e20af291e.scope - libcontainer container 6afe7b1273dd5e3637e28687c5beac5e25e4f52ace60cc78caa8265e20af291e. Jun 25 18:34:49.390260 containerd[1425]: time="2024-06-25T18:34:49.390124021Z" level=info msg="StartContainer for \"6afe7b1273dd5e3637e28687c5beac5e25e4f52ace60cc78caa8265e20af291e\" returns successfully" Jun 25 18:34:49.560065 kubelet[2521]: E0625 18:34:49.559535 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:49.563744 kubelet[2521]: E0625 18:34:49.563472 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:49.567112 containerd[1425]: time="2024-06-25T18:34:49.567065811Z" level=info msg="CreateContainer within sandbox \"7257f01b71ccb6a307d97a30fc70af367ea89b5821bf723efc240fb597d5a74b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 25 18:34:49.578249 kubelet[2521]: I0625 18:34:49.578141 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-vx8m8" podStartSLOduration=1.488376006 podStartE2EDuration="10.578126433s" podCreationTimestamp="2024-06-25 18:34:39 +0000 UTC" firstStartedPulling="2024-06-25 18:34:40.239479771 +0000 UTC m=+16.859140581" lastFinishedPulling="2024-06-25 18:34:49.329230158 +0000 UTC m=+25.948891008" observedRunningTime="2024-06-25 18:34:49.577979707 +0000 UTC m=+26.197640557" watchObservedRunningTime="2024-06-25 18:34:49.578126433 +0000 UTC m=+26.197787243" Jun 25 18:34:49.584983 containerd[1425]: time="2024-06-25T18:34:49.584882435Z" level=info msg="CreateContainer within sandbox \"7257f01b71ccb6a307d97a30fc70af367ea89b5821bf723efc240fb597d5a74b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b793417202fa4406adfdb9e40e8a11f0189338c95253615fdb4b86b8fb9cf643\"" Jun 25 18:34:49.585336 containerd[1425]: time="2024-06-25T18:34:49.585282172Z" level=info msg="StartContainer for \"b793417202fa4406adfdb9e40e8a11f0189338c95253615fdb4b86b8fb9cf643\"" Jun 25 18:34:49.622754 systemd[1]: Started cri-containerd-b793417202fa4406adfdb9e40e8a11f0189338c95253615fdb4b86b8fb9cf643.scope - libcontainer container b793417202fa4406adfdb9e40e8a11f0189338c95253615fdb4b86b8fb9cf643. Jun 25 18:34:49.671117 systemd[1]: cri-containerd-b793417202fa4406adfdb9e40e8a11f0189338c95253615fdb4b86b8fb9cf643.scope: Deactivated successfully. 
Jun 25 18:34:49.684373 containerd[1425]: time="2024-06-25T18:34:49.684334749Z" level=info msg="StartContainer for \"b793417202fa4406adfdb9e40e8a11f0189338c95253615fdb4b86b8fb9cf643\" returns successfully" Jun 25 18:34:49.723556 containerd[1425]: time="2024-06-25T18:34:49.723471984Z" level=info msg="shim disconnected" id=b793417202fa4406adfdb9e40e8a11f0189338c95253615fdb4b86b8fb9cf643 namespace=k8s.io Jun 25 18:34:49.723556 containerd[1425]: time="2024-06-25T18:34:49.723547387Z" level=warning msg="cleaning up after shim disconnected" id=b793417202fa4406adfdb9e40e8a11f0189338c95253615fdb4b86b8fb9cf643 namespace=k8s.io Jun 25 18:34:49.723556 containerd[1425]: time="2024-06-25T18:34:49.723556027Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:34:50.566899 kubelet[2521]: E0625 18:34:50.566601 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:50.566899 kubelet[2521]: E0625 18:34:50.566607 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:50.569852 containerd[1425]: time="2024-06-25T18:34:50.569467446Z" level=info msg="CreateContainer within sandbox \"7257f01b71ccb6a307d97a30fc70af367ea89b5821bf723efc240fb597d5a74b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 25 18:34:50.584768 containerd[1425]: time="2024-06-25T18:34:50.584727858Z" level=info msg="CreateContainer within sandbox \"7257f01b71ccb6a307d97a30fc70af367ea89b5821bf723efc240fb597d5a74b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2c908150d82fc09490f5aa90a06555d4920162d2e7e9cb0bcb323d20905beea0\"" Jun 25 18:34:50.585899 containerd[1425]: time="2024-06-25T18:34:50.585204278Z" level=info msg="StartContainer for \"2c908150d82fc09490f5aa90a06555d4920162d2e7e9cb0bcb323d20905beea0\"" Jun 25 18:34:50.609842 systemd[1]: Started cri-containerd-2c908150d82fc09490f5aa90a06555d4920162d2e7e9cb0bcb323d20905beea0.scope - libcontainer container 2c908150d82fc09490f5aa90a06555d4920162d2e7e9cb0bcb323d20905beea0. Jun 25 18:34:50.628317 systemd[1]: cri-containerd-2c908150d82fc09490f5aa90a06555d4920162d2e7e9cb0bcb323d20905beea0.scope: Deactivated successfully. Jun 25 18:34:50.630448 containerd[1425]: time="2024-06-25T18:34:50.630415693Z" level=info msg="StartContainer for \"2c908150d82fc09490f5aa90a06555d4920162d2e7e9cb0bcb323d20905beea0\" returns successfully" Jun 25 18:34:50.644749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c908150d82fc09490f5aa90a06555d4920162d2e7e9cb0bcb323d20905beea0-rootfs.mount: Deactivated successfully. 
Jun 25 18:34:50.648973 containerd[1425]: time="2024-06-25T18:34:50.648767630Z" level=info msg="shim disconnected" id=2c908150d82fc09490f5aa90a06555d4920162d2e7e9cb0bcb323d20905beea0 namespace=k8s.io Jun 25 18:34:50.648973 containerd[1425]: time="2024-06-25T18:34:50.648854794Z" level=warning msg="cleaning up after shim disconnected" id=2c908150d82fc09490f5aa90a06555d4920162d2e7e9cb0bcb323d20905beea0 namespace=k8s.io Jun 25 18:34:50.649188 containerd[1425]: time="2024-06-25T18:34:50.648863434Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:34:51.570058 kubelet[2521]: E0625 18:34:51.569944 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:51.574017 containerd[1425]: time="2024-06-25T18:34:51.573953286Z" level=info msg="CreateContainer within sandbox \"7257f01b71ccb6a307d97a30fc70af367ea89b5821bf723efc240fb597d5a74b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 25 18:34:51.593863 containerd[1425]: time="2024-06-25T18:34:51.593815134Z" level=info msg="CreateContainer within sandbox \"7257f01b71ccb6a307d97a30fc70af367ea89b5821bf723efc240fb597d5a74b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9\"" Jun 25 18:34:51.594510 containerd[1425]: time="2024-06-25T18:34:51.594476599Z" level=info msg="StartContainer for \"9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9\"" Jun 25 18:34:51.628813 systemd[1]: Started cri-containerd-9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9.scope - libcontainer container 9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9. Jun 25 18:34:51.662980 containerd[1425]: time="2024-06-25T18:34:51.662922845Z" level=info msg="StartContainer for \"9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9\" returns successfully" Jun 25 18:34:51.831858 kubelet[2521]: I0625 18:34:51.831758 2521 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 25 18:34:51.849154 kubelet[2521]: I0625 18:34:51.849108 2521 topology_manager.go:215] "Topology Admit Handler" podUID="387576f5-acb8-4cc3-87e1-03e2f498be75" podNamespace="kube-system" podName="coredns-7db6d8ff4d-pjz2v" Jun 25 18:34:51.851928 kubelet[2521]: I0625 18:34:51.851885 2521 topology_manager.go:215] "Topology Admit Handler" podUID="274fcd93-2fff-4e6d-a14c-6a1559353bbc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kzrt7" Jun 25 18:34:51.861699 systemd[1]: Created slice kubepods-burstable-pod387576f5_acb8_4cc3_87e1_03e2f498be75.slice - libcontainer container kubepods-burstable-pod387576f5_acb8_4cc3_87e1_03e2f498be75.slice. 
Jun 25 18:34:51.869742 kubelet[2521]: I0625 18:34:51.869649 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/387576f5-acb8-4cc3-87e1-03e2f498be75-config-volume\") pod \"coredns-7db6d8ff4d-pjz2v\" (UID: \"387576f5-acb8-4cc3-87e1-03e2f498be75\") " pod="kube-system/coredns-7db6d8ff4d-pjz2v" Jun 25 18:34:51.869742 kubelet[2521]: I0625 18:34:51.869686 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc5k2\" (UniqueName: \"kubernetes.io/projected/387576f5-acb8-4cc3-87e1-03e2f498be75-kube-api-access-cc5k2\") pod \"coredns-7db6d8ff4d-pjz2v\" (UID: \"387576f5-acb8-4cc3-87e1-03e2f498be75\") " pod="kube-system/coredns-7db6d8ff4d-pjz2v" Jun 25 18:34:51.869742 kubelet[2521]: I0625 18:34:51.869705 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/274fcd93-2fff-4e6d-a14c-6a1559353bbc-config-volume\") pod \"coredns-7db6d8ff4d-kzrt7\" (UID: \"274fcd93-2fff-4e6d-a14c-6a1559353bbc\") " pod="kube-system/coredns-7db6d8ff4d-kzrt7" Jun 25 18:34:51.869742 kubelet[2521]: I0625 18:34:51.869726 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4st6t\" (UniqueName: \"kubernetes.io/projected/274fcd93-2fff-4e6d-a14c-6a1559353bbc-kube-api-access-4st6t\") pod \"coredns-7db6d8ff4d-kzrt7\" (UID: \"274fcd93-2fff-4e6d-a14c-6a1559353bbc\") " pod="kube-system/coredns-7db6d8ff4d-kzrt7" Jun 25 18:34:51.875997 systemd[1]: Created slice kubepods-burstable-pod274fcd93_2fff_4e6d_a14c_6a1559353bbc.slice - libcontainer container kubepods-burstable-pod274fcd93_2fff_4e6d_a14c_6a1559353bbc.slice. 
Jun 25 18:34:52.168156 kubelet[2521]: E0625 18:34:52.168058 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:52.169483 containerd[1425]: time="2024-06-25T18:34:52.169449629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pjz2v,Uid:387576f5-acb8-4cc3-87e1-03e2f498be75,Namespace:kube-system,Attempt:0,}" Jun 25 18:34:52.179042 kubelet[2521]: E0625 18:34:52.179004 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:52.179964 containerd[1425]: time="2024-06-25T18:34:52.179628608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kzrt7,Uid:274fcd93-2fff-4e6d-a14c-6a1559353bbc,Namespace:kube-system,Attempt:0,}" Jun 25 18:34:52.574852 kubelet[2521]: E0625 18:34:52.574821 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:52.600544 kubelet[2521]: I0625 18:34:52.599630 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n8gfc" podStartSLOduration=5.51698783 podStartE2EDuration="13.599615092s" podCreationTimestamp="2024-06-25 18:34:39 +0000 UTC" firstStartedPulling="2024-06-25 18:34:40.037376885 +0000 UTC m=+16.657037735" lastFinishedPulling="2024-06-25 18:34:48.120004147 +0000 UTC m=+24.739664997" observedRunningTime="2024-06-25 18:34:52.596363611 +0000 UTC m=+29.216024461" watchObservedRunningTime="2024-06-25 18:34:52.599615092 +0000 UTC m=+29.219275942" Jun 25 18:34:53.060862 systemd[1]: Started sshd@7-10.0.0.91:22-10.0.0.1:58852.service - OpenSSH per-connection server daemon (10.0.0.1:58852). Jun 25 18:34:53.101658 sshd[3368]: Accepted publickey for core from 10.0.0.1 port 58852 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:34:53.103065 sshd[3368]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:34:53.106413 systemd-logind[1414]: New session 8 of user core. Jun 25 18:34:53.118720 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 18:34:53.231846 sshd[3368]: pam_unix(sshd:session): session closed for user core Jun 25 18:34:53.235042 systemd[1]: sshd@7-10.0.0.91:22-10.0.0.1:58852.service: Deactivated successfully. Jun 25 18:34:53.236689 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 18:34:53.237268 systemd-logind[1414]: Session 8 logged out. Waiting for processes to exit. Jun 25 18:34:53.238115 systemd-logind[1414]: Removed session 8. 
Jun 25 18:34:53.576451 kubelet[2521]: E0625 18:34:53.576422 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:53.866088 systemd-networkd[1363]: cilium_host: Link UP Jun 25 18:34:53.866733 systemd-networkd[1363]: cilium_net: Link UP Jun 25 18:34:53.867054 systemd-networkd[1363]: cilium_net: Gained carrier Jun 25 18:34:53.867218 systemd-networkd[1363]: cilium_host: Gained carrier Jun 25 18:34:53.944346 systemd-networkd[1363]: cilium_vxlan: Link UP Jun 25 18:34:53.944357 systemd-networkd[1363]: cilium_vxlan: Gained carrier Jun 25 18:34:54.174728 systemd-networkd[1363]: cilium_net: Gained IPv6LL Jun 25 18:34:54.206606 kernel: NET: Registered PF_ALG protocol family Jun 25 18:34:54.578668 kubelet[2521]: E0625 18:34:54.578349 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:54.760741 systemd-networkd[1363]: cilium_host: Gained IPv6LL Jun 25 18:34:54.770308 systemd-networkd[1363]: lxc_health: Link UP Jun 25 18:34:54.772337 systemd-networkd[1363]: lxc_health: Gained carrier Jun 25 18:34:55.304975 systemd-networkd[1363]: lxcb4854ecc97fd: Link UP Jun 25 18:34:55.322619 kernel: eth0: renamed from tmpc01f0 Jun 25 18:34:55.327701 kernel: eth0: renamed from tmpe1316 Jun 25 18:34:55.331817 systemd-networkd[1363]: lxc514d72196b85: Link UP Jun 25 18:34:55.335449 systemd-networkd[1363]: lxc514d72196b85: Gained carrier Jun 25 18:34:55.336273 systemd-networkd[1363]: lxcb4854ecc97fd: Gained carrier Jun 25 18:34:55.336412 systemd-networkd[1363]: cilium_vxlan: Gained IPv6LL Jun 25 18:34:55.950780 kubelet[2521]: E0625 18:34:55.950745 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:56.295776 systemd-networkd[1363]: lxc_health: Gained IPv6LL Jun 25 18:34:56.422816 systemd-networkd[1363]: lxcb4854ecc97fd: Gained IPv6LL Jun 25 18:34:56.423315 systemd-networkd[1363]: lxc514d72196b85: Gained IPv6LL Jun 25 18:34:56.580998 kubelet[2521]: E0625 18:34:56.580646 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:57.582632 kubelet[2521]: E0625 18:34:57.582571 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:58.250249 systemd[1]: Started sshd@8-10.0.0.91:22-10.0.0.1:58854.service - OpenSSH per-connection server daemon (10.0.0.1:58854). Jun 25 18:34:58.305999 sshd[3766]: Accepted publickey for core from 10.0.0.1 port 58854 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:34:58.306634 sshd[3766]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:34:58.310933 systemd-logind[1414]: New session 9 of user core. Jun 25 18:34:58.323774 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 18:34:58.448291 sshd[3766]: pam_unix(sshd:session): session closed for user core Jun 25 18:34:58.451375 systemd[1]: sshd@8-10.0.0.91:22-10.0.0.1:58854.service: Deactivated successfully. Jun 25 18:34:58.453248 systemd[1]: session-9.scope: Deactivated successfully. 
Jun 25 18:34:58.453968 systemd-logind[1414]: Session 9 logged out. Waiting for processes to exit. Jun 25 18:34:58.454934 systemd-logind[1414]: Removed session 9. Jun 25 18:34:58.863027 containerd[1425]: time="2024-06-25T18:34:58.862927736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:34:58.863027 containerd[1425]: time="2024-06-25T18:34:58.862986698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:58.863535 containerd[1425]: time="2024-06-25T18:34:58.863006858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:34:58.863535 containerd[1425]: time="2024-06-25T18:34:58.863017099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:58.863806 containerd[1425]: time="2024-06-25T18:34:58.863719360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:34:58.863806 containerd[1425]: time="2024-06-25T18:34:58.863760641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:58.863806 containerd[1425]: time="2024-06-25T18:34:58.863775642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:34:58.863806 containerd[1425]: time="2024-06-25T18:34:58.863786482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:58.892789 systemd[1]: Started cri-containerd-c01f04c3f705f2d91d74fdd41a614fd2a22c92d34051b9162ab6444d39ce3152.scope - libcontainer container c01f04c3f705f2d91d74fdd41a614fd2a22c92d34051b9162ab6444d39ce3152. Jun 25 18:34:58.894415 systemd[1]: Started cri-containerd-e1316b1a1e1276feab48841869c961d5ec0c8255abef4f9aec5763e427f46f68.scope - libcontainer container e1316b1a1e1276feab48841869c961d5ec0c8255abef4f9aec5763e427f46f68. 
Jun 25 18:34:58.905430 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:34:58.908721 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:34:58.929860 containerd[1425]: time="2024-06-25T18:34:58.929761532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pjz2v,Uid:387576f5-acb8-4cc3-87e1-03e2f498be75,Namespace:kube-system,Attempt:0,} returns sandbox id \"c01f04c3f705f2d91d74fdd41a614fd2a22c92d34051b9162ab6444d39ce3152\"" Jun 25 18:34:58.931026 kubelet[2521]: E0625 18:34:58.930672 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:58.933543 containerd[1425]: time="2024-06-25T18:34:58.932710102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kzrt7,Uid:274fcd93-2fff-4e6d-a14c-6a1559353bbc,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1316b1a1e1276feab48841869c961d5ec0c8255abef4f9aec5763e427f46f68\"" Jun 25 18:34:58.934068 kubelet[2521]: E0625 18:34:58.934047 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:58.936068 containerd[1425]: time="2024-06-25T18:34:58.935898879Z" level=info msg="CreateContainer within sandbox \"c01f04c3f705f2d91d74fdd41a614fd2a22c92d34051b9162ab6444d39ce3152\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:34:58.937382 containerd[1425]: time="2024-06-25T18:34:58.937113756Z" level=info msg="CreateContainer within sandbox \"e1316b1a1e1276feab48841869c961d5ec0c8255abef4f9aec5763e427f46f68\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:34:58.953776 containerd[1425]: time="2024-06-25T18:34:58.953726102Z" level=info msg="CreateContainer within sandbox \"e1316b1a1e1276feab48841869c961d5ec0c8255abef4f9aec5763e427f46f68\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ebb96952bd34b2575ba26fe475ec05193ceac47de8703851be8f0574a7c136d\"" Jun 25 18:34:58.954845 containerd[1425]: time="2024-06-25T18:34:58.954764013Z" level=info msg="CreateContainer within sandbox \"c01f04c3f705f2d91d74fdd41a614fd2a22c92d34051b9162ab6444d39ce3152\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"01e11461440a680a398776839373f356717e7a62392a4eb074353116523122ce\"" Jun 25 18:34:58.956077 containerd[1425]: time="2024-06-25T18:34:58.955231988Z" level=info msg="StartContainer for \"01e11461440a680a398776839373f356717e7a62392a4eb074353116523122ce\"" Jun 25 18:34:58.956077 containerd[1425]: time="2024-06-25T18:34:58.955653200Z" level=info msg="StartContainer for \"0ebb96952bd34b2575ba26fe475ec05193ceac47de8703851be8f0574a7c136d\"" Jun 25 18:34:58.978750 systemd[1]: Started cri-containerd-01e11461440a680a398776839373f356717e7a62392a4eb074353116523122ce.scope - libcontainer container 01e11461440a680a398776839373f356717e7a62392a4eb074353116523122ce. Jun 25 18:34:58.981101 systemd[1]: Started cri-containerd-0ebb96952bd34b2575ba26fe475ec05193ceac47de8703851be8f0574a7c136d.scope - libcontainer container 0ebb96952bd34b2575ba26fe475ec05193ceac47de8703851be8f0574a7c136d. 
Jun 25 18:34:59.003514 containerd[1425]: time="2024-06-25T18:34:59.003472575Z" level=info msg="StartContainer for \"01e11461440a680a398776839373f356717e7a62392a4eb074353116523122ce\" returns successfully" Jun 25 18:34:59.018090 containerd[1425]: time="2024-06-25T18:34:59.018033726Z" level=info msg="StartContainer for \"0ebb96952bd34b2575ba26fe475ec05193ceac47de8703851be8f0574a7c136d\" returns successfully" Jun 25 18:34:59.588987 kubelet[2521]: E0625 18:34:59.588804 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:59.590070 kubelet[2521]: E0625 18:34:59.590011 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:34:59.596609 kubelet[2521]: I0625 18:34:59.596487 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-pjz2v" podStartSLOduration=20.596475506 podStartE2EDuration="20.596475506s" podCreationTimestamp="2024-06-25 18:34:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:34:59.596288661 +0000 UTC m=+36.215949551" watchObservedRunningTime="2024-06-25 18:34:59.596475506 +0000 UTC m=+36.216136356" Jun 25 18:34:59.616140 kubelet[2521]: I0625 18:34:59.616066 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-kzrt7" podStartSLOduration=20.616049765 podStartE2EDuration="20.616049765s" podCreationTimestamp="2024-06-25 18:34:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:34:59.61589372 +0000 UTC m=+36.235554570" watchObservedRunningTime="2024-06-25 18:34:59.616049765 +0000 UTC m=+36.235710615" Jun 25 18:35:00.591685 kubelet[2521]: E0625 18:35:00.591605 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:35:00.592013 kubelet[2521]: E0625 18:35:00.591726 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:35:01.593103 kubelet[2521]: E0625 18:35:01.593050 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:35:03.462247 systemd[1]: Started sshd@9-10.0.0.91:22-10.0.0.1:57888.service - OpenSSH per-connection server daemon (10.0.0.1:57888). Jun 25 18:35:03.504776 sshd[3952]: Accepted publickey for core from 10.0.0.1 port 57888 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:35:03.506368 sshd[3952]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:03.510564 systemd-logind[1414]: New session 10 of user core. Jun 25 18:35:03.520719 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 18:35:03.637635 sshd[3952]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:03.651296 systemd[1]: sshd@9-10.0.0.91:22-10.0.0.1:57888.service: Deactivated successfully. Jun 25 18:35:03.652793 systemd[1]: session-10.scope: Deactivated successfully. 
Jun 25 18:35:03.654289 systemd-logind[1414]: Session 10 logged out. Waiting for processes to exit. Jun 25 18:35:03.663878 systemd[1]: Started sshd@10-10.0.0.91:22-10.0.0.1:57902.service - OpenSSH per-connection server daemon (10.0.0.1:57902). Jun 25 18:35:03.665254 systemd-logind[1414]: Removed session 10. Jun 25 18:35:03.700819 sshd[3968]: Accepted publickey for core from 10.0.0.1 port 57902 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:35:03.702046 sshd[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:03.706500 systemd-logind[1414]: New session 11 of user core. Jun 25 18:35:03.715743 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 18:35:03.855117 sshd[3968]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:03.866911 systemd[1]: sshd@10-10.0.0.91:22-10.0.0.1:57902.service: Deactivated successfully. Jun 25 18:35:03.869801 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 18:35:03.871296 systemd-logind[1414]: Session 11 logged out. Waiting for processes to exit. Jun 25 18:35:03.879498 systemd[1]: Started sshd@11-10.0.0.91:22-10.0.0.1:57914.service - OpenSSH per-connection server daemon (10.0.0.1:57914). Jun 25 18:35:03.881836 systemd-logind[1414]: Removed session 11. Jun 25 18:35:03.916643 sshd[3980]: Accepted publickey for core from 10.0.0.1 port 57914 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:35:03.918066 sshd[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:03.922917 systemd-logind[1414]: New session 12 of user core. Jun 25 18:35:03.935755 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 18:35:04.042228 sshd[3980]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:04.046050 systemd[1]: sshd@11-10.0.0.91:22-10.0.0.1:57914.service: Deactivated successfully. Jun 25 18:35:04.048180 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 18:35:04.048920 systemd-logind[1414]: Session 12 logged out. Waiting for processes to exit. Jun 25 18:35:04.049851 systemd-logind[1414]: Removed session 12. Jun 25 18:35:09.058317 systemd[1]: Started sshd@12-10.0.0.91:22-10.0.0.1:45398.service - OpenSSH per-connection server daemon (10.0.0.1:45398). Jun 25 18:35:09.097064 sshd[3994]: Accepted publickey for core from 10.0.0.1 port 45398 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:35:09.098249 sshd[3994]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:09.102084 systemd-logind[1414]: New session 13 of user core. Jun 25 18:35:09.114733 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 18:35:09.218463 sshd[3994]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:09.221483 systemd[1]: sshd@12-10.0.0.91:22-10.0.0.1:45398.service: Deactivated successfully. Jun 25 18:35:09.223141 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 18:35:09.223803 systemd-logind[1414]: Session 13 logged out. Waiting for processes to exit. Jun 25 18:35:09.224695 systemd-logind[1414]: Removed session 13. Jun 25 18:35:14.243554 systemd[1]: Started sshd@13-10.0.0.91:22-10.0.0.1:45414.service - OpenSSH per-connection server daemon (10.0.0.1:45414). 
Jun 25 18:35:14.282422 sshd[4010]: Accepted publickey for core from 10.0.0.1 port 45414 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:35:14.283694 sshd[4010]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:14.289026 systemd-logind[1414]: New session 14 of user core. Jun 25 18:35:14.306693 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 18:35:14.422147 sshd[4010]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:14.432000 systemd[1]: sshd@13-10.0.0.91:22-10.0.0.1:45414.service: Deactivated successfully. Jun 25 18:35:14.433659 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 18:35:14.435342 systemd-logind[1414]: Session 14 logged out. Waiting for processes to exit. Jun 25 18:35:14.436648 systemd[1]: Started sshd@14-10.0.0.91:22-10.0.0.1:45416.service - OpenSSH per-connection server daemon (10.0.0.1:45416). Jun 25 18:35:14.437510 systemd-logind[1414]: Removed session 14. Jun 25 18:35:14.478938 sshd[4024]: Accepted publickey for core from 10.0.0.1 port 45416 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:35:14.480049 sshd[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:14.484014 systemd-logind[1414]: New session 15 of user core. Jun 25 18:35:14.493764 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 18:35:14.702779 sshd[4024]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:14.712998 systemd[1]: sshd@14-10.0.0.91:22-10.0.0.1:45416.service: Deactivated successfully. Jun 25 18:35:14.716024 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 18:35:14.717774 systemd-logind[1414]: Session 15 logged out. Waiting for processes to exit. Jun 25 18:35:14.727985 systemd[1]: Started sshd@15-10.0.0.91:22-10.0.0.1:45428.service - OpenSSH per-connection server daemon (10.0.0.1:45428). Jun 25 18:35:14.729939 systemd-logind[1414]: Removed session 15. Jun 25 18:35:14.783619 sshd[4036]: Accepted publickey for core from 10.0.0.1 port 45428 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:35:14.784399 sshd[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:14.788261 systemd-logind[1414]: New session 16 of user core. Jun 25 18:35:14.798722 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 18:35:16.046816 sshd[4036]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:16.058349 systemd[1]: sshd@15-10.0.0.91:22-10.0.0.1:45428.service: Deactivated successfully. Jun 25 18:35:16.061434 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 18:35:16.062853 systemd-logind[1414]: Session 16 logged out. Waiting for processes to exit. Jun 25 18:35:16.068897 systemd[1]: Started sshd@16-10.0.0.91:22-10.0.0.1:45438.service - OpenSSH per-connection server daemon (10.0.0.1:45438). Jun 25 18:35:16.070541 systemd-logind[1414]: Removed session 16. Jun 25 18:35:16.106925 sshd[4059]: Accepted publickey for core from 10.0.0.1 port 45438 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:35:16.108197 sshd[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:16.111629 systemd-logind[1414]: New session 17 of user core. Jun 25 18:35:16.122826 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jun 25 18:35:16.336815 sshd[4059]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:16.347620 systemd[1]: sshd@16-10.0.0.91:22-10.0.0.1:45438.service: Deactivated successfully. Jun 25 18:35:16.349329 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 18:35:16.351153 systemd-logind[1414]: Session 17 logged out. Waiting for processes to exit. Jun 25 18:35:16.358868 systemd[1]: Started sshd@17-10.0.0.91:22-10.0.0.1:45448.service - OpenSSH per-connection server daemon (10.0.0.1:45448). Jun 25 18:35:16.360027 systemd-logind[1414]: Removed session 17. Jun 25 18:35:16.396563 sshd[4072]: Accepted publickey for core from 10.0.0.1 port 45448 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:35:16.397934 sshd[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:16.401605 systemd-logind[1414]: New session 18 of user core. Jun 25 18:35:16.410726 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 18:35:16.517028 sshd[4072]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:16.519670 systemd[1]: sshd@17-10.0.0.91:22-10.0.0.1:45448.service: Deactivated successfully. Jun 25 18:35:16.522035 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 18:35:16.523661 systemd-logind[1414]: Session 18 logged out. Waiting for processes to exit. Jun 25 18:35:16.524576 systemd-logind[1414]: Removed session 18. Jun 25 18:35:21.528017 systemd[1]: Started sshd@18-10.0.0.91:22-10.0.0.1:34516.service - OpenSSH per-connection server daemon (10.0.0.1:34516). Jun 25 18:35:21.567114 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 34516 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:35:21.568982 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:21.574217 systemd-logind[1414]: New session 19 of user core. Jun 25 18:35:21.583837 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 18:35:21.686032 sshd[4090]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:21.689109 systemd[1]: sshd@18-10.0.0.91:22-10.0.0.1:34516.service: Deactivated successfully. Jun 25 18:35:21.690851 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 18:35:21.691499 systemd-logind[1414]: Session 19 logged out. Waiting for processes to exit. Jun 25 18:35:21.692307 systemd-logind[1414]: Removed session 19. Jun 25 18:35:26.697275 systemd[1]: Started sshd@19-10.0.0.91:22-10.0.0.1:34528.service - OpenSSH per-connection server daemon (10.0.0.1:34528). Jun 25 18:35:26.736736 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 34528 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:35:26.738018 sshd[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:26.741778 systemd-logind[1414]: New session 20 of user core. Jun 25 18:35:26.752761 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 18:35:26.861200 sshd[4106]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:26.864880 systemd[1]: sshd@19-10.0.0.91:22-10.0.0.1:34528.service: Deactivated successfully. Jun 25 18:35:26.867730 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 18:35:26.868487 systemd-logind[1414]: Session 20 logged out. Waiting for processes to exit. Jun 25 18:35:26.869565 systemd-logind[1414]: Removed session 20. 
Jun 25 18:35:31.872115 systemd[1]: Started sshd@20-10.0.0.91:22-10.0.0.1:47090.service - OpenSSH per-connection server daemon (10.0.0.1:47090). Jun 25 18:35:31.911158 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 47090 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:35:31.912482 sshd[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:31.916254 systemd-logind[1414]: New session 21 of user core. Jun 25 18:35:31.933723 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 18:35:32.036220 sshd[4122]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:32.053049 systemd[1]: sshd@20-10.0.0.91:22-10.0.0.1:47090.service: Deactivated successfully. Jun 25 18:35:32.054801 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 18:35:32.056345 systemd-logind[1414]: Session 21 logged out. Waiting for processes to exit. Jun 25 18:35:32.062006 systemd[1]: Started sshd@21-10.0.0.91:22-10.0.0.1:47102.service - OpenSSH per-connection server daemon (10.0.0.1:47102). Jun 25 18:35:32.063067 systemd-logind[1414]: Removed session 21. Jun 25 18:35:32.096290 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 47102 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:35:32.097450 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:32.100905 systemd-logind[1414]: New session 22 of user core. Jun 25 18:35:32.110821 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 18:35:34.554203 containerd[1425]: time="2024-06-25T18:35:34.553417497Z" level=info msg="StopContainer for \"6afe7b1273dd5e3637e28687c5beac5e25e4f52ace60cc78caa8265e20af291e\" with timeout 30 (s)" Jun 25 18:35:34.577699 containerd[1425]: time="2024-06-25T18:35:34.577663556Z" level=info msg="Stop container \"6afe7b1273dd5e3637e28687c5beac5e25e4f52ace60cc78caa8265e20af291e\" with signal terminated" Jun 25 18:35:34.587308 containerd[1425]: time="2024-06-25T18:35:34.586292434Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:35:34.586542 systemd[1]: cri-containerd-6afe7b1273dd5e3637e28687c5beac5e25e4f52ace60cc78caa8265e20af291e.scope: Deactivated successfully. Jun 25 18:35:34.593310 containerd[1425]: time="2024-06-25T18:35:34.593264297Z" level=info msg="StopContainer for \"9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9\" with timeout 2 (s)" Jun 25 18:35:34.593703 containerd[1425]: time="2024-06-25T18:35:34.593672860Z" level=info msg="Stop container \"9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9\" with signal terminated" Jun 25 18:35:34.599565 systemd-networkd[1363]: lxc_health: Link DOWN Jun 25 18:35:34.599985 systemd-networkd[1363]: lxc_health: Lost carrier Jun 25 18:35:34.609065 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6afe7b1273dd5e3637e28687c5beac5e25e4f52ace60cc78caa8265e20af291e-rootfs.mount: Deactivated successfully. 
Jun 25 18:35:34.615467 containerd[1425]: time="2024-06-25T18:35:34.615416776Z" level=info msg="shim disconnected" id=6afe7b1273dd5e3637e28687c5beac5e25e4f52ace60cc78caa8265e20af291e namespace=k8s.io Jun 25 18:35:34.615467 containerd[1425]: time="2024-06-25T18:35:34.615465937Z" level=warning msg="cleaning up after shim disconnected" id=6afe7b1273dd5e3637e28687c5beac5e25e4f52ace60cc78caa8265e20af291e namespace=k8s.io Jun 25 18:35:34.615752 containerd[1425]: time="2024-06-25T18:35:34.615474217Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:35:34.622961 systemd[1]: cri-containerd-9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9.scope: Deactivated successfully. Jun 25 18:35:34.623204 systemd[1]: cri-containerd-9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9.scope: Consumed 6.390s CPU time. Jun 25 18:35:34.629065 containerd[1425]: time="2024-06-25T18:35:34.627366204Z" level=warning msg="cleanup warnings time=\"2024-06-25T18:35:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 25 18:35:34.632333 containerd[1425]: time="2024-06-25T18:35:34.632299968Z" level=info msg="StopContainer for \"6afe7b1273dd5e3637e28687c5beac5e25e4f52ace60cc78caa8265e20af291e\" returns successfully" Jun 25 18:35:34.634340 containerd[1425]: time="2024-06-25T18:35:34.632950094Z" level=info msg="StopPodSandbox for \"5f91226ad3beb2027528cd4b4eaff690d7e67d147f6c99917e268c56b803da0a\"" Jun 25 18:35:34.635347 containerd[1425]: time="2024-06-25T18:35:34.632993335Z" level=info msg="Container to stop \"6afe7b1273dd5e3637e28687c5beac5e25e4f52ace60cc78caa8265e20af291e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:35:34.636872 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f91226ad3beb2027528cd4b4eaff690d7e67d147f6c99917e268c56b803da0a-shm.mount: Deactivated successfully. Jun 25 18:35:34.644039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9-rootfs.mount: Deactivated successfully. Jun 25 18:35:34.644763 systemd[1]: cri-containerd-5f91226ad3beb2027528cd4b4eaff690d7e67d147f6c99917e268c56b803da0a.scope: Deactivated successfully. Jun 25 18:35:34.657381 containerd[1425]: time="2024-06-25T18:35:34.657190193Z" level=info msg="shim disconnected" id=9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9 namespace=k8s.io Jun 25 18:35:34.657381 containerd[1425]: time="2024-06-25T18:35:34.657240633Z" level=warning msg="cleaning up after shim disconnected" id=9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9 namespace=k8s.io Jun 25 18:35:34.657381 containerd[1425]: time="2024-06-25T18:35:34.657249993Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:35:34.668740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f91226ad3beb2027528cd4b4eaff690d7e67d147f6c99917e268c56b803da0a-rootfs.mount: Deactivated successfully. 
Jun 25 18:35:34.669901 containerd[1425]: time="2024-06-25T18:35:34.669685985Z" level=info msg="shim disconnected" id=5f91226ad3beb2027528cd4b4eaff690d7e67d147f6c99917e268c56b803da0a namespace=k8s.io Jun 25 18:35:34.670138 containerd[1425]: time="2024-06-25T18:35:34.670008468Z" level=warning msg="cleaning up after shim disconnected" id=5f91226ad3beb2027528cd4b4eaff690d7e67d147f6c99917e268c56b803da0a namespace=k8s.io Jun 25 18:35:34.670138 containerd[1425]: time="2024-06-25T18:35:34.670024348Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:35:34.671957 containerd[1425]: time="2024-06-25T18:35:34.671857805Z" level=info msg="StopContainer for \"9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9\" returns successfully" Jun 25 18:35:34.672317 containerd[1425]: time="2024-06-25T18:35:34.672280649Z" level=info msg="StopPodSandbox for \"7257f01b71ccb6a307d97a30fc70af367ea89b5821bf723efc240fb597d5a74b\"" Jun 25 18:35:34.672371 containerd[1425]: time="2024-06-25T18:35:34.672343129Z" level=info msg="Container to stop \"22a4d65d149207418ccacfea9bfe93534c2a586d257e8a97cf59e12f916ba598\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:35:34.672371 containerd[1425]: time="2024-06-25T18:35:34.672356129Z" level=info msg="Container to stop \"9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:35:34.672371 containerd[1425]: time="2024-06-25T18:35:34.672365850Z" level=info msg="Container to stop \"cfe0e7aa08de9a95c3968b41e7e3bd288a5fef74957b8cb8d456c86d9e4b24a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:35:34.672435 containerd[1425]: time="2024-06-25T18:35:34.672376770Z" level=info msg="Container to stop \"b793417202fa4406adfdb9e40e8a11f0189338c95253615fdb4b86b8fb9cf643\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:35:34.672435 containerd[1425]: time="2024-06-25T18:35:34.672387290Z" level=info msg="Container to stop \"2c908150d82fc09490f5aa90a06555d4920162d2e7e9cb0bcb323d20905beea0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:35:34.677237 systemd[1]: cri-containerd-7257f01b71ccb6a307d97a30fc70af367ea89b5821bf723efc240fb597d5a74b.scope: Deactivated successfully. 
Jun 25 18:35:34.682869 containerd[1425]: time="2024-06-25T18:35:34.682833344Z" level=info msg="TearDown network for sandbox \"5f91226ad3beb2027528cd4b4eaff690d7e67d147f6c99917e268c56b803da0a\" successfully" Jun 25 18:35:34.682869 containerd[1425]: time="2024-06-25T18:35:34.682862424Z" level=info msg="StopPodSandbox for \"5f91226ad3beb2027528cd4b4eaff690d7e67d147f6c99917e268c56b803da0a\" returns successfully" Jun 25 18:35:34.700782 containerd[1425]: time="2024-06-25T18:35:34.700715865Z" level=info msg="shim disconnected" id=7257f01b71ccb6a307d97a30fc70af367ea89b5821bf723efc240fb597d5a74b namespace=k8s.io Jun 25 18:35:34.700782 containerd[1425]: time="2024-06-25T18:35:34.700769626Z" level=warning msg="cleaning up after shim disconnected" id=7257f01b71ccb6a307d97a30fc70af367ea89b5821bf723efc240fb597d5a74b namespace=k8s.io Jun 25 18:35:34.700782 containerd[1425]: time="2024-06-25T18:35:34.700778146Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:35:34.712263 containerd[1425]: time="2024-06-25T18:35:34.712211009Z" level=info msg="TearDown network for sandbox \"7257f01b71ccb6a307d97a30fc70af367ea89b5821bf723efc240fb597d5a74b\" successfully" Jun 25 18:35:34.712263 containerd[1425]: time="2024-06-25T18:35:34.712248769Z" level=info msg="StopPodSandbox for \"7257f01b71ccb6a307d97a30fc70af367ea89b5821bf723efc240fb597d5a74b\" returns successfully" Jun 25 18:35:34.715327 kubelet[2521]: I0625 18:35:34.715292 2521 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fbcff9f6-408a-4d43-8e97-2002aa791158-cilium-config-path\") pod \"fbcff9f6-408a-4d43-8e97-2002aa791158\" (UID: \"fbcff9f6-408a-4d43-8e97-2002aa791158\") " Jun 25 18:35:34.715697 kubelet[2521]: I0625 18:35:34.715333 2521 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgwlg\" (UniqueName: \"kubernetes.io/projected/fbcff9f6-408a-4d43-8e97-2002aa791158-kube-api-access-hgwlg\") pod \"fbcff9f6-408a-4d43-8e97-2002aa791158\" (UID: \"fbcff9f6-408a-4d43-8e97-2002aa791158\") " Jun 25 18:35:34.717225 kubelet[2521]: I0625 18:35:34.717183 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbcff9f6-408a-4d43-8e97-2002aa791158-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fbcff9f6-408a-4d43-8e97-2002aa791158" (UID: "fbcff9f6-408a-4d43-8e97-2002aa791158"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:35:34.718467 kubelet[2521]: I0625 18:35:34.718435 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbcff9f6-408a-4d43-8e97-2002aa791158-kube-api-access-hgwlg" (OuterVolumeSpecName: "kube-api-access-hgwlg") pod "fbcff9f6-408a-4d43-8e97-2002aa791158" (UID: "fbcff9f6-408a-4d43-8e97-2002aa791158"). InnerVolumeSpecName "kube-api-access-hgwlg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:35:34.817285 kubelet[2521]: I0625 18:35:34.816412 2521 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-host-proc-sys-net\") pod \"59865495-df12-47ad-bd04-dc5d4830934f\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " Jun 25 18:35:34.817285 kubelet[2521]: I0625 18:35:34.816452 2521 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-cilium-run\") pod \"59865495-df12-47ad-bd04-dc5d4830934f\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " Jun 25 18:35:34.817285 kubelet[2521]: I0625 18:35:34.816473 2521 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-xtables-lock\") pod \"59865495-df12-47ad-bd04-dc5d4830934f\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " Jun 25 18:35:34.817285 kubelet[2521]: I0625 18:35:34.816489 2521 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-lib-modules\") pod \"59865495-df12-47ad-bd04-dc5d4830934f\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " Jun 25 18:35:34.817285 kubelet[2521]: I0625 18:35:34.816511 2521 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/59865495-df12-47ad-bd04-dc5d4830934f-hubble-tls\") pod \"59865495-df12-47ad-bd04-dc5d4830934f\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " Jun 25 18:35:34.817285 kubelet[2521]: I0625 18:35:34.816525 2521 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-bpf-maps\") pod \"59865495-df12-47ad-bd04-dc5d4830934f\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " Jun 25 18:35:34.817510 kubelet[2521]: I0625 18:35:34.816538 2521 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-cilium-cgroup\") pod \"59865495-df12-47ad-bd04-dc5d4830934f\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " Jun 25 18:35:34.817510 kubelet[2521]: I0625 18:35:34.816555 2521 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5gjz\" (UniqueName: \"kubernetes.io/projected/59865495-df12-47ad-bd04-dc5d4830934f-kube-api-access-r5gjz\") pod \"59865495-df12-47ad-bd04-dc5d4830934f\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " Jun 25 18:35:34.817510 kubelet[2521]: I0625 18:35:34.816569 2521 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-cni-path\") pod \"59865495-df12-47ad-bd04-dc5d4830934f\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " Jun 25 18:35:34.817510 kubelet[2521]: I0625 18:35:34.816611 2521 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/59865495-df12-47ad-bd04-dc5d4830934f-clustermesh-secrets\") pod \"59865495-df12-47ad-bd04-dc5d4830934f\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " Jun 25 
18:35:34.817510 kubelet[2521]: I0625 18:35:34.816627 2521 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-etc-cni-netd\") pod \"59865495-df12-47ad-bd04-dc5d4830934f\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " Jun 25 18:35:34.817510 kubelet[2521]: I0625 18:35:34.816645 2521 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-hostproc\") pod \"59865495-df12-47ad-bd04-dc5d4830934f\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " Jun 25 18:35:34.817674 kubelet[2521]: I0625 18:35:34.816661 2521 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-host-proc-sys-kernel\") pod \"59865495-df12-47ad-bd04-dc5d4830934f\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " Jun 25 18:35:34.817674 kubelet[2521]: I0625 18:35:34.816690 2521 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59865495-df12-47ad-bd04-dc5d4830934f-cilium-config-path\") pod \"59865495-df12-47ad-bd04-dc5d4830934f\" (UID: \"59865495-df12-47ad-bd04-dc5d4830934f\") " Jun 25 18:35:34.817674 kubelet[2521]: I0625 18:35:34.816721 2521 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hgwlg\" (UniqueName: \"kubernetes.io/projected/fbcff9f6-408a-4d43-8e97-2002aa791158-kube-api-access-hgwlg\") on node \"localhost\" DevicePath \"\"" Jun 25 18:35:34.817674 kubelet[2521]: I0625 18:35:34.816731 2521 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fbcff9f6-408a-4d43-8e97-2002aa791158-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jun 25 18:35:34.818188 kubelet[2521]: I0625 18:35:34.817966 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "59865495-df12-47ad-bd04-dc5d4830934f" (UID: "59865495-df12-47ad-bd04-dc5d4830934f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:35:34.818188 kubelet[2521]: I0625 18:35:34.818018 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "59865495-df12-47ad-bd04-dc5d4830934f" (UID: "59865495-df12-47ad-bd04-dc5d4830934f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:35:34.818188 kubelet[2521]: I0625 18:35:34.818034 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "59865495-df12-47ad-bd04-dc5d4830934f" (UID: "59865495-df12-47ad-bd04-dc5d4830934f"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:35:34.818188 kubelet[2521]: I0625 18:35:34.818048 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "59865495-df12-47ad-bd04-dc5d4830934f" (UID: "59865495-df12-47ad-bd04-dc5d4830934f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:35:34.818188 kubelet[2521]: I0625 18:35:34.818061 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "59865495-df12-47ad-bd04-dc5d4830934f" (UID: "59865495-df12-47ad-bd04-dc5d4830934f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:35:34.818421 kubelet[2521]: I0625 18:35:34.818293 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "59865495-df12-47ad-bd04-dc5d4830934f" (UID: "59865495-df12-47ad-bd04-dc5d4830934f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:35:34.818421 kubelet[2521]: I0625 18:35:34.818333 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-hostproc" (OuterVolumeSpecName: "hostproc") pod "59865495-df12-47ad-bd04-dc5d4830934f" (UID: "59865495-df12-47ad-bd04-dc5d4830934f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:35:34.818421 kubelet[2521]: I0625 18:35:34.818349 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "59865495-df12-47ad-bd04-dc5d4830934f" (UID: "59865495-df12-47ad-bd04-dc5d4830934f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:35:34.818421 kubelet[2521]: I0625 18:35:34.818365 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "59865495-df12-47ad-bd04-dc5d4830934f" (UID: "59865495-df12-47ad-bd04-dc5d4830934f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:35:34.818648 kubelet[2521]: I0625 18:35:34.818508 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59865495-df12-47ad-bd04-dc5d4830934f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "59865495-df12-47ad-bd04-dc5d4830934f" (UID: "59865495-df12-47ad-bd04-dc5d4830934f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:35:34.818648 kubelet[2521]: I0625 18:35:34.818551 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-cni-path" (OuterVolumeSpecName: "cni-path") pod "59865495-df12-47ad-bd04-dc5d4830934f" (UID: "59865495-df12-47ad-bd04-dc5d4830934f"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:35:34.820711 kubelet[2521]: I0625 18:35:34.820652 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59865495-df12-47ad-bd04-dc5d4830934f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "59865495-df12-47ad-bd04-dc5d4830934f" (UID: "59865495-df12-47ad-bd04-dc5d4830934f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 18:35:34.820790 kubelet[2521]: I0625 18:35:34.820716 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59865495-df12-47ad-bd04-dc5d4830934f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "59865495-df12-47ad-bd04-dc5d4830934f" (UID: "59865495-df12-47ad-bd04-dc5d4830934f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:35:34.820790 kubelet[2521]: I0625 18:35:34.820746 2521 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59865495-df12-47ad-bd04-dc5d4830934f-kube-api-access-r5gjz" (OuterVolumeSpecName: "kube-api-access-r5gjz") pod "59865495-df12-47ad-bd04-dc5d4830934f" (UID: "59865495-df12-47ad-bd04-dc5d4830934f"). InnerVolumeSpecName "kube-api-access-r5gjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:35:34.917193 kubelet[2521]: I0625 18:35:34.917154 2521 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-hostproc\") on node \"localhost\" DevicePath \"\"" Jun 25 18:35:34.917193 kubelet[2521]: I0625 18:35:34.917185 2521 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jun 25 18:35:34.917193 kubelet[2521]: I0625 18:35:34.917197 2521 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59865495-df12-47ad-bd04-dc5d4830934f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jun 25 18:35:34.917193 kubelet[2521]: I0625 18:35:34.917206 2521 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jun 25 18:35:34.917388 kubelet[2521]: I0625 18:35:34.917214 2521 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-cilium-run\") on node \"localhost\" DevicePath \"\"" Jun 25 18:35:34.917388 kubelet[2521]: I0625 18:35:34.917221 2521 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jun 25 18:35:34.917388 kubelet[2521]: I0625 18:35:34.917229 2521 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-lib-modules\") on node \"localhost\" DevicePath \"\"" Jun 25 18:35:34.917388 kubelet[2521]: I0625 18:35:34.917236 2521 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/59865495-df12-47ad-bd04-dc5d4830934f-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jun 25 
18:35:34.917388 kubelet[2521]: I0625 18:35:34.917243 2521 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jun 25 18:35:34.917388 kubelet[2521]: I0625 18:35:34.917250 2521 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jun 25 18:35:34.917388 kubelet[2521]: I0625 18:35:34.917257 2521 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-r5gjz\" (UniqueName: \"kubernetes.io/projected/59865495-df12-47ad-bd04-dc5d4830934f-kube-api-access-r5gjz\") on node \"localhost\" DevicePath \"\"" Jun 25 18:35:34.917388 kubelet[2521]: I0625 18:35:34.917265 2521 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-cni-path\") on node \"localhost\" DevicePath \"\"" Jun 25 18:35:34.917547 kubelet[2521]: I0625 18:35:34.917272 2521 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/59865495-df12-47ad-bd04-dc5d4830934f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jun 25 18:35:34.917547 kubelet[2521]: I0625 18:35:34.917279 2521 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59865495-df12-47ad-bd04-dc5d4830934f-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jun 25 18:35:35.497735 systemd[1]: Removed slice kubepods-besteffort-podfbcff9f6_408a_4d43_8e97_2002aa791158.slice - libcontainer container kubepods-besteffort-podfbcff9f6_408a_4d43_8e97_2002aa791158.slice. Jun 25 18:35:35.499010 systemd[1]: Removed slice kubepods-burstable-pod59865495_df12_47ad_bd04_dc5d4830934f.slice - libcontainer container kubepods-burstable-pod59865495_df12_47ad_bd04_dc5d4830934f.slice. Jun 25 18:35:35.499090 systemd[1]: kubepods-burstable-pod59865495_df12_47ad_bd04_dc5d4830934f.slice: Consumed 6.550s CPU time. Jun 25 18:35:35.572539 systemd[1]: var-lib-kubelet-pods-fbcff9f6\x2d408a\x2d4d43\x2d8e97\x2d2002aa791158-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhgwlg.mount: Deactivated successfully. Jun 25 18:35:35.572672 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7257f01b71ccb6a307d97a30fc70af367ea89b5821bf723efc240fb597d5a74b-rootfs.mount: Deactivated successfully. Jun 25 18:35:35.572726 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7257f01b71ccb6a307d97a30fc70af367ea89b5821bf723efc240fb597d5a74b-shm.mount: Deactivated successfully. Jun 25 18:35:35.572777 systemd[1]: var-lib-kubelet-pods-59865495\x2ddf12\x2d47ad\x2dbd04\x2ddc5d4830934f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr5gjz.mount: Deactivated successfully. Jun 25 18:35:35.572849 systemd[1]: var-lib-kubelet-pods-59865495\x2ddf12\x2d47ad\x2dbd04\x2ddc5d4830934f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 25 18:35:35.572903 systemd[1]: var-lib-kubelet-pods-59865495\x2ddf12\x2d47ad\x2dbd04\x2ddc5d4830934f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jun 25 18:35:35.655100 kubelet[2521]: I0625 18:35:35.654992 2521 scope.go:117] "RemoveContainer" containerID="6afe7b1273dd5e3637e28687c5beac5e25e4f52ace60cc78caa8265e20af291e" Jun 25 18:35:35.658278 containerd[1425]: time="2024-06-25T18:35:35.658213343Z" level=info msg="RemoveContainer for \"6afe7b1273dd5e3637e28687c5beac5e25e4f52ace60cc78caa8265e20af291e\"" Jun 25 18:35:35.662529 containerd[1425]: time="2024-06-25T18:35:35.662488223Z" level=info msg="RemoveContainer for \"6afe7b1273dd5e3637e28687c5beac5e25e4f52ace60cc78caa8265e20af291e\" returns successfully" Jun 25 18:35:35.663295 kubelet[2521]: I0625 18:35:35.663221 2521 scope.go:117] "RemoveContainer" containerID="9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9" Jun 25 18:35:35.665553 containerd[1425]: time="2024-06-25T18:35:35.665515850Z" level=info msg="RemoveContainer for \"9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9\"" Jun 25 18:35:35.668870 containerd[1425]: time="2024-06-25T18:35:35.668834401Z" level=info msg="RemoveContainer for \"9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9\" returns successfully" Jun 25 18:35:35.669239 kubelet[2521]: I0625 18:35:35.669084 2521 scope.go:117] "RemoveContainer" containerID="2c908150d82fc09490f5aa90a06555d4920162d2e7e9cb0bcb323d20905beea0" Jun 25 18:35:35.670035 containerd[1425]: time="2024-06-25T18:35:35.669999652Z" level=info msg="RemoveContainer for \"2c908150d82fc09490f5aa90a06555d4920162d2e7e9cb0bcb323d20905beea0\"" Jun 25 18:35:35.672551 containerd[1425]: time="2024-06-25T18:35:35.672065311Z" level=info msg="RemoveContainer for \"2c908150d82fc09490f5aa90a06555d4920162d2e7e9cb0bcb323d20905beea0\" returns successfully" Jun 25 18:35:35.672661 kubelet[2521]: I0625 18:35:35.672249 2521 scope.go:117] "RemoveContainer" containerID="b793417202fa4406adfdb9e40e8a11f0189338c95253615fdb4b86b8fb9cf643" Jun 25 18:35:35.673416 containerd[1425]: time="2024-06-25T18:35:35.673366003Z" level=info msg="RemoveContainer for \"b793417202fa4406adfdb9e40e8a11f0189338c95253615fdb4b86b8fb9cf643\"" Jun 25 18:35:35.676319 containerd[1425]: time="2024-06-25T18:35:35.676277269Z" level=info msg="RemoveContainer for \"b793417202fa4406adfdb9e40e8a11f0189338c95253615fdb4b86b8fb9cf643\" returns successfully" Jun 25 18:35:35.677419 kubelet[2521]: I0625 18:35:35.677329 2521 scope.go:117] "RemoveContainer" containerID="22a4d65d149207418ccacfea9bfe93534c2a586d257e8a97cf59e12f916ba598" Jun 25 18:35:35.678233 containerd[1425]: time="2024-06-25T18:35:35.678206567Z" level=info msg="RemoveContainer for \"22a4d65d149207418ccacfea9bfe93534c2a586d257e8a97cf59e12f916ba598\"" Jun 25 18:35:35.680414 containerd[1425]: time="2024-06-25T18:35:35.680373747Z" level=info msg="RemoveContainer for \"22a4d65d149207418ccacfea9bfe93534c2a586d257e8a97cf59e12f916ba598\" returns successfully" Jun 25 18:35:35.680561 kubelet[2521]: I0625 18:35:35.680540 2521 scope.go:117] "RemoveContainer" containerID="cfe0e7aa08de9a95c3968b41e7e3bd288a5fef74957b8cb8d456c86d9e4b24a4" Jun 25 18:35:35.681468 containerd[1425]: time="2024-06-25T18:35:35.681443957Z" level=info msg="RemoveContainer for \"cfe0e7aa08de9a95c3968b41e7e3bd288a5fef74957b8cb8d456c86d9e4b24a4\"" Jun 25 18:35:35.683720 containerd[1425]: time="2024-06-25T18:35:35.683686098Z" level=info msg="RemoveContainer for \"cfe0e7aa08de9a95c3968b41e7e3bd288a5fef74957b8cb8d456c86d9e4b24a4\" returns successfully" Jun 25 18:35:35.683845 kubelet[2521]: I0625 18:35:35.683826 2521 scope.go:117] "RemoveContainer" 
containerID="9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9" Jun 25 18:35:35.688150 containerd[1425]: time="2024-06-25T18:35:35.683994421Z" level=error msg="ContainerStatus for \"9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9\": not found" Jun 25 18:35:35.688322 kubelet[2521]: E0625 18:35:35.688293 2521 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9\": not found" containerID="9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9" Jun 25 18:35:35.688634 kubelet[2521]: I0625 18:35:35.688423 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9"} err="failed to get container status \"9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9\": rpc error: code = NotFound desc = an error occurred when try to find container \"9dc67bd3b651d24500f42528cfdd7c7a471032cd7d9bd9542ea1b79962263de9\": not found" Jun 25 18:35:35.688634 kubelet[2521]: I0625 18:35:35.688520 2521 scope.go:117] "RemoveContainer" containerID="2c908150d82fc09490f5aa90a06555d4920162d2e7e9cb0bcb323d20905beea0" Jun 25 18:35:35.688982 containerd[1425]: time="2024-06-25T18:35:35.688745784Z" level=error msg="ContainerStatus for \"2c908150d82fc09490f5aa90a06555d4920162d2e7e9cb0bcb323d20905beea0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c908150d82fc09490f5aa90a06555d4920162d2e7e9cb0bcb323d20905beea0\": not found" Jun 25 18:35:35.689037 kubelet[2521]: E0625 18:35:35.688869 2521 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2c908150d82fc09490f5aa90a06555d4920162d2e7e9cb0bcb323d20905beea0\": not found" containerID="2c908150d82fc09490f5aa90a06555d4920162d2e7e9cb0bcb323d20905beea0" Jun 25 18:35:35.689037 kubelet[2521]: I0625 18:35:35.688894 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2c908150d82fc09490f5aa90a06555d4920162d2e7e9cb0bcb323d20905beea0"} err="failed to get container status \"2c908150d82fc09490f5aa90a06555d4920162d2e7e9cb0bcb323d20905beea0\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c908150d82fc09490f5aa90a06555d4920162d2e7e9cb0bcb323d20905beea0\": not found" Jun 25 18:35:35.689037 kubelet[2521]: I0625 18:35:35.688908 2521 scope.go:117] "RemoveContainer" containerID="b793417202fa4406adfdb9e40e8a11f0189338c95253615fdb4b86b8fb9cf643" Jun 25 18:35:35.689114 containerd[1425]: time="2024-06-25T18:35:35.689051147Z" level=error msg="ContainerStatus for \"b793417202fa4406adfdb9e40e8a11f0189338c95253615fdb4b86b8fb9cf643\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b793417202fa4406adfdb9e40e8a11f0189338c95253615fdb4b86b8fb9cf643\": not found" Jun 25 18:35:35.689273 kubelet[2521]: E0625 18:35:35.689229 2521 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b793417202fa4406adfdb9e40e8a11f0189338c95253615fdb4b86b8fb9cf643\": 
not found" containerID="b793417202fa4406adfdb9e40e8a11f0189338c95253615fdb4b86b8fb9cf643" Jun 25 18:35:35.689438 kubelet[2521]: I0625 18:35:35.689315 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b793417202fa4406adfdb9e40e8a11f0189338c95253615fdb4b86b8fb9cf643"} err="failed to get container status \"b793417202fa4406adfdb9e40e8a11f0189338c95253615fdb4b86b8fb9cf643\": rpc error: code = NotFound desc = an error occurred when try to find container \"b793417202fa4406adfdb9e40e8a11f0189338c95253615fdb4b86b8fb9cf643\": not found" Jun 25 18:35:35.689438 kubelet[2521]: I0625 18:35:35.689334 2521 scope.go:117] "RemoveContainer" containerID="22a4d65d149207418ccacfea9bfe93534c2a586d257e8a97cf59e12f916ba598" Jun 25 18:35:35.689492 containerd[1425]: time="2024-06-25T18:35:35.689452031Z" level=error msg="ContainerStatus for \"22a4d65d149207418ccacfea9bfe93534c2a586d257e8a97cf59e12f916ba598\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"22a4d65d149207418ccacfea9bfe93534c2a586d257e8a97cf59e12f916ba598\": not found" Jun 25 18:35:35.689727 kubelet[2521]: E0625 18:35:35.689620 2521 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"22a4d65d149207418ccacfea9bfe93534c2a586d257e8a97cf59e12f916ba598\": not found" containerID="22a4d65d149207418ccacfea9bfe93534c2a586d257e8a97cf59e12f916ba598" Jun 25 18:35:35.689727 kubelet[2521]: I0625 18:35:35.689650 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"22a4d65d149207418ccacfea9bfe93534c2a586d257e8a97cf59e12f916ba598"} err="failed to get container status \"22a4d65d149207418ccacfea9bfe93534c2a586d257e8a97cf59e12f916ba598\": rpc error: code = NotFound desc = an error occurred when try to find container \"22a4d65d149207418ccacfea9bfe93534c2a586d257e8a97cf59e12f916ba598\": not found" Jun 25 18:35:35.689727 kubelet[2521]: I0625 18:35:35.689666 2521 scope.go:117] "RemoveContainer" containerID="cfe0e7aa08de9a95c3968b41e7e3bd288a5fef74957b8cb8d456c86d9e4b24a4" Jun 25 18:35:35.690015 kubelet[2521]: E0625 18:35:35.689876 2521 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cfe0e7aa08de9a95c3968b41e7e3bd288a5fef74957b8cb8d456c86d9e4b24a4\": not found" containerID="cfe0e7aa08de9a95c3968b41e7e3bd288a5fef74957b8cb8d456c86d9e4b24a4" Jun 25 18:35:35.690015 kubelet[2521]: I0625 18:35:35.689898 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cfe0e7aa08de9a95c3968b41e7e3bd288a5fef74957b8cb8d456c86d9e4b24a4"} err="failed to get container status \"cfe0e7aa08de9a95c3968b41e7e3bd288a5fef74957b8cb8d456c86d9e4b24a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"cfe0e7aa08de9a95c3968b41e7e3bd288a5fef74957b8cb8d456c86d9e4b24a4\": not found" Jun 25 18:35:35.690066 containerd[1425]: time="2024-06-25T18:35:35.689771034Z" level=error msg="ContainerStatus for \"cfe0e7aa08de9a95c3968b41e7e3bd288a5fef74957b8cb8d456c86d9e4b24a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cfe0e7aa08de9a95c3968b41e7e3bd288a5fef74957b8cb8d456c86d9e4b24a4\": not found" Jun 25 18:35:36.523924 sshd[4137]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:36.536197 systemd[1]: 
sshd@21-10.0.0.91:22-10.0.0.1:47102.service: Deactivated successfully. Jun 25 18:35:36.537839 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 18:35:36.538011 systemd[1]: session-22.scope: Consumed 1.789s CPU time. Jun 25 18:35:36.539250 systemd-logind[1414]: Session 22 logged out. Waiting for processes to exit. Jun 25 18:35:36.547903 systemd[1]: Started sshd@22-10.0.0.91:22-10.0.0.1:47118.service - OpenSSH per-connection server daemon (10.0.0.1:47118). Jun 25 18:35:36.548838 systemd-logind[1414]: Removed session 22. Jun 25 18:35:36.586877 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 47118 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:35:36.588088 sshd[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:36.591947 systemd-logind[1414]: New session 23 of user core. Jun 25 18:35:36.598728 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 18:35:37.465963 sshd[4296]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:37.474066 systemd[1]: sshd@22-10.0.0.91:22-10.0.0.1:47118.service: Deactivated successfully. Jun 25 18:35:37.476292 kubelet[2521]: I0625 18:35:37.476228 2521 topology_manager.go:215] "Topology Admit Handler" podUID="41384f15-2bf0-48f4-a7e5-e89f36c86878" podNamespace="kube-system" podName="cilium-8v5m5" Jun 25 18:35:37.476605 kubelet[2521]: E0625 18:35:37.476369 2521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbcff9f6-408a-4d43-8e97-2002aa791158" containerName="cilium-operator" Jun 25 18:35:37.476605 kubelet[2521]: E0625 18:35:37.476380 2521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="59865495-df12-47ad-bd04-dc5d4830934f" containerName="mount-bpf-fs" Jun 25 18:35:37.476605 kubelet[2521]: E0625 18:35:37.476388 2521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="59865495-df12-47ad-bd04-dc5d4830934f" containerName="mount-cgroup" Jun 25 18:35:37.476605 kubelet[2521]: E0625 18:35:37.476393 2521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="59865495-df12-47ad-bd04-dc5d4830934f" containerName="apply-sysctl-overwrites" Jun 25 18:35:37.476605 kubelet[2521]: E0625 18:35:37.476399 2521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="59865495-df12-47ad-bd04-dc5d4830934f" containerName="clean-cilium-state" Jun 25 18:35:37.476605 kubelet[2521]: E0625 18:35:37.476405 2521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="59865495-df12-47ad-bd04-dc5d4830934f" containerName="cilium-agent" Jun 25 18:35:37.476605 kubelet[2521]: I0625 18:35:37.476426 2521 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbcff9f6-408a-4d43-8e97-2002aa791158" containerName="cilium-operator" Jun 25 18:35:37.476605 kubelet[2521]: I0625 18:35:37.476432 2521 memory_manager.go:354] "RemoveStaleState removing state" podUID="59865495-df12-47ad-bd04-dc5d4830934f" containerName="cilium-agent" Jun 25 18:35:37.478443 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 18:35:37.480524 systemd-logind[1414]: Session 23 logged out. Waiting for processes to exit. Jun 25 18:35:37.490298 systemd[1]: Started sshd@23-10.0.0.91:22-10.0.0.1:47124.service - OpenSSH per-connection server daemon (10.0.0.1:47124). Jun 25 18:35:37.500230 systemd-logind[1414]: Removed session 23. 
Jun 25 18:35:37.510864 systemd[1]: Created slice kubepods-burstable-pod41384f15_2bf0_48f4_a7e5_e89f36c86878.slice - libcontainer container kubepods-burstable-pod41384f15_2bf0_48f4_a7e5_e89f36c86878.slice. Jun 25 18:35:37.513204 kubelet[2521]: I0625 18:35:37.513163 2521 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59865495-df12-47ad-bd04-dc5d4830934f" path="/var/lib/kubelet/pods/59865495-df12-47ad-bd04-dc5d4830934f/volumes" Jun 25 18:35:37.514777 kubelet[2521]: I0625 18:35:37.513755 2521 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbcff9f6-408a-4d43-8e97-2002aa791158" path="/var/lib/kubelet/pods/fbcff9f6-408a-4d43-8e97-2002aa791158/volumes" Jun 25 18:35:37.531532 kubelet[2521]: I0625 18:35:37.531487 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41384f15-2bf0-48f4-a7e5-e89f36c86878-cilium-config-path\") pod \"cilium-8v5m5\" (UID: \"41384f15-2bf0-48f4-a7e5-e89f36c86878\") " pod="kube-system/cilium-8v5m5" Jun 25 18:35:37.531752 kubelet[2521]: I0625 18:35:37.531534 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/41384f15-2bf0-48f4-a7e5-e89f36c86878-cilium-ipsec-secrets\") pod \"cilium-8v5m5\" (UID: \"41384f15-2bf0-48f4-a7e5-e89f36c86878\") " pod="kube-system/cilium-8v5m5" Jun 25 18:35:37.531752 kubelet[2521]: I0625 18:35:37.531558 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41384f15-2bf0-48f4-a7e5-e89f36c86878-hubble-tls\") pod \"cilium-8v5m5\" (UID: \"41384f15-2bf0-48f4-a7e5-e89f36c86878\") " pod="kube-system/cilium-8v5m5" Jun 25 18:35:37.531752 kubelet[2521]: I0625 18:35:37.531576 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86lg5\" (UniqueName: \"kubernetes.io/projected/41384f15-2bf0-48f4-a7e5-e89f36c86878-kube-api-access-86lg5\") pod \"cilium-8v5m5\" (UID: \"41384f15-2bf0-48f4-a7e5-e89f36c86878\") " pod="kube-system/cilium-8v5m5" Jun 25 18:35:37.531752 kubelet[2521]: I0625 18:35:37.531617 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41384f15-2bf0-48f4-a7e5-e89f36c86878-clustermesh-secrets\") pod \"cilium-8v5m5\" (UID: \"41384f15-2bf0-48f4-a7e5-e89f36c86878\") " pod="kube-system/cilium-8v5m5" Jun 25 18:35:37.531752 kubelet[2521]: I0625 18:35:37.531635 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41384f15-2bf0-48f4-a7e5-e89f36c86878-hostproc\") pod \"cilium-8v5m5\" (UID: \"41384f15-2bf0-48f4-a7e5-e89f36c86878\") " pod="kube-system/cilium-8v5m5" Jun 25 18:35:37.531877 kubelet[2521]: I0625 18:35:37.531650 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41384f15-2bf0-48f4-a7e5-e89f36c86878-host-proc-sys-kernel\") pod \"cilium-8v5m5\" (UID: \"41384f15-2bf0-48f4-a7e5-e89f36c86878\") " pod="kube-system/cilium-8v5m5" Jun 25 18:35:37.531877 kubelet[2521]: I0625 18:35:37.531664 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/41384f15-2bf0-48f4-a7e5-e89f36c86878-lib-modules\") pod \"cilium-8v5m5\" (UID: \"41384f15-2bf0-48f4-a7e5-e89f36c86878\") " pod="kube-system/cilium-8v5m5" Jun 25 18:35:37.531877 kubelet[2521]: I0625 18:35:37.531679 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41384f15-2bf0-48f4-a7e5-e89f36c86878-xtables-lock\") pod \"cilium-8v5m5\" (UID: \"41384f15-2bf0-48f4-a7e5-e89f36c86878\") " pod="kube-system/cilium-8v5m5" Jun 25 18:35:37.531877 kubelet[2521]: I0625 18:35:37.531693 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41384f15-2bf0-48f4-a7e5-e89f36c86878-host-proc-sys-net\") pod \"cilium-8v5m5\" (UID: \"41384f15-2bf0-48f4-a7e5-e89f36c86878\") " pod="kube-system/cilium-8v5m5" Jun 25 18:35:37.531877 kubelet[2521]: I0625 18:35:37.531709 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41384f15-2bf0-48f4-a7e5-e89f36c86878-bpf-maps\") pod \"cilium-8v5m5\" (UID: \"41384f15-2bf0-48f4-a7e5-e89f36c86878\") " pod="kube-system/cilium-8v5m5" Jun 25 18:35:37.531877 kubelet[2521]: I0625 18:35:37.531725 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41384f15-2bf0-48f4-a7e5-e89f36c86878-cilium-run\") pod \"cilium-8v5m5\" (UID: \"41384f15-2bf0-48f4-a7e5-e89f36c86878\") " pod="kube-system/cilium-8v5m5" Jun 25 18:35:37.531994 kubelet[2521]: I0625 18:35:37.531741 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41384f15-2bf0-48f4-a7e5-e89f36c86878-cilium-cgroup\") pod \"cilium-8v5m5\" (UID: \"41384f15-2bf0-48f4-a7e5-e89f36c86878\") " pod="kube-system/cilium-8v5m5" Jun 25 18:35:37.531994 kubelet[2521]: I0625 18:35:37.531756 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41384f15-2bf0-48f4-a7e5-e89f36c86878-cni-path\") pod \"cilium-8v5m5\" (UID: \"41384f15-2bf0-48f4-a7e5-e89f36c86878\") " pod="kube-system/cilium-8v5m5" Jun 25 18:35:37.531994 kubelet[2521]: I0625 18:35:37.531771 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41384f15-2bf0-48f4-a7e5-e89f36c86878-etc-cni-netd\") pod \"cilium-8v5m5\" (UID: \"41384f15-2bf0-48f4-a7e5-e89f36c86878\") " pod="kube-system/cilium-8v5m5" Jun 25 18:35:37.545982 sshd[4309]: Accepted publickey for core from 10.0.0.1 port 47124 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:35:37.547629 sshd[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:37.551607 systemd-logind[1414]: New session 24 of user core. Jun 25 18:35:37.561767 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 25 18:35:37.611085 sshd[4309]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:37.620142 systemd[1]: sshd@23-10.0.0.91:22-10.0.0.1:47124.service: Deactivated successfully. Jun 25 18:35:37.621810 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 18:35:37.623034 systemd-logind[1414]: Session 24 logged out. 
Waiting for processes to exit. Jun 25 18:35:37.624276 systemd[1]: Started sshd@24-10.0.0.91:22-10.0.0.1:47138.service - OpenSSH per-connection server daemon (10.0.0.1:47138). Jun 25 18:35:37.625025 systemd-logind[1414]: Removed session 24. Jun 25 18:35:37.665359 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 47138 ssh2: RSA SHA256:PTHQXr0iRYYg3MbKKJZ6aC6iEkqmHU1AdffEoJcWF3A Jun 25 18:35:37.666850 sshd[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:37.670330 systemd-logind[1414]: New session 25 of user core. Jun 25 18:35:37.681740 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 25 18:35:37.816125 kubelet[2521]: E0625 18:35:37.815762 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:35:37.816305 containerd[1425]: time="2024-06-25T18:35:37.816252338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8v5m5,Uid:41384f15-2bf0-48f4-a7e5-e89f36c86878,Namespace:kube-system,Attempt:0,}" Jun 25 18:35:37.832520 containerd[1425]: time="2024-06-25T18:35:37.832213211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:35:37.832520 containerd[1425]: time="2024-06-25T18:35:37.832271732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:35:37.832520 containerd[1425]: time="2024-06-25T18:35:37.832291532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:35:37.832520 containerd[1425]: time="2024-06-25T18:35:37.832306092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:35:37.851774 systemd[1]: Started cri-containerd-fd453c04627055b62f267d956d751c90973b093b0dfba76642d5876697c92bbb.scope - libcontainer container fd453c04627055b62f267d956d751c90973b093b0dfba76642d5876697c92bbb. 
Jun 25 18:35:37.872301 containerd[1425]: time="2024-06-25T18:35:37.872189074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8v5m5,Uid:41384f15-2bf0-48f4-a7e5-e89f36c86878,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd453c04627055b62f267d956d751c90973b093b0dfba76642d5876697c92bbb\"" Jun 25 18:35:37.873681 kubelet[2521]: E0625 18:35:37.873137 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:35:37.878260 containerd[1425]: time="2024-06-25T18:35:37.877332443Z" level=info msg="CreateContainer within sandbox \"fd453c04627055b62f267d956d751c90973b093b0dfba76642d5876697c92bbb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 25 18:35:37.887949 containerd[1425]: time="2024-06-25T18:35:37.887846504Z" level=info msg="CreateContainer within sandbox \"fd453c04627055b62f267d956d751c90973b093b0dfba76642d5876697c92bbb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9f98059d131dcde65d7d103a12550a25d8471562013e3f4f0b81a29a40f9fe80\"" Jun 25 18:35:37.888360 containerd[1425]: time="2024-06-25T18:35:37.888335149Z" level=info msg="StartContainer for \"9f98059d131dcde65d7d103a12550a25d8471562013e3f4f0b81a29a40f9fe80\"" Jun 25 18:35:37.912727 systemd[1]: Started cri-containerd-9f98059d131dcde65d7d103a12550a25d8471562013e3f4f0b81a29a40f9fe80.scope - libcontainer container 9f98059d131dcde65d7d103a12550a25d8471562013e3f4f0b81a29a40f9fe80. Jun 25 18:35:37.939911 containerd[1425]: time="2024-06-25T18:35:37.939872562Z" level=info msg="StartContainer for \"9f98059d131dcde65d7d103a12550a25d8471562013e3f4f0b81a29a40f9fe80\" returns successfully" Jun 25 18:35:37.953178 systemd[1]: cri-containerd-9f98059d131dcde65d7d103a12550a25d8471562013e3f4f0b81a29a40f9fe80.scope: Deactivated successfully. 
Jun 25 18:35:37.991020 containerd[1425]: time="2024-06-25T18:35:37.990966491Z" level=info msg="shim disconnected" id=9f98059d131dcde65d7d103a12550a25d8471562013e3f4f0b81a29a40f9fe80 namespace=k8s.io Jun 25 18:35:37.991020 containerd[1425]: time="2024-06-25T18:35:37.991015212Z" level=warning msg="cleaning up after shim disconnected" id=9f98059d131dcde65d7d103a12550a25d8471562013e3f4f0b81a29a40f9fe80 namespace=k8s.io Jun 25 18:35:37.991020 containerd[1425]: time="2024-06-25T18:35:37.991023652Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:35:38.535348 kubelet[2521]: E0625 18:35:38.535296 2521 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 25 18:35:38.669769 kubelet[2521]: E0625 18:35:38.669567 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:35:38.672614 containerd[1425]: time="2024-06-25T18:35:38.672292132Z" level=info msg="CreateContainer within sandbox \"fd453c04627055b62f267d956d751c90973b093b0dfba76642d5876697c92bbb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 25 18:35:38.683547 containerd[1425]: time="2024-06-25T18:35:38.683508121Z" level=info msg="CreateContainer within sandbox \"fd453c04627055b62f267d956d751c90973b093b0dfba76642d5876697c92bbb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2cc403e774617cec015795b71e6a9ac6e6efbc013e925fd7d5343e8342ab84c1\"" Jun 25 18:35:38.684458 containerd[1425]: time="2024-06-25T18:35:38.684404370Z" level=info msg="StartContainer for \"2cc403e774617cec015795b71e6a9ac6e6efbc013e925fd7d5343e8342ab84c1\"" Jun 25 18:35:38.716759 systemd[1]: Started cri-containerd-2cc403e774617cec015795b71e6a9ac6e6efbc013e925fd7d5343e8342ab84c1.scope - libcontainer container 2cc403e774617cec015795b71e6a9ac6e6efbc013e925fd7d5343e8342ab84c1. Jun 25 18:35:38.743448 containerd[1425]: time="2024-06-25T18:35:38.743396225Z" level=info msg="StartContainer for \"2cc403e774617cec015795b71e6a9ac6e6efbc013e925fd7d5343e8342ab84c1\" returns successfully" Jun 25 18:35:38.750057 systemd[1]: cri-containerd-2cc403e774617cec015795b71e6a9ac6e6efbc013e925fd7d5343e8342ab84c1.scope: Deactivated successfully. Jun 25 18:35:38.775240 containerd[1425]: time="2024-06-25T18:35:38.775153295Z" level=info msg="shim disconnected" id=2cc403e774617cec015795b71e6a9ac6e6efbc013e925fd7d5343e8342ab84c1 namespace=k8s.io Jun 25 18:35:38.775240 containerd[1425]: time="2024-06-25T18:35:38.775219815Z" level=warning msg="cleaning up after shim disconnected" id=2cc403e774617cec015795b71e6a9ac6e6efbc013e925fd7d5343e8342ab84c1 namespace=k8s.io Jun 25 18:35:38.775240 containerd[1425]: time="2024-06-25T18:35:38.775228495Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:35:39.637736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cc403e774617cec015795b71e6a9ac6e6efbc013e925fd7d5343e8342ab84c1-rootfs.mount: Deactivated successfully. 
Jun 25 18:35:39.673052 kubelet[2521]: E0625 18:35:39.672862 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:35:39.674501 containerd[1425]: time="2024-06-25T18:35:39.674445616Z" level=info msg="CreateContainer within sandbox \"fd453c04627055b62f267d956d751c90973b093b0dfba76642d5876697c92bbb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 25 18:35:39.690424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount818583286.mount: Deactivated successfully. Jun 25 18:35:39.691381 containerd[1425]: time="2024-06-25T18:35:39.691329024Z" level=info msg="CreateContainer within sandbox \"fd453c04627055b62f267d956d751c90973b093b0dfba76642d5876697c92bbb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7d624937dfe3375eb8cb478e7763de695b4b7f8be5e62fcffdd453fedc68864b\"" Jun 25 18:35:39.691864 containerd[1425]: time="2024-06-25T18:35:39.691825309Z" level=info msg="StartContainer for \"7d624937dfe3375eb8cb478e7763de695b4b7f8be5e62fcffdd453fedc68864b\"" Jun 25 18:35:39.717751 systemd[1]: Started cri-containerd-7d624937dfe3375eb8cb478e7763de695b4b7f8be5e62fcffdd453fedc68864b.scope - libcontainer container 7d624937dfe3375eb8cb478e7763de695b4b7f8be5e62fcffdd453fedc68864b. Jun 25 18:35:39.753383 containerd[1425]: time="2024-06-25T18:35:39.753330839Z" level=info msg="StartContainer for \"7d624937dfe3375eb8cb478e7763de695b4b7f8be5e62fcffdd453fedc68864b\" returns successfully" Jun 25 18:35:39.756716 systemd[1]: cri-containerd-7d624937dfe3375eb8cb478e7763de695b4b7f8be5e62fcffdd453fedc68864b.scope: Deactivated successfully. Jun 25 18:35:39.777776 containerd[1425]: time="2024-06-25T18:35:39.777721201Z" level=info msg="shim disconnected" id=7d624937dfe3375eb8cb478e7763de695b4b7f8be5e62fcffdd453fedc68864b namespace=k8s.io Jun 25 18:35:39.777776 containerd[1425]: time="2024-06-25T18:35:39.777772961Z" level=warning msg="cleaning up after shim disconnected" id=7d624937dfe3375eb8cb478e7763de695b4b7f8be5e62fcffdd453fedc68864b namespace=k8s.io Jun 25 18:35:39.777776 containerd[1425]: time="2024-06-25T18:35:39.777781121Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:35:40.637778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d624937dfe3375eb8cb478e7763de695b4b7f8be5e62fcffdd453fedc68864b-rootfs.mount: Deactivated successfully. Jun 25 18:35:40.675615 kubelet[2521]: E0625 18:35:40.675508 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:35:40.678669 containerd[1425]: time="2024-06-25T18:35:40.678629848Z" level=info msg="CreateContainer within sandbox \"fd453c04627055b62f267d956d751c90973b093b0dfba76642d5876697c92bbb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 25 18:35:40.687973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2235838639.mount: Deactivated successfully. 
Jun 25 18:35:40.690233 containerd[1425]: time="2024-06-25T18:35:40.690187924Z" level=info msg="CreateContainer within sandbox \"fd453c04627055b62f267d956d751c90973b093b0dfba76642d5876697c92bbb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"71f1e5793d315fa111ecbeba7435e298ff6d899fd7150b0c175ca29abfe7fa47\"" Jun 25 18:35:40.692447 containerd[1425]: time="2024-06-25T18:35:40.691390856Z" level=info msg="StartContainer for \"71f1e5793d315fa111ecbeba7435e298ff6d899fd7150b0c175ca29abfe7fa47\"" Jun 25 18:35:40.715727 systemd[1]: Started cri-containerd-71f1e5793d315fa111ecbeba7435e298ff6d899fd7150b0c175ca29abfe7fa47.scope - libcontainer container 71f1e5793d315fa111ecbeba7435e298ff6d899fd7150b0c175ca29abfe7fa47. Jun 25 18:35:40.732438 systemd[1]: cri-containerd-71f1e5793d315fa111ecbeba7435e298ff6d899fd7150b0c175ca29abfe7fa47.scope: Deactivated successfully. Jun 25 18:35:40.739191 containerd[1425]: time="2024-06-25T18:35:40.737962606Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41384f15_2bf0_48f4_a7e5_e89f36c86878.slice/cri-containerd-71f1e5793d315fa111ecbeba7435e298ff6d899fd7150b0c175ca29abfe7fa47.scope/memory.events\": no such file or directory" Jun 25 18:35:40.740830 containerd[1425]: time="2024-06-25T18:35:40.740786154Z" level=info msg="StartContainer for \"71f1e5793d315fa111ecbeba7435e298ff6d899fd7150b0c175ca29abfe7fa47\" returns successfully" Jun 25 18:35:40.765205 containerd[1425]: time="2024-06-25T18:35:40.765148960Z" level=info msg="shim disconnected" id=71f1e5793d315fa111ecbeba7435e298ff6d899fd7150b0c175ca29abfe7fa47 namespace=k8s.io Jun 25 18:35:40.765205 containerd[1425]: time="2024-06-25T18:35:40.765201601Z" level=warning msg="cleaning up after shim disconnected" id=71f1e5793d315fa111ecbeba7435e298ff6d899fd7150b0c175ca29abfe7fa47 namespace=k8s.io Jun 25 18:35:40.765205 containerd[1425]: time="2024-06-25T18:35:40.765210361Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:35:41.637900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71f1e5793d315fa111ecbeba7435e298ff6d899fd7150b0c175ca29abfe7fa47-rootfs.mount: Deactivated successfully. Jun 25 18:35:41.678904 kubelet[2521]: E0625 18:35:41.678881 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:35:41.684943 containerd[1425]: time="2024-06-25T18:35:41.684872342Z" level=info msg="CreateContainer within sandbox \"fd453c04627055b62f267d956d751c90973b093b0dfba76642d5876697c92bbb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 25 18:35:41.699941 containerd[1425]: time="2024-06-25T18:35:41.699881696Z" level=info msg="CreateContainer within sandbox \"fd453c04627055b62f267d956d751c90973b093b0dfba76642d5876697c92bbb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2d0fded18c121aa8256037ed85bc42b552e0a1f39b87e8e14ad63b703c4cf501\"" Jun 25 18:35:41.700476 containerd[1425]: time="2024-06-25T18:35:41.700439302Z" level=info msg="StartContainer for \"2d0fded18c121aa8256037ed85bc42b552e0a1f39b87e8e14ad63b703c4cf501\"" Jun 25 18:35:41.726723 systemd[1]: Started cri-containerd-2d0fded18c121aa8256037ed85bc42b552e0a1f39b87e8e14ad63b703c4cf501.scope - libcontainer container 2d0fded18c121aa8256037ed85bc42b552e0a1f39b87e8e14ad63b703c4cf501. 
Jun 25 18:35:41.754222 containerd[1425]: time="2024-06-25T18:35:41.754180092Z" level=info msg="StartContainer for \"2d0fded18c121aa8256037ed85bc42b552e0a1f39b87e8e14ad63b703c4cf501\" returns successfully" Jun 25 18:35:42.026623 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jun 25 18:35:42.683510 kubelet[2521]: E0625 18:35:42.683264 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:35:42.696870 kubelet[2521]: I0625 18:35:42.696800 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8v5m5" podStartSLOduration=5.696785574 podStartE2EDuration="5.696785574s" podCreationTimestamp="2024-06-25 18:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:35:42.696632412 +0000 UTC m=+79.316293302" watchObservedRunningTime="2024-06-25 18:35:42.696785574 +0000 UTC m=+79.316446384" Jun 25 18:35:43.492508 kubelet[2521]: E0625 18:35:43.492156 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:35:43.817664 kubelet[2521]: E0625 18:35:43.817536 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:35:44.491184 kubelet[2521]: E0625 18:35:44.491112 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:35:44.799393 systemd-networkd[1363]: lxc_health: Link UP Jun 25 18:35:44.812018 systemd-networkd[1363]: lxc_health: Gained carrier Jun 25 18:35:45.818656 kubelet[2521]: E0625 18:35:45.817505 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:35:46.107610 systemd[1]: run-containerd-runc-k8s.io-2d0fded18c121aa8256037ed85bc42b552e0a1f39b87e8e14ad63b703c4cf501-runc.okXBSQ.mount: Deactivated successfully. Jun 25 18:35:46.151680 systemd-networkd[1363]: lxc_health: Gained IPv6LL Jun 25 18:35:46.691176 kubelet[2521]: E0625 18:35:46.691125 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:35:47.692777 kubelet[2521]: E0625 18:35:47.692745 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:35:50.450979 sshd[4317]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:50.455508 systemd[1]: sshd@24-10.0.0.91:22-10.0.0.1:47138.service: Deactivated successfully. Jun 25 18:35:50.457255 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 18:35:50.458960 systemd-logind[1414]: Session 25 logged out. Waiting for processes to exit. Jun 25 18:35:50.460394 systemd-logind[1414]: Removed session 25. 
Jun 25 18:35:50.490705 kubelet[2521]: E0625 18:35:50.490682 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:35:51.491475 kubelet[2521]: E0625 18:35:51.491435 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"