Oct 8 19:45:58.913652 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 8 19:45:58.913673 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Tue Oct 8 18:22:02 -00 2024
Oct 8 19:45:58.913682 kernel: KASLR enabled
Oct 8 19:45:58.913688 kernel: efi: EFI v2.7 by EDK II
Oct 8 19:45:58.913694 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Oct 8 19:45:58.913699 kernel: random: crng init done
Oct 8 19:45:58.913706 kernel: ACPI: Early table checksum verification disabled
Oct 8 19:45:58.913712 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Oct 8 19:45:58.913718 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 8 19:45:58.913725 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:58.913731 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:58.913737 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:58.913743 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:58.913749 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:58.913757 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:58.913764 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:58.913770 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:58.913777 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:58.913783 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 8 19:45:58.913789 kernel: NUMA: Failed to initialise from firmware
Oct 8 19:45:58.913796 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 8 19:45:58.913802 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Oct 8 19:45:58.913808 kernel: Zone ranges:
Oct 8 19:45:58.913814 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 8 19:45:58.913821 kernel: DMA32 empty
Oct 8 19:45:58.913828 kernel: Normal empty
Oct 8 19:45:58.913834 kernel: Movable zone start for each node
Oct 8 19:45:58.913841 kernel: Early memory node ranges
Oct 8 19:45:58.913847 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Oct 8 19:45:58.913853 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Oct 8 19:45:58.913860 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Oct 8 19:45:58.913866 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Oct 8 19:45:58.913873 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Oct 8 19:45:58.913879 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Oct 8 19:45:58.913886 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Oct 8 19:45:58.913892 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 8 19:45:58.913898 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 8 19:45:58.913906 kernel: psci: probing for conduit method from ACPI.
Oct 8 19:45:58.913912 kernel: psci: PSCIv1.1 detected in firmware.
Oct 8 19:45:58.913919 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 8 19:45:58.913928 kernel: psci: Trusted OS migration not required
Oct 8 19:45:58.913935 kernel: psci: SMC Calling Convention v1.1
Oct 8 19:45:58.913942 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 8 19:45:58.913950 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Oct 8 19:45:58.913956 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Oct 8 19:45:58.913963 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 8 19:45:58.913970 kernel: Detected PIPT I-cache on CPU0
Oct 8 19:45:58.913977 kernel: CPU features: detected: GIC system register CPU interface
Oct 8 19:45:58.913983 kernel: CPU features: detected: Hardware dirty bit management
Oct 8 19:45:58.913990 kernel: CPU features: detected: Spectre-v4
Oct 8 19:45:58.913997 kernel: CPU features: detected: Spectre-BHB
Oct 8 19:45:58.914004 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 8 19:45:58.914010 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 8 19:45:58.914018 kernel: CPU features: detected: ARM erratum 1418040
Oct 8 19:45:58.914025 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 8 19:45:58.914032 kernel: alternatives: applying boot alternatives
Oct 8 19:45:58.914040 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c838587f25bc3913a152d0e9ed071e943b77b8dea81b67c254bbd10c29051fd2
Oct 8 19:45:58.914047 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 19:45:58.914053 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 8 19:45:58.914060 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 19:45:58.914067 kernel: Fallback order for Node 0: 0
Oct 8 19:45:58.914074 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Oct 8 19:45:58.914080 kernel: Policy zone: DMA
Oct 8 19:45:58.914087 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 19:45:58.914095 kernel: software IO TLB: area num 4.
Oct 8 19:45:58.914101 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Oct 8 19:45:58.914108 kernel: Memory: 2386788K/2572288K available (10240K kernel code, 2184K rwdata, 8080K rodata, 39104K init, 897K bss, 185500K reserved, 0K cma-reserved)
Oct 8 19:45:58.914116 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 8 19:45:58.914122 kernel: trace event string verifier disabled
Oct 8 19:45:58.914129 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 19:45:58.914136 kernel: rcu: RCU event tracing is enabled.
Oct 8 19:45:58.914143 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 8 19:45:58.914150 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 19:45:58.914157 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 19:45:58.914163 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 19:45:58.914170 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 8 19:45:58.914178 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 8 19:45:58.914185 kernel: GICv3: 256 SPIs implemented
Oct 8 19:45:58.914191 kernel: GICv3: 0 Extended SPIs implemented
Oct 8 19:45:58.914198 kernel: Root IRQ handler: gic_handle_irq
Oct 8 19:45:58.914204 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 8 19:45:58.914211 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 8 19:45:58.914218 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 8 19:45:58.914225 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Oct 8 19:45:58.914232 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Oct 8 19:45:58.914238 kernel: GICv3: using LPI property table @0x00000000400f0000
Oct 8 19:45:58.914245 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Oct 8 19:45:58.914253 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 19:45:58.914259 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:45:58.914266 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 8 19:45:58.914273 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 8 19:45:58.914280 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 8 19:45:58.914287 kernel: arm-pv: using stolen time PV
Oct 8 19:45:58.914294 kernel: Console: colour dummy device 80x25
Oct 8 19:45:58.914301 kernel: ACPI: Core revision 20230628
Oct 8 19:45:58.914308 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 8 19:45:58.914315 kernel: pid_max: default: 32768 minimum: 301
Oct 8 19:45:58.914323 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Oct 8 19:45:58.914335 kernel: SELinux: Initializing.
Oct 8 19:45:58.914342 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:45:58.914349 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:45:58.914356 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:45:58.914364 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:45:58.914370 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 19:45:58.914377 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 19:45:58.914384 kernel: Platform MSI: ITS@0x8080000 domain created
Oct 8 19:45:58.914393 kernel: PCI/MSI: ITS@0x8080000 domain created
Oct 8 19:45:58.914399 kernel: Remapping and enabling EFI services.
Oct 8 19:45:58.914406 kernel: smp: Bringing up secondary CPUs ...
Oct 8 19:45:58.914479 kernel: Detected PIPT I-cache on CPU1
Oct 8 19:45:58.914487 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 8 19:45:58.914494 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Oct 8 19:45:58.914501 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:45:58.914508 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 8 19:45:58.914515 kernel: Detected PIPT I-cache on CPU2
Oct 8 19:45:58.914522 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 8 19:45:58.914532 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Oct 8 19:45:58.914539 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:45:58.914550 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 8 19:45:58.914558 kernel: Detected PIPT I-cache on CPU3
Oct 8 19:45:58.914565 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 8 19:45:58.914572 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Oct 8 19:45:58.914580 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:45:58.914587 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 8 19:45:58.914594 kernel: smp: Brought up 1 node, 4 CPUs
Oct 8 19:45:58.914603 kernel: SMP: Total of 4 processors activated.
Oct 8 19:45:58.914610 kernel: CPU features: detected: 32-bit EL0 Support
Oct 8 19:45:58.914617 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 8 19:45:58.914625 kernel: CPU features: detected: Common not Private translations
Oct 8 19:45:58.914632 kernel: CPU features: detected: CRC32 instructions
Oct 8 19:45:58.914639 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 8 19:45:58.914646 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 8 19:45:58.914653 kernel: CPU features: detected: LSE atomic instructions
Oct 8 19:45:58.914662 kernel: CPU features: detected: Privileged Access Never
Oct 8 19:45:58.914669 kernel: CPU features: detected: RAS Extension Support
Oct 8 19:45:58.914676 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 8 19:45:58.914684 kernel: CPU: All CPU(s) started at EL1
Oct 8 19:45:58.914691 kernel: alternatives: applying system-wide alternatives
Oct 8 19:45:58.914698 kernel: devtmpfs: initialized
Oct 8 19:45:58.914705 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 19:45:58.914713 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 8 19:45:58.914720 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 19:45:58.914728 kernel: SMBIOS 3.0.0 present.
Oct 8 19:45:58.914736 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Oct 8 19:45:58.914743 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 19:45:58.914750 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 8 19:45:58.914758 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 8 19:45:58.914765 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 8 19:45:58.914772 kernel: audit: initializing netlink subsys (disabled)
Oct 8 19:45:58.914780 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Oct 8 19:45:58.914787 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 19:45:58.914795 kernel: cpuidle: using governor menu
Oct 8 19:45:58.914803 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 8 19:45:58.914810 kernel: ASID allocator initialised with 32768 entries
Oct 8 19:45:58.914817 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 19:45:58.914825 kernel: Serial: AMBA PL011 UART driver
Oct 8 19:45:58.914832 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 8 19:45:58.914839 kernel: Modules: 0 pages in range for non-PLT usage
Oct 8 19:45:58.914846 kernel: Modules: 509104 pages in range for PLT usage
Oct 8 19:45:58.914854 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 19:45:58.914862 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 19:45:58.914870 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 8 19:45:58.914877 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 8 19:45:58.914884 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 19:45:58.914892 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 19:45:58.914899 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 8 19:45:58.914906 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 8 19:45:58.914913 kernel: ACPI: Added _OSI(Module Device)
Oct 8 19:45:58.914921 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 19:45:58.914929 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 19:45:58.914936 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 19:45:58.914944 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 19:45:58.914951 kernel: ACPI: Interpreter enabled
Oct 8 19:45:58.914959 kernel: ACPI: Using GIC for interrupt routing
Oct 8 19:45:58.914966 kernel: ACPI: MCFG table detected, 1 entries
Oct 8 19:45:58.914973 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 8 19:45:58.914980 kernel: printk: console [ttyAMA0] enabled
Oct 8 19:45:58.914987 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 8 19:45:58.915117 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 19:45:58.915189 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 8 19:45:58.915253 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 8 19:45:58.915314 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 8 19:45:58.915386 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 8 19:45:58.915396 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 8 19:45:58.915403 kernel: PCI host bridge to bus 0000:00
Oct 8 19:45:58.915484 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 8 19:45:58.915543 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 8 19:45:58.915600 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 8 19:45:58.915657 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 8 19:45:58.915735 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Oct 8 19:45:58.915808 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Oct 8 19:45:58.915873 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Oct 8 19:45:58.915939 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Oct 8 19:45:58.916003 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 8 19:45:58.916067 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 8 19:45:58.916130 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Oct 8 19:45:58.916194 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Oct 8 19:45:58.916253 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 8 19:45:58.916311 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 8 19:45:58.916382 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 8 19:45:58.916393 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 8 19:45:58.916404 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 8 19:45:58.916426 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 8 19:45:58.916438 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 8 19:45:58.916446 kernel: iommu: Default domain type: Translated
Oct 8 19:45:58.916454 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 8 19:45:58.916463 kernel: efivars: Registered efivars operations
Oct 8 19:45:58.916476 kernel: vgaarb: loaded
Oct 8 19:45:58.916485 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 8 19:45:58.916492 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 19:45:58.916500 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 19:45:58.916507 kernel: pnp: PnP ACPI init
Oct 8 19:45:58.916600 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 8 19:45:58.916611 kernel: pnp: PnP ACPI: found 1 devices
Oct 8 19:45:58.916619 kernel: NET: Registered PF_INET protocol family
Oct 8 19:45:58.916628 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 8 19:45:58.916635 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 8 19:45:58.916643 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 19:45:58.916650 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 19:45:58.916658 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 8 19:45:58.916665 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 8 19:45:58.916672 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:45:58.916680 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:45:58.916687 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 8 19:45:58.916696 kernel: PCI: CLS 0 bytes, default 64
Oct 8 19:45:58.916703 kernel: kvm [1]: HYP mode not available
Oct 8 19:45:58.916710 kernel: Initialise system trusted keyrings
Oct 8 19:45:58.916717 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 8 19:45:58.916725 kernel: Key type asymmetric registered
Oct 8 19:45:58.916732 kernel: Asymmetric key parser 'x509' registered
Oct 8 19:45:58.916739 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 8 19:45:58.916747 kernel: io scheduler mq-deadline registered
Oct 8 19:45:58.916754 kernel: io scheduler kyber registered
Oct 8 19:45:58.916762 kernel: io scheduler bfq registered
Oct 8 19:45:58.916769 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 8 19:45:58.916777 kernel: ACPI: button: Power Button [PWRB]
Oct 8 19:45:58.916784 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 8 19:45:58.916850 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 8 19:45:58.916860 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 8 19:45:58.916867 kernel: thunder_xcv, ver 1.0
Oct 8 19:45:58.916875 kernel: thunder_bgx, ver 1.0
Oct 8 19:45:58.916882 kernel: nicpf, ver 1.0
Oct 8 19:45:58.916891 kernel: nicvf, ver 1.0
Oct 8 19:45:58.916961 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 8 19:45:58.917021 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-08T19:45:58 UTC (1728416758)
Oct 8 19:45:58.917031 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 8 19:45:58.917038 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Oct 8 19:45:58.917046 kernel: watchdog: Delayed init of the lockup detector failed: -19
Oct 8 19:45:58.917053 kernel: watchdog: Hard watchdog permanently disabled
Oct 8 19:45:58.917060 kernel: NET: Registered PF_INET6 protocol family
Oct 8 19:45:58.917069 kernel: Segment Routing with IPv6
Oct 8 19:45:58.917076 kernel: In-situ OAM (IOAM) with IPv6
Oct 8 19:45:58.917084 kernel: NET: Registered PF_PACKET protocol family
Oct 8 19:45:58.917091 kernel: Key type dns_resolver registered
Oct 8 19:45:58.917098 kernel: registered taskstats version 1
Oct 8 19:45:58.917105 kernel: Loading compiled-in X.509 certificates
Oct 8 19:45:58.917113 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: e5b54c43c129014ce5ace0e8cd7b641a0fcb136e'
Oct 8 19:45:58.917120 kernel: Key type .fscrypt registered
Oct 8 19:45:58.917127 kernel: Key type fscrypt-provisioning registered
Oct 8 19:45:58.917136 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 8 19:45:58.917143 kernel: ima: Allocated hash algorithm: sha1
Oct 8 19:45:58.917150 kernel: ima: No architecture policies found
Oct 8 19:45:58.917158 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 8 19:45:58.917165 kernel: clk: Disabling unused clocks
Oct 8 19:45:58.917172 kernel: Freeing unused kernel memory: 39104K
Oct 8 19:45:58.917180 kernel: Run /init as init process
Oct 8 19:45:58.917187 kernel: with arguments:
Oct 8 19:45:58.917194 kernel: /init
Oct 8 19:45:58.917203 kernel: with environment:
Oct 8 19:45:58.917210 kernel: HOME=/
Oct 8 19:45:58.917217 kernel: TERM=linux
Oct 8 19:45:58.917224 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 8 19:45:58.917233 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:45:58.917242 systemd[1]: Detected virtualization kvm.
Oct 8 19:45:58.917250 systemd[1]: Detected architecture arm64.
Oct 8 19:45:58.917258 systemd[1]: Running in initrd.
Oct 8 19:45:58.917266 systemd[1]: No hostname configured, using default hostname.
Oct 8 19:45:58.917274 systemd[1]: Hostname set to .
Oct 8 19:45:58.917282 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:45:58.917289 systemd[1]: Queued start job for default target initrd.target.
Oct 8 19:45:58.917297 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:45:58.917305 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:45:58.917313 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 19:45:58.917321 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:45:58.917336 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 19:45:58.917344 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 19:45:58.917354 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 19:45:58.917361 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 19:45:58.917369 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:45:58.917377 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:45:58.917385 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:45:58.917394 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:45:58.917402 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:45:58.917418 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:45:58.917427 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:45:58.917435 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:45:58.917443 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 19:45:58.917450 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 19:45:58.917458 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:45:58.917468 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:45:58.917476 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:45:58.917484 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:45:58.917492 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 19:45:58.917499 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:45:58.917507 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 19:45:58.917515 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 19:45:58.917522 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:45:58.917530 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:45:58.917539 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:45:58.917547 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 19:45:58.917555 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:45:58.917562 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 19:45:58.917571 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:45:58.917580 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:45:58.917588 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:45:58.917613 systemd-journald[238]: Collecting audit messages is disabled.
Oct 8 19:45:58.917633 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:45:58.917641 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 19:45:58.917649 systemd-journald[238]: Journal started
Oct 8 19:45:58.917667 systemd-journald[238]: Runtime Journal (/run/log/journal/2f45fc1b0a1447edbc5d723e3d1fc3f4) is 5.9M, max 47.3M, 41.4M free.
Oct 8 19:45:58.921508 kernel: Bridge firewalling registered
Oct 8 19:45:58.901681 systemd-modules-load[239]: Inserted module 'overlay'
Oct 8 19:45:58.920195 systemd-modules-load[239]: Inserted module 'br_netfilter'
Oct 8 19:45:58.926749 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:45:58.926769 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:45:58.927936 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:45:58.931799 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:45:58.933546 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 8 19:45:58.937278 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:45:58.941596 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:45:58.945617 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:45:58.946817 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 8 19:45:58.956563 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 19:45:58.958822 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:45:58.968172 dracut-cmdline[276]: dracut-dracut-053
Oct 8 19:45:58.971846 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c838587f25bc3913a152d0e9ed071e943b77b8dea81b67c254bbd10c29051fd2
Oct 8 19:45:58.985755 systemd-resolved[280]: Positive Trust Anchors:
Oct 8 19:45:58.985776 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:45:58.985806 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 8 19:45:58.990458 systemd-resolved[280]: Defaulting to hostname 'linux'.
Oct 8 19:45:58.991397 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:45:58.994582 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:45:59.036441 kernel: SCSI subsystem initialized
Oct 8 19:45:59.041427 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 19:45:59.048433 kernel: iscsi: registered transport (tcp)
Oct 8 19:45:59.063551 kernel: iscsi: registered transport (qla4xxx)
Oct 8 19:45:59.063598 kernel: QLogic iSCSI HBA Driver
Oct 8 19:45:59.106657 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:45:59.119602 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 19:45:59.134565 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 19:45:59.134608 kernel: device-mapper: uevent: version 1.0.3
Oct 8 19:45:59.135616 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 19:45:59.182437 kernel: raid6: neonx8 gen() 15760 MB/s
Oct 8 19:45:59.199431 kernel: raid6: neonx4 gen() 15653 MB/s
Oct 8 19:45:59.216681 kernel: raid6: neonx2 gen() 12965 MB/s
Oct 8 19:45:59.233446 kernel: raid6: neonx1 gen() 9846 MB/s
Oct 8 19:45:59.250436 kernel: raid6: int64x8 gen() 6955 MB/s
Oct 8 19:45:59.267456 kernel: raid6: int64x4 gen() 7349 MB/s
Oct 8 19:45:59.284440 kernel: raid6: int64x2 gen() 6117 MB/s
Oct 8 19:45:59.301521 kernel: raid6: int64x1 gen() 5056 MB/s
Oct 8 19:45:59.301582 kernel: raid6: using algorithm neonx8 gen() 15760 MB/s
Oct 8 19:45:59.319535 kernel: raid6: .... xor() 11934 MB/s, rmw enabled
Oct 8 19:45:59.319591 kernel: raid6: using neon recovery algorithm
Oct 8 19:45:59.324431 kernel: xor: measuring software checksum speed
Oct 8 19:45:59.324448 kernel: 8regs : 19066 MB/sec
Oct 8 19:45:59.325555 kernel: 32regs : 19636 MB/sec
Oct 8 19:45:59.326767 kernel: arm64_neon : 26901 MB/sec
Oct 8 19:45:59.326779 kernel: xor: using function: arm64_neon (26901 MB/sec)
Oct 8 19:45:59.377433 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 19:45:59.387464 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:45:59.397563 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:45:59.410447 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Oct 8 19:45:59.413525 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:45:59.420535 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 8 19:45:59.431014 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Oct 8 19:45:59.455241 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:45:59.462550 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:45:59.499277 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:45:59.505554 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 8 19:45:59.516196 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:45:59.517818 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:45:59.519756 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:45:59.522109 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:45:59.529571 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 8 19:45:59.539474 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:45:59.550369 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Oct 8 19:45:59.550637 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 8 19:45:59.551069 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:45:59.556936 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 8 19:45:59.556956 kernel: GPT:9289727 != 19775487
Oct 8 19:45:59.556966 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 8 19:45:59.551178 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:45:59.561287 kernel: GPT:9289727 != 19775487
Oct 8 19:45:59.561306 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 8 19:45:59.561316 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:45:59.561330 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:45:59.562425 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:45:59.562562 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:45:59.565798 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:45:59.575429 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (521)
Oct 8 19:45:59.577624 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:45:59.580075 kernel: BTRFS: device fsid a2a78d47-736b-4018-a518-3cfb16920575 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (508)
Oct 8 19:45:59.584840 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 8 19:45:59.589466 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:45:59.594296 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 8 19:45:59.605047 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 19:45:59.608835 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 8 19:45:59.609975 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 8 19:45:59.623591 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 8 19:45:59.628399 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:45:59.630630 disk-uuid[550]: Primary Header is updated.
Oct 8 19:45:59.630630 disk-uuid[550]: Secondary Entries is updated.
Oct 8 19:45:59.630630 disk-uuid[550]: Secondary Header is updated.
Oct 8 19:45:59.636424 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:45:59.647862 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:46:00.645315 disk-uuid[551]: The operation has completed successfully.
Oct 8 19:46:00.646947 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:46:00.670365 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 8 19:46:00.670490 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 8 19:46:00.696619 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 8 19:46:00.699442 sh[574]: Success
Oct 8 19:46:00.710439 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Oct 8 19:46:00.752836 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 8 19:46:00.754601 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 8 19:46:00.755539 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 8 19:46:00.767425 kernel: BTRFS info (device dm-0): first mount of filesystem a2a78d47-736b-4018-a518-3cfb16920575
Oct 8 19:46:00.767469 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:46:00.767490 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 8 19:46:00.767517 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 8 19:46:00.768834 kernel: BTRFS info (device dm-0): using free space tree
Oct 8 19:46:00.772161 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 8 19:46:00.773478 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 8 19:46:00.782557 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 8 19:46:00.783954 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 8 19:46:00.791309 kernel: BTRFS info (device vda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:46:00.791350 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:46:00.791361 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:46:00.794435 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:46:00.800921 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 8 19:46:00.802936 kernel: BTRFS info (device vda6): last unmount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:46:00.807437 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 8 19:46:00.813534 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 8 19:46:00.872704 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:46:00.885607 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:46:00.906785 ignition[667]: Ignition 2.18.0
Oct 8 19:46:00.906796 ignition[667]: Stage: fetch-offline
Oct 8 19:46:00.906838 ignition[667]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:46:00.906846 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:46:00.908315 systemd-networkd[765]: lo: Link UP
Oct 8 19:46:00.906934 ignition[667]: parsed url from cmdline: ""
Oct 8 19:46:00.908319 systemd-networkd[765]: lo: Gained carrier
Oct 8 19:46:00.906937 ignition[667]: no config URL provided
Oct 8 19:46:00.908963 systemd-networkd[765]: Enumeration completed
Oct 8 19:46:00.906942 ignition[667]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 19:46:00.909347 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:46:00.906949 ignition[667]: no config at "/usr/lib/ignition/user.ign"
Oct 8 19:46:00.909363 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:46:00.906978 ignition[667]: op(1): [started] loading QEMU firmware config module
Oct 8 19:46:00.909366 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:46:00.906982 ignition[667]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 8 19:46:00.910151 systemd-networkd[765]: eth0: Link UP
Oct 8 19:46:00.916454 ignition[667]: op(1): [finished] loading QEMU firmware config module
Oct 8 19:46:00.910154 systemd-networkd[765]: eth0: Gained carrier
Oct 8 19:46:00.910160 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:46:00.911780 systemd[1]: Reached target network.target - Network.
Oct 8 19:46:00.928449 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:46:00.962915 ignition[667]: parsing config with SHA512: 46e0fd7f05599f0f4db34287d707fe12b1d3218286586e2aae2c68866764e81a581593636ecd1b591b475d4fe60053267dcda3cba8bca3c4af8320eae16f4a1d
Oct 8 19:46:00.967086 unknown[667]: fetched base config from "system"
Oct 8 19:46:00.967098 unknown[667]: fetched user config from "qemu"
Oct 8 19:46:00.967699 ignition[667]: fetch-offline: fetch-offline passed
Oct 8 19:46:00.967759 ignition[667]: Ignition finished successfully
Oct 8 19:46:00.970175 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:46:00.971550 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 8 19:46:00.977592 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 8 19:46:00.987784 ignition[774]: Ignition 2.18.0
Oct 8 19:46:00.987793 ignition[774]: Stage: kargs
Oct 8 19:46:00.987927 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:46:00.987936 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:46:00.988783 ignition[774]: kargs: kargs passed
Oct 8 19:46:00.988823 ignition[774]: Ignition finished successfully
Oct 8 19:46:00.993363 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 8 19:46:01.003554 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 8 19:46:01.013135 ignition[783]: Ignition 2.18.0
Oct 8 19:46:01.014151 ignition[783]: Stage: disks
Oct 8 19:46:01.014316 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:46:01.014336 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:46:01.015245 ignition[783]: disks: disks passed
Oct 8 19:46:01.016636 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 8 19:46:01.015286 ignition[783]: Ignition finished successfully
Oct 8 19:46:01.018273 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 8 19:46:01.019617 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 19:46:01.021246 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:46:01.023119 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:46:01.025025 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:46:01.037610 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 8 19:46:01.048297 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 8 19:46:01.052264 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 8 19:46:01.065566 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 8 19:46:01.111436 kernel: EXT4-fs (vda9): mounted filesystem fbf53fb2-c32f-44fa-a235-3100e56d8882 r/w with ordered data mode. Quota mode: none.
Oct 8 19:46:01.112051 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 8 19:46:01.113284 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:46:01.130487 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:46:01.132173 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 8 19:46:01.133329 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 8 19:46:01.133452 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 8 19:46:01.133481 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:46:01.141709 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (803)
Oct 8 19:46:01.139841 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 8 19:46:01.145919 kernel: BTRFS info (device vda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:46:01.145938 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:46:01.145949 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:46:01.141571 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 8 19:46:01.148425 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:46:01.150600 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:46:01.200344 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Oct 8 19:46:01.203931 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Oct 8 19:46:01.208474 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Oct 8 19:46:01.212279 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 8 19:46:01.287237 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 8 19:46:01.301519 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 8 19:46:01.303059 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 8 19:46:01.309429 kernel: BTRFS info (device vda6): last unmount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:46:01.325785 ignition[916]: INFO : Ignition 2.18.0
Oct 8 19:46:01.325785 ignition[916]: INFO : Stage: mount
Oct 8 19:46:01.327467 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:46:01.327467 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:46:01.327467 ignition[916]: INFO : mount: mount passed
Oct 8 19:46:01.327467 ignition[916]: INFO : Ignition finished successfully
Oct 8 19:46:01.329442 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 8 19:46:01.331117 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 8 19:46:01.337505 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 8 19:46:01.765474 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 8 19:46:01.775581 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:46:01.781989 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (931)
Oct 8 19:46:01.782026 kernel: BTRFS info (device vda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:46:01.782037 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:46:01.783523 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:46:01.785426 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:46:01.786609 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:46:01.802343 ignition[948]: INFO : Ignition 2.18.0
Oct 8 19:46:01.802343 ignition[948]: INFO : Stage: files
Oct 8 19:46:01.803861 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:46:01.803861 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:46:01.803861 ignition[948]: DEBUG : files: compiled without relabeling support, skipping
Oct 8 19:46:01.807100 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 8 19:46:01.807100 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 8 19:46:01.807100 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 8 19:46:01.807100 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 8 19:46:01.807100 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 8 19:46:01.807039 unknown[948]: wrote ssh authorized keys file for user: core
Oct 8 19:46:01.814591 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 8 19:46:01.814591 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 8 19:46:01.814591 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 8 19:46:01.814591 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Oct 8 19:46:02.061857 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 8 19:46:02.245532 systemd-networkd[765]: eth0: Gained IPv6LL
Oct 8 19:46:03.134558 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 8 19:46:03.134558 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Oct 8 19:46:03.138353 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Oct 8 19:46:03.441372 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Oct 8 19:46:03.510516 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Oct 8 19:46:03.512395 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Oct 8 19:46:03.512395 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Oct 8 19:46:03.512395 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:46:03.512395 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:46:03.512395 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:46:03.512395 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:46:03.512395 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:46:03.512395 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:46:03.512395 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:46:03.512395 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:46:03.512395 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 8 19:46:03.512395 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 8 19:46:03.512395 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 8 19:46:03.512395 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Oct 8 19:46:03.746730 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Oct 8 19:46:03.941263 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 8 19:46:03.941263 ignition[948]: INFO : files: op(d): [started] processing unit "containerd.service"
Oct 8 19:46:03.944655 ignition[948]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 8 19:46:03.944655 ignition[948]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 8 19:46:03.944655 ignition[948]: INFO : files: op(d): [finished] processing unit "containerd.service"
Oct 8 19:46:03.944655 ignition[948]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Oct 8 19:46:03.944655 ignition[948]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:46:03.944655 ignition[948]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:46:03.944655 ignition[948]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Oct 8 19:46:03.944655 ignition[948]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Oct 8 19:46:03.944655 ignition[948]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 19:46:03.944655 ignition[948]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 19:46:03.944655 ignition[948]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Oct 8 19:46:03.944655 ignition[948]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Oct 8 19:46:03.971009 ignition[948]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 19:46:03.976046 ignition[948]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 19:46:03.977517 ignition[948]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 8 19:46:03.977517 ignition[948]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Oct 8 19:46:03.977517 ignition[948]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Oct 8 19:46:03.977517 ignition[948]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:46:03.977517 ignition[948]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:46:03.977517 ignition[948]: INFO : files: files passed
Oct 8 19:46:03.977517 ignition[948]: INFO : Ignition finished successfully
Oct 8 19:46:03.981682 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 8 19:46:03.994557 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 8 19:46:03.996768 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 8 19:46:03.998126 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 8 19:46:03.998200 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 8 19:46:04.004239 initrd-setup-root-after-ignition[978]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 8 19:46:04.007263 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:46:04.007263 initrd-setup-root-after-ignition[980]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:46:04.010872 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:46:04.010230 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:46:04.011886 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 8 19:46:04.028540 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 8 19:46:04.049088 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 8 19:46:04.049180 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 8 19:46:04.051364 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 8 19:46:04.053099 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 8 19:46:04.054795 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 8 19:46:04.062602 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 8 19:46:04.077432 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:46:04.086523 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 8 19:46:04.094052 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:46:04.095212 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:46:04.097306 systemd[1]: Stopped target timers.target - Timer Units.
Oct 8 19:46:04.099429 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 8 19:46:04.099532 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:46:04.102043 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 8 19:46:04.104112 systemd[1]: Stopped target basic.target - Basic System.
Oct 8 19:46:04.105761 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 8 19:46:04.107495 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:46:04.109469 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 8 19:46:04.111407 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 8 19:46:04.113310 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:46:04.115346 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 8 19:46:04.117400 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 8 19:46:04.119203 systemd[1]: Stopped target swap.target - Swaps.
Oct 8 19:46:04.120805 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 8 19:46:04.120911 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:46:04.123245 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:46:04.124368 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:46:04.126264 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 8 19:46:04.129475 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:46:04.130628 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 8 19:46:04.130731 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:46:04.133489 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 8 19:46:04.133588 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:46:04.135530 systemd[1]: Stopped target paths.target - Path Units.
Oct 8 19:46:04.137140 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 8 19:46:04.140475 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:46:04.141665 systemd[1]: Stopped target slices.target - Slice Units.
Oct 8 19:46:04.143764 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 8 19:46:04.145340 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 8 19:46:04.145434 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:46:04.147047 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 8 19:46:04.147121 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:46:04.148678 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 8 19:46:04.148780 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:46:04.150569 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 8 19:46:04.150666 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 8 19:46:04.162629 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 8 19:46:04.163498 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 8 19:46:04.163621 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:46:04.168201 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 8 19:46:04.169062 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 8 19:46:04.169186 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:46:04.173497 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 8 19:46:04.178996 ignition[1004]: INFO : Ignition 2.18.0
Oct 8 19:46:04.178996 ignition[1004]: INFO : Stage: umount
Oct 8 19:46:04.178996 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:46:04.178996 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:46:04.178996 ignition[1004]: INFO : umount: umount passed
Oct 8 19:46:04.178996 ignition[1004]: INFO : Ignition finished successfully
Oct 8 19:46:04.173611 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:46:04.177062 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 8 19:46:04.177159 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 8 19:46:04.179678 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 8 19:46:04.180727 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 8 19:46:04.180805 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 8 19:46:04.184240 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 8 19:46:04.184340 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 8 19:46:04.185801 systemd[1]: Stopped target network.target - Network.
Oct 8 19:46:04.187328 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 8 19:46:04.187399 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 8 19:46:04.190743 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 8 19:46:04.190792 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 8 19:46:04.192398 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 8 19:46:04.192459 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 8 19:46:04.194219 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 8 19:46:04.194262 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 8 19:46:04.195933 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 8 19:46:04.195979 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 8 19:46:04.198004 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 8 19:46:04.199968 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 8 19:46:04.208086 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 8 19:46:04.208195 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 8 19:46:04.208451 systemd-networkd[765]: eth0: DHCPv6 lease lost
Oct 8 19:46:04.210395 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 8 19:46:04.212546 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 8 19:46:04.214963 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 8 19:46:04.215019 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 8 19:46:04.224508 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 8 19:46:04.225332 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 8 19:46:04.225388 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 19:46:04.227290 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 19:46:04.227346 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 19:46:04.229104 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 8 19:46:04.229145 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 8 19:46:04.231120 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 8 19:46:04.231163 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Oct 8 19:46:04.233027 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 19:46:04.243821 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 8 19:46:04.243922 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 8 19:46:04.253118 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 8 19:46:04.253257 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 19:46:04.255596 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 8 19:46:04.255636 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 8 19:46:04.257395 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 8 19:46:04.257433 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 19:46:04.259171 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Oct 8 19:46:04.259218 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 8 19:46:04.261878 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 8 19:46:04.261920 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 8 19:46:04.264422 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 19:46:04.264467 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 19:46:04.276560 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 8 19:46:04.277525 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 8 19:46:04.277578 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 19:46:04.279579 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 19:46:04.279620 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:46:04.281719 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 8 19:46:04.281789 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 8 19:46:04.284837 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 8 19:46:04.286886 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 8 19:46:04.295790 systemd[1]: Switching root. Oct 8 19:46:04.330326 systemd-journald[238]: Journal stopped Oct 8 19:46:05.071661 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
Oct 8 19:46:05.071716 kernel: SELinux: policy capability network_peer_controls=1 Oct 8 19:46:05.071732 kernel: SELinux: policy capability open_perms=1 Oct 8 19:46:05.071742 kernel: SELinux: policy capability extended_socket_class=1 Oct 8 19:46:05.071751 kernel: SELinux: policy capability always_check_network=0 Oct 8 19:46:05.071763 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 8 19:46:05.071773 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 8 19:46:05.071782 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 8 19:46:05.071792 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 8 19:46:05.071802 kernel: audit: type=1403 audit(1728416764.530:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 8 19:46:05.071812 systemd[1]: Successfully loaded SELinux policy in 34.102ms. Oct 8 19:46:05.071831 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.042ms. Oct 8 19:46:05.071843 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 19:46:05.071853 systemd[1]: Detected virtualization kvm. Oct 8 19:46:05.071865 systemd[1]: Detected architecture arm64. Oct 8 19:46:05.071875 systemd[1]: Detected first boot. Oct 8 19:46:05.071885 systemd[1]: Initializing machine ID from VM UUID. Oct 8 19:46:05.071897 zram_generator::config[1064]: No configuration found. Oct 8 19:46:05.071909 systemd[1]: Populated /etc with preset unit settings. Oct 8 19:46:05.071919 systemd[1]: Queued start job for default target multi-user.target. Oct 8 19:46:05.071929 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 8 19:46:05.071940 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
Oct 8 19:46:05.071952 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 8 19:46:05.071962 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 8 19:46:05.071973 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 8 19:46:05.071984 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 8 19:46:05.071994 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 8 19:46:05.072004 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 8 19:46:05.072015 systemd[1]: Created slice user.slice - User and Session Slice. Oct 8 19:46:05.072025 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 19:46:05.072036 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 19:46:05.072048 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 8 19:46:05.072058 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 8 19:46:05.072069 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 8 19:46:05.072079 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 19:46:05.072090 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 8 19:46:05.072100 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 19:46:05.072110 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 8 19:46:05.072121 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 19:46:05.072132 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Oct 8 19:46:05.072144 systemd[1]: Reached target slices.target - Slice Units. Oct 8 19:46:05.072154 systemd[1]: Reached target swap.target - Swaps. Oct 8 19:46:05.072164 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 8 19:46:05.072175 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 8 19:46:05.072185 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 8 19:46:05.072195 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 8 19:46:05.072206 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 19:46:05.072216 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 19:46:05.072250 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 19:46:05.072281 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 8 19:46:05.072295 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 8 19:46:05.072306 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 8 19:46:05.072343 systemd[1]: Mounting media.mount - External Media Directory... Oct 8 19:46:05.072357 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 8 19:46:05.072367 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 8 19:46:05.072377 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 8 19:46:05.072388 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 8 19:46:05.072515 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 19:46:05.072533 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 19:46:05.072544 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Oct 8 19:46:05.072554 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 19:46:05.072564 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 8 19:46:05.072574 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 19:46:05.072584 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 8 19:46:05.072595 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 19:46:05.072609 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 8 19:46:05.072620 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Oct 8 19:46:05.072633 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Oct 8 19:46:05.072643 kernel: loop: module loaded Oct 8 19:46:05.072653 kernel: ACPI: bus type drm_connector registered Oct 8 19:46:05.072663 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 19:46:05.072673 kernel: fuse: init (API version 7.39) Oct 8 19:46:05.072682 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 19:46:05.072693 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 8 19:46:05.072705 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 8 19:46:05.072715 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 19:46:05.072746 systemd-journald[1149]: Collecting audit messages is disabled. Oct 8 19:46:05.072771 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Oct 8 19:46:05.072783 systemd-journald[1149]: Journal started Oct 8 19:46:05.072804 systemd-journald[1149]: Runtime Journal (/run/log/journal/2f45fc1b0a1447edbc5d723e3d1fc3f4) is 5.9M, max 47.3M, 41.4M free. Oct 8 19:46:05.075888 systemd[1]: Started systemd-journald.service - Journal Service. Oct 8 19:46:05.076956 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 8 19:46:05.078135 systemd[1]: Mounted media.mount - External Media Directory. Oct 8 19:46:05.079188 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 8 19:46:05.080323 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 8 19:46:05.081468 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 8 19:46:05.082661 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 8 19:46:05.084053 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 19:46:05.085475 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 8 19:46:05.085634 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 8 19:46:05.087015 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 19:46:05.087170 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 19:46:05.088497 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 19:46:05.088716 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 19:46:05.089927 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 19:46:05.090083 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 19:46:05.091472 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 8 19:46:05.091623 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 8 19:46:05.092849 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Oct 8 19:46:05.093062 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 19:46:05.094596 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 19:46:05.095981 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 8 19:46:05.097873 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 8 19:46:05.109331 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 8 19:46:05.119514 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 8 19:46:05.121528 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 8 19:46:05.122606 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 8 19:46:05.125591 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 8 19:46:05.127755 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 8 19:46:05.128970 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 19:46:05.132150 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 8 19:46:05.133299 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 19:46:05.137480 systemd-journald[1149]: Time spent on flushing to /var/log/journal/2f45fc1b0a1447edbc5d723e3d1fc3f4 is 11.369ms for 845 entries. Oct 8 19:46:05.137480 systemd-journald[1149]: System Journal (/var/log/journal/2f45fc1b0a1447edbc5d723e3d1fc3f4) is 8.0M, max 195.6M, 187.6M free. Oct 8 19:46:05.156004 systemd-journald[1149]: Received client request to flush runtime journal. 
Oct 8 19:46:05.135571 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 19:46:05.137537 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 8 19:46:05.141536 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 19:46:05.142840 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 8 19:46:05.144057 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 8 19:46:05.145527 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 8 19:46:05.148184 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 8 19:46:05.151165 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 8 19:46:05.157983 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 8 19:46:05.167341 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 19:46:05.168897 udevadm[1206]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 8 19:46:05.176311 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Oct 8 19:46:05.176337 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Oct 8 19:46:05.182227 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 8 19:46:05.188562 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 8 19:46:05.206547 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 8 19:46:05.214568 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 19:46:05.226729 systemd-tmpfiles[1223]: ACLs are not supported, ignoring. Oct 8 19:46:05.226750 systemd-tmpfiles[1223]: ACLs are not supported, ignoring. 
Oct 8 19:46:05.230274 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 19:46:05.556176 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 8 19:46:05.564565 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 19:46:05.584660 systemd-udevd[1229]: Using default interface naming scheme 'v255'. Oct 8 19:46:05.597640 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 19:46:05.611200 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 19:46:05.620206 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 8 19:46:05.628631 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Oct 8 19:46:05.641472 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1245) Oct 8 19:46:05.659436 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1235) Oct 8 19:46:05.674356 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 8 19:46:05.678905 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 8 19:46:05.737674 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:46:05.741349 systemd-networkd[1237]: lo: Link UP Oct 8 19:46:05.741355 systemd-networkd[1237]: lo: Gained carrier Oct 8 19:46:05.744004 systemd-networkd[1237]: Enumeration completed Oct 8 19:46:05.744056 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 8 19:46:05.744623 systemd-networkd[1237]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:46:05.744629 systemd-networkd[1237]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 8 19:46:05.745368 systemd-networkd[1237]: eth0: Link UP Oct 8 19:46:05.745438 systemd-networkd[1237]: eth0: Gained carrier Oct 8 19:46:05.745509 systemd-networkd[1237]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:46:05.745575 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 19:46:05.748569 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 8 19:46:05.750866 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 8 19:46:05.763568 lvm[1266]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 19:46:05.768500 systemd-networkd[1237]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 8 19:46:05.776608 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:46:05.792800 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 8 19:46:05.794186 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 19:46:05.806565 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 8 19:46:05.809892 lvm[1275]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 19:46:05.835769 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 8 19:46:05.837091 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 8 19:46:05.838266 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 8 19:46:05.838298 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 19:46:05.839293 systemd[1]: Reached target machines.target - Containers. 
Oct 8 19:46:05.841283 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 8 19:46:05.852558 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 8 19:46:05.854757 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 8 19:46:05.855805 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 19:46:05.856687 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 8 19:46:05.858800 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 8 19:46:05.863546 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 8 19:46:05.865455 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 8 19:46:05.869433 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 8 19:46:05.876640 kernel: loop0: detected capacity change from 0 to 113672 Oct 8 19:46:05.876727 kernel: block loop0: the capability attribute has been deprecated. Oct 8 19:46:05.879780 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 8 19:46:05.880480 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Oct 8 19:46:05.888433 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 8 19:46:05.935453 kernel: loop1: detected capacity change from 0 to 59688 Oct 8 19:46:05.967426 kernel: loop2: detected capacity change from 0 to 194512 Oct 8 19:46:06.003442 kernel: loop3: detected capacity change from 0 to 113672 Oct 8 19:46:06.011432 kernel: loop4: detected capacity change from 0 to 59688 Oct 8 19:46:06.016433 kernel: loop5: detected capacity change from 0 to 194512 Oct 8 19:46:06.020235 (sd-merge)[1295]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Oct 8 19:46:06.020644 (sd-merge)[1295]: Merged extensions into '/usr'. Oct 8 19:46:06.024901 systemd[1]: Reloading requested from client PID 1283 ('systemd-sysext') (unit systemd-sysext.service)... Oct 8 19:46:06.024914 systemd[1]: Reloading... Oct 8 19:46:06.064548 zram_generator::config[1321]: No configuration found. Oct 8 19:46:06.093300 ldconfig[1279]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 8 19:46:06.163611 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:46:06.212303 systemd[1]: Reloading finished in 187 ms. Oct 8 19:46:06.231112 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 8 19:46:06.232545 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 8 19:46:06.251554 systemd[1]: Starting ensure-sysext.service... Oct 8 19:46:06.253259 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Oct 8 19:46:06.256538 systemd[1]: Reloading requested from client PID 1362 ('systemctl') (unit ensure-sysext.service)... Oct 8 19:46:06.256553 systemd[1]: Reloading... 
Oct 8 19:46:06.268401 systemd-tmpfiles[1367]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 8 19:46:06.268665 systemd-tmpfiles[1367]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 8 19:46:06.269260 systemd-tmpfiles[1367]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 8 19:46:06.269509 systemd-tmpfiles[1367]: ACLs are not supported, ignoring. Oct 8 19:46:06.269560 systemd-tmpfiles[1367]: ACLs are not supported, ignoring. Oct 8 19:46:06.271809 systemd-tmpfiles[1367]: Detected autofs mount point /boot during canonicalization of boot. Oct 8 19:46:06.271821 systemd-tmpfiles[1367]: Skipping /boot Oct 8 19:46:06.278026 systemd-tmpfiles[1367]: Detected autofs mount point /boot during canonicalization of boot. Oct 8 19:46:06.278040 systemd-tmpfiles[1367]: Skipping /boot Oct 8 19:46:06.298434 zram_generator::config[1393]: No configuration found. Oct 8 19:46:06.388889 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:46:06.438106 systemd[1]: Reloading finished in 181 ms. Oct 8 19:46:06.453164 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Oct 8 19:46:06.468090 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 19:46:06.470454 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 8 19:46:06.472747 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 8 19:46:06.476675 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 8 19:46:06.480634 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Oct 8 19:46:06.489473 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 19:46:06.490723 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 19:46:06.501094 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 19:46:06.504113 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 19:46:06.505359 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 19:46:06.506217 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 8 19:46:06.508042 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 19:46:06.508400 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 19:46:06.510132 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 19:46:06.510374 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 19:46:06.512013 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 19:46:06.512255 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 19:46:06.520128 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 19:46:06.532704 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 19:46:06.534966 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 19:46:06.537354 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 19:46:06.538380 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 19:46:06.542735 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Oct 8 19:46:06.544803 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 8 19:46:06.546657 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 8 19:46:06.548322 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 19:46:06.548661 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 19:46:06.553200 augenrules[1475]: No rules Oct 8 19:46:06.550193 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 19:46:06.550368 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 19:46:06.552529 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 19:46:06.553978 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 19:46:06.554148 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 19:46:06.558669 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 8 19:46:06.565893 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 19:46:06.567771 systemd-resolved[1440]: Positive Trust Anchors: Oct 8 19:46:06.567789 systemd-resolved[1440]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 19:46:06.567820 systemd-resolved[1440]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Oct 8 19:46:06.573283 systemd-resolved[1440]: Defaulting to hostname 'linux'. Oct 8 19:46:06.573653 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 19:46:06.575645 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 8 19:46:06.580239 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 19:46:06.582307 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 19:46:06.583528 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 19:46:06.583660 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 8 19:46:06.584671 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 19:46:06.586216 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 19:46:06.586372 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 19:46:06.587854 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 19:46:06.587991 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Oct 8 19:46:06.589424 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 19:46:06.589560 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 19:46:06.591090 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 19:46:06.591264 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 19:46:06.594579 systemd[1]: Finished ensure-sysext.service. Oct 8 19:46:06.598665 systemd[1]: Reached target network.target - Network. Oct 8 19:46:06.599557 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 19:46:06.600642 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 19:46:06.600714 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 19:46:06.613590 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 8 19:46:06.656712 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 8 19:46:06.657404 systemd-timesyncd[1508]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 8 19:46:06.657515 systemd-timesyncd[1508]: Initial clock synchronization to Tue 2024-10-08 19:46:06.798533 UTC. Oct 8 19:46:06.658194 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 19:46:06.659291 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 8 19:46:06.660503 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 8 19:46:06.661738 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 8 19:46:06.662897 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Oct 8 19:46:06.662932 systemd[1]: Reached target paths.target - Path Units. Oct 8 19:46:06.663780 systemd[1]: Reached target time-set.target - System Time Set. Oct 8 19:46:06.664837 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 8 19:46:06.665953 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 8 19:46:06.667152 systemd[1]: Reached target timers.target - Timer Units. Oct 8 19:46:06.668784 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 8 19:46:06.671113 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 8 19:46:06.673267 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 8 19:46:06.681306 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 8 19:46:06.682377 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 19:46:06.683282 systemd[1]: Reached target basic.target - Basic System. Oct 8 19:46:06.684339 systemd[1]: System is tainted: cgroupsv1 Oct 8 19:46:06.684381 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 8 19:46:06.684401 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 8 19:46:06.685480 systemd[1]: Starting containerd.service - containerd container runtime... Oct 8 19:46:06.687398 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 8 19:46:06.689422 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 8 19:46:06.692431 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 8 19:46:06.693642 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 8 19:46:06.697578 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Oct 8 19:46:06.701853 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 8 19:46:06.704674 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 8 19:46:06.705721 jq[1520]: false Oct 8 19:46:06.709830 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 8 19:46:06.714631 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 8 19:46:06.719476 extend-filesystems[1522]: Found loop3 Oct 8 19:46:06.721570 extend-filesystems[1522]: Found loop4 Oct 8 19:46:06.721570 extend-filesystems[1522]: Found loop5 Oct 8 19:46:06.721570 extend-filesystems[1522]: Found vda Oct 8 19:46:06.721570 extend-filesystems[1522]: Found vda1 Oct 8 19:46:06.721570 extend-filesystems[1522]: Found vda2 Oct 8 19:46:06.721570 extend-filesystems[1522]: Found vda3 Oct 8 19:46:06.721570 extend-filesystems[1522]: Found usr Oct 8 19:46:06.721570 extend-filesystems[1522]: Found vda4 Oct 8 19:46:06.721570 extend-filesystems[1522]: Found vda6 Oct 8 19:46:06.721570 extend-filesystems[1522]: Found vda7 Oct 8 19:46:06.721570 extend-filesystems[1522]: Found vda9 Oct 8 19:46:06.721570 extend-filesystems[1522]: Checking size of /dev/vda9 Oct 8 19:46:06.720043 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 8 19:46:06.737652 extend-filesystems[1522]: Resized partition /dev/vda9 Oct 8 19:46:06.727055 dbus-daemon[1519]: [system] SELinux support is enabled Oct 8 19:46:06.727997 systemd[1]: Starting update-engine.service - Update Engine... Oct 8 19:46:06.732511 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 8 19:46:06.735978 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 8 19:46:06.741813 jq[1544]: true Oct 8 19:46:06.741837 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Oct 8 19:46:06.742058 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 8 19:46:06.750465 extend-filesystems[1547]: resize2fs 1.47.0 (5-Feb-2023) Oct 8 19:46:06.742426 systemd[1]: motdgen.service: Deactivated successfully. Oct 8 19:46:06.742619 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 8 19:46:06.746497 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 8 19:46:06.746713 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 8 19:46:06.762480 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 8 19:46:06.763085 (ntainerd)[1556]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 8 19:46:06.768955 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 8 19:46:06.771165 jq[1551]: true Oct 8 19:46:06.768991 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 8 19:46:06.770958 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 8 19:46:06.770975 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 8 19:46:06.775156 update_engine[1538]: I1008 19:46:06.773102 1538 main.cc:92] Flatcar Update Engine starting Oct 8 19:46:06.780157 systemd[1]: Started update-engine.service - Update Engine. 
Oct 8 19:46:06.783403 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1242) Oct 8 19:46:06.785591 update_engine[1538]: I1008 19:46:06.781334 1538 update_check_scheduler.cc:74] Next update check in 9m35s Oct 8 19:46:06.785770 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 8 19:46:06.786916 tar[1550]: linux-arm64/helm Oct 8 19:46:06.787399 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 8 19:46:06.796519 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 8 19:46:06.816215 extend-filesystems[1547]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 8 19:46:06.816215 extend-filesystems[1547]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 8 19:46:06.816215 extend-filesystems[1547]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 8 19:46:06.821698 extend-filesystems[1522]: Resized filesystem in /dev/vda9 Oct 8 19:46:06.816372 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 8 19:46:06.816664 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 8 19:46:06.828122 systemd-logind[1533]: Watching system buttons on /dev/input/event0 (Power Button) Oct 8 19:46:06.828949 systemd-logind[1533]: New seat seat0. Oct 8 19:46:06.830517 systemd[1]: Started systemd-logind.service - User Login Management. Oct 8 19:46:06.859279 locksmithd[1567]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 8 19:46:06.866851 bash[1582]: Updated "/home/core/.ssh/authorized_keys" Oct 8 19:46:06.868759 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 8 19:46:06.873337 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Oct 8 19:46:06.986867 containerd[1556]: time="2024-10-08T19:46:06.986737840Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Oct 8 19:46:07.019770 containerd[1556]: time="2024-10-08T19:46:07.019326337Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 8 19:46:07.019770 containerd[1556]: time="2024-10-08T19:46:07.019609447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:46:07.021477 containerd[1556]: time="2024-10-08T19:46:07.021442400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:46:07.021652 containerd[1556]: time="2024-10-08T19:46:07.021538222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:46:07.021887 containerd[1556]: time="2024-10-08T19:46:07.021863015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:46:07.022481 containerd[1556]: time="2024-10-08T19:46:07.021934699Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 8 19:46:07.022481 containerd[1556]: time="2024-10-08T19:46:07.022019286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 8 19:46:07.022481 containerd[1556]: time="2024-10-08T19:46:07.022064388Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:46:07.022481 containerd[1556]: time="2024-10-08T19:46:07.022076600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 8 19:46:07.022481 containerd[1556]: time="2024-10-08T19:46:07.022131268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:46:07.022481 containerd[1556]: time="2024-10-08T19:46:07.022307077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 8 19:46:07.022481 containerd[1556]: time="2024-10-08T19:46:07.022411448Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 8 19:46:07.022481 containerd[1556]: time="2024-10-08T19:46:07.022422153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:46:07.022820 containerd[1556]: time="2024-10-08T19:46:07.022796771Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:46:07.022876 containerd[1556]: time="2024-10-08T19:46:07.022863855Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Oct 8 19:46:07.022997 containerd[1556]: time="2024-10-08T19:46:07.022978239Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 8 19:46:07.023052 containerd[1556]: time="2024-10-08T19:46:07.023040030Z" level=info msg="metadata content store policy set" policy=shared Oct 8 19:46:07.026330 containerd[1556]: time="2024-10-08T19:46:07.026304451Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 8 19:46:07.026418 containerd[1556]: time="2024-10-08T19:46:07.026405036Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 8 19:46:07.026499 containerd[1556]: time="2024-10-08T19:46:07.026486285Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 8 19:46:07.026567 containerd[1556]: time="2024-10-08T19:46:07.026554956Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 8 19:46:07.026659 containerd[1556]: time="2024-10-08T19:46:07.026647806Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 8 19:46:07.026733 containerd[1556]: time="2024-10-08T19:46:07.026719368Z" level=info msg="NRI interface is disabled by configuration." Oct 8 19:46:07.026797 containerd[1556]: time="2024-10-08T19:46:07.026782951Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 8 19:46:07.026966 containerd[1556]: time="2024-10-08T19:46:07.026945571Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 8 19:46:07.027388 containerd[1556]: time="2024-10-08T19:46:07.027018109Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Oct 8 19:46:07.027388 containerd[1556]: time="2024-10-08T19:46:07.027036183Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 8 19:46:07.027388 containerd[1556]: time="2024-10-08T19:46:07.027050064Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 8 19:46:07.027388 containerd[1556]: time="2024-10-08T19:46:07.027064148Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 8 19:46:07.027388 containerd[1556]: time="2024-10-08T19:46:07.027080471Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 8 19:46:07.027388 containerd[1556]: time="2024-10-08T19:46:07.027093700Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 8 19:46:07.027388 containerd[1556]: time="2024-10-08T19:46:07.027106034Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 8 19:46:07.027388 containerd[1556]: time="2024-10-08T19:46:07.027120119Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 8 19:46:07.027388 containerd[1556]: time="2024-10-08T19:46:07.027135383Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 8 19:46:07.027388 containerd[1556]: time="2024-10-08T19:46:07.027154759Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 8 19:46:07.027388 containerd[1556]: time="2024-10-08T19:46:07.027167256Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Oct 8 19:46:07.027388 containerd[1556]: time="2024-10-08T19:46:07.027262508Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 8 19:46:07.027935 containerd[1556]: time="2024-10-08T19:46:07.027914416Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 8 19:46:07.028030 containerd[1556]: time="2024-10-08T19:46:07.028014267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 8 19:46:07.028098 containerd[1556]: time="2024-10-08T19:46:07.028082735Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 8 19:46:07.028161 containerd[1556]: time="2024-10-08T19:46:07.028149819Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 8 19:46:07.028318 containerd[1556]: time="2024-10-08T19:46:07.028304461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 8 19:46:07.028380 containerd[1556]: time="2024-10-08T19:46:07.028361694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 8 19:46:07.028457 containerd[1556]: time="2024-10-08T19:46:07.028439849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 8 19:46:07.029710 containerd[1556]: time="2024-10-08T19:46:07.028512998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 8 19:46:07.029710 containerd[1556]: time="2024-10-08T19:46:07.028531601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 8 19:46:07.029710 containerd[1556]: time="2024-10-08T19:46:07.028546825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Oct 8 19:46:07.029710 containerd[1556]: time="2024-10-08T19:46:07.028563270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 8 19:46:07.029710 containerd[1556]: time="2024-10-08T19:46:07.028577476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 8 19:46:07.029710 containerd[1556]: time="2024-10-08T19:46:07.028590869Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 8 19:46:07.029710 containerd[1556]: time="2024-10-08T19:46:07.028712905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 8 19:46:07.029710 containerd[1556]: time="2024-10-08T19:46:07.028729717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 8 19:46:07.029710 containerd[1556]: time="2024-10-08T19:46:07.028741522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 8 19:46:07.029710 containerd[1556]: time="2024-10-08T19:46:07.028753367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 8 19:46:07.029710 containerd[1556]: time="2024-10-08T19:46:07.028765335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 8 19:46:07.029710 containerd[1556]: time="2024-10-08T19:46:07.028778442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 8 19:46:07.029710 containerd[1556]: time="2024-10-08T19:46:07.028790084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 8 19:46:07.029710 containerd[1556]: time="2024-10-08T19:46:07.028809989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 8 19:46:07.030000 containerd[1556]: time="2024-10-08T19:46:07.029127822Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 8 19:46:07.030000 containerd[1556]: time="2024-10-08T19:46:07.029184648Z" level=info msg="Connect containerd service" Oct 8 19:46:07.030000 containerd[1556]: time="2024-10-08T19:46:07.029210455Z" level=info msg="using legacy CRI server" Oct 8 19:46:07.030000 containerd[1556]: time="2024-10-08T19:46:07.029217579Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 8 19:46:07.030000 containerd[1556]: time="2024-10-08T19:46:07.029374053Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 19:46:07.030517 containerd[1556]: time="2024-10-08T19:46:07.030491881Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 19:46:07.030610 containerd[1556]: time="2024-10-08T19:46:07.030596740Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 8 19:46:07.030791 containerd[1556]: time="2024-10-08T19:46:07.030774259Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 19:46:07.030881 containerd[1556]: time="2024-10-08T19:46:07.030862632Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 8 19:46:07.030941 containerd[1556]: time="2024-10-08T19:46:07.030925685Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 19:46:07.031282 containerd[1556]: time="2024-10-08T19:46:07.030741531Z" level=info msg="Start subscribing containerd event" Oct 8 19:46:07.031282 containerd[1556]: time="2024-10-08T19:46:07.031263708Z" level=info msg="Start recovering state" Oct 8 19:46:07.031349 containerd[1556]: time="2024-10-08T19:46:07.031331036Z" level=info msg="Start event monitor" Oct 8 19:46:07.031349 containerd[1556]: time="2024-10-08T19:46:07.031344103Z" level=info msg="Start snapshots syncer" Oct 8 19:46:07.031384 containerd[1556]: time="2024-10-08T19:46:07.031353506Z" level=info msg="Start cni network conf syncer for default" Oct 8 19:46:07.031384 containerd[1556]: time="2024-10-08T19:46:07.031361240Z" level=info msg="Start streaming server" Oct 8 19:46:07.031739 containerd[1556]: time="2024-10-08T19:46:07.031719616Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 19:46:07.031865 containerd[1556]: time="2024-10-08T19:46:07.031843403Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 19:46:07.032031 containerd[1556]: time="2024-10-08T19:46:07.032017055Z" level=info msg="containerd successfully booted in 0.047544s" Oct 8 19:46:07.032131 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 19:46:07.107570 sshd_keygen[1542]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 8 19:46:07.127221 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Oct 8 19:46:07.140028 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 8 19:46:07.145165 systemd[1]: issuegen.service: Deactivated successfully. Oct 8 19:46:07.145467 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 8 19:46:07.148790 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 8 19:46:07.152321 tar[1550]: linux-arm64/LICENSE Oct 8 19:46:07.152399 tar[1550]: linux-arm64/README.md Oct 8 19:46:07.161508 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 8 19:46:07.163299 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 8 19:46:07.184791 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 8 19:46:07.187159 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 8 19:46:07.188519 systemd[1]: Reached target getty.target - Login Prompts. Oct 8 19:46:07.685680 systemd-networkd[1237]: eth0: Gained IPv6LL Oct 8 19:46:07.688228 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 8 19:46:07.690005 systemd[1]: Reached target network-online.target - Network is Online. Oct 8 19:46:07.702771 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 8 19:46:07.705262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:46:07.707520 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 8 19:46:07.722403 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 8 19:46:07.722903 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 8 19:46:07.724852 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 8 19:46:07.725878 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 8 19:46:08.178856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:46:08.180532 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 19:46:08.181939 systemd[1]: Startup finished in 6.373s (kernel) + 3.691s (userspace) = 10.064s. Oct 8 19:46:08.183053 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:46:08.687654 kubelet[1661]: E1008 19:46:08.687560 1661 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:46:08.690194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:46:08.690380 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:46:11.526648 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 19:46:11.537668 systemd[1]: Started sshd@0-10.0.0.91:22-10.0.0.1:48022.service - OpenSSH per-connection server daemon (10.0.0.1:48022). Oct 8 19:46:11.591953 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 48022 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:46:11.598048 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:46:11.608696 systemd-logind[1533]: New session 1 of user core. Oct 8 19:46:11.609655 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 19:46:11.619662 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 19:46:11.632868 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 8 19:46:11.634892 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Oct 8 19:46:11.641977 (systemd)[1681]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:46:11.716479 systemd[1681]: Queued start job for default target default.target. Oct 8 19:46:11.717080 systemd[1681]: Created slice app.slice - User Application Slice. Oct 8 19:46:11.717121 systemd[1681]: Reached target paths.target - Paths. Oct 8 19:46:11.717131 systemd[1681]: Reached target timers.target - Timers. Oct 8 19:46:11.729561 systemd[1681]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 19:46:11.735196 systemd[1681]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 19:46:11.735253 systemd[1681]: Reached target sockets.target - Sockets. Oct 8 19:46:11.735264 systemd[1681]: Reached target basic.target - Basic System. Oct 8 19:46:11.735296 systemd[1681]: Reached target default.target - Main User Target. Oct 8 19:46:11.735319 systemd[1681]: Startup finished in 88ms. Oct 8 19:46:11.735913 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 19:46:11.737383 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 19:46:11.801766 systemd[1]: Started sshd@1-10.0.0.91:22-10.0.0.1:48038.service - OpenSSH per-connection server daemon (10.0.0.1:48038). Oct 8 19:46:11.832502 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 48038 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:46:11.833768 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:46:11.838172 systemd-logind[1533]: New session 2 of user core. Oct 8 19:46:11.848670 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 8 19:46:11.901646 sshd[1693]: pam_unix(sshd:session): session closed for user core Oct 8 19:46:11.924761 systemd[1]: Started sshd@2-10.0.0.91:22-10.0.0.1:48050.service - OpenSSH per-connection server daemon (10.0.0.1:48050). Oct 8 19:46:11.925187 systemd[1]: sshd@1-10.0.0.91:22-10.0.0.1:48038.service: Deactivated successfully. 
Oct 8 19:46:11.926776 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 19:46:11.928109 systemd-logind[1533]: Session 2 logged out. Waiting for processes to exit. Oct 8 19:46:11.929309 systemd-logind[1533]: Removed session 2. Oct 8 19:46:11.952816 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 48050 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:46:11.953967 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:46:11.959518 systemd-logind[1533]: New session 3 of user core. Oct 8 19:46:11.973251 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 19:46:12.025294 sshd[1699]: pam_unix(sshd:session): session closed for user core Oct 8 19:46:12.033690 systemd[1]: Started sshd@3-10.0.0.91:22-10.0.0.1:48064.service - OpenSSH per-connection server daemon (10.0.0.1:48064). Oct 8 19:46:12.034090 systemd[1]: sshd@2-10.0.0.91:22-10.0.0.1:48050.service: Deactivated successfully. Oct 8 19:46:12.035647 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 19:46:12.036351 systemd-logind[1533]: Session 3 logged out. Waiting for processes to exit. Oct 8 19:46:12.037511 systemd-logind[1533]: Removed session 3. Oct 8 19:46:12.062190 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 48064 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:46:12.063268 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:46:12.067477 systemd-logind[1533]: New session 4 of user core. Oct 8 19:46:12.073683 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 8 19:46:12.127228 sshd[1706]: pam_unix(sshd:session): session closed for user core Oct 8 19:46:12.136839 systemd[1]: Started sshd@4-10.0.0.91:22-10.0.0.1:48072.service - OpenSSH per-connection server daemon (10.0.0.1:48072). Oct 8 19:46:12.137785 systemd[1]: sshd@3-10.0.0.91:22-10.0.0.1:48064.service: Deactivated successfully. 
Oct 8 19:46:12.139695 systemd[1]: session-4.scope: Deactivated successfully.
Oct 8 19:46:12.140397 systemd-logind[1533]: Session 4 logged out. Waiting for processes to exit.
Oct 8 19:46:12.141663 systemd-logind[1533]: Removed session 4.
Oct 8 19:46:12.165075 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 48072 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:46:12.166378 sshd[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:46:12.170482 systemd-logind[1533]: New session 5 of user core.
Oct 8 19:46:12.183673 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 8 19:46:12.243810 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 8 19:46:12.244044 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 19:46:12.256185 sudo[1721]: pam_unix(sudo:session): session closed for user root
Oct 8 19:46:12.257907 sshd[1714]: pam_unix(sshd:session): session closed for user core
Oct 8 19:46:12.272897 systemd[1]: Started sshd@5-10.0.0.91:22-10.0.0.1:48084.service - OpenSSH per-connection server daemon (10.0.0.1:48084).
Oct 8 19:46:12.274074 systemd[1]: sshd@4-10.0.0.91:22-10.0.0.1:48072.service: Deactivated successfully.
Oct 8 19:46:12.275936 systemd[1]: session-5.scope: Deactivated successfully.
Oct 8 19:46:12.276733 systemd-logind[1533]: Session 5 logged out. Waiting for processes to exit.
Oct 8 19:46:12.278599 systemd-logind[1533]: Removed session 5.
Oct 8 19:46:12.301714 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 48084 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:46:12.302937 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:46:12.307622 systemd-logind[1533]: New session 6 of user core.
Oct 8 19:46:12.318692 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 8 19:46:12.370496 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 8 19:46:12.371035 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 19:46:12.374043 sudo[1731]: pam_unix(sudo:session): session closed for user root
Oct 8 19:46:12.378502 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 8 19:46:12.378737 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 19:46:12.395659 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Oct 8 19:46:12.397025 auditctl[1734]: No rules
Oct 8 19:46:12.397872 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 8 19:46:12.398113 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Oct 8 19:46:12.399932 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:46:12.423436 augenrules[1753]: No rules
Oct 8 19:46:12.424804 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:46:12.426122 sudo[1730]: pam_unix(sudo:session): session closed for user root
Oct 8 19:46:12.427934 sshd[1723]: pam_unix(sshd:session): session closed for user core
Oct 8 19:46:12.442655 systemd[1]: Started sshd@6-10.0.0.91:22-10.0.0.1:44436.service - OpenSSH per-connection server daemon (10.0.0.1:44436).
Oct 8 19:46:12.443018 systemd[1]: sshd@5-10.0.0.91:22-10.0.0.1:48084.service: Deactivated successfully.
Oct 8 19:46:12.444706 systemd-logind[1533]: Session 6 logged out. Waiting for processes to exit.
Oct 8 19:46:12.445292 systemd[1]: session-6.scope: Deactivated successfully.
Oct 8 19:46:12.446805 systemd-logind[1533]: Removed session 6.
Oct 8 19:46:12.472924 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 44436 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:46:12.474039 sshd[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:46:12.478206 systemd-logind[1533]: New session 7 of user core.
Oct 8 19:46:12.493645 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 8 19:46:12.544053 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 8 19:46:12.544298 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 19:46:12.648801 (dockerd)[1777]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 8 19:46:12.649094 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 8 19:46:12.881441 dockerd[1777]: time="2024-10-08T19:46:12.881373662Z" level=info msg="Starting up"
Oct 8 19:46:13.066103 dockerd[1777]: time="2024-10-08T19:46:13.065985373Z" level=info msg="Loading containers: start."
Oct 8 19:46:13.146477 kernel: Initializing XFRM netlink socket
Oct 8 19:46:13.216325 systemd-networkd[1237]: docker0: Link UP
Oct 8 19:46:13.233646 dockerd[1777]: time="2024-10-08T19:46:13.233605044Z" level=info msg="Loading containers: done."
Oct 8 19:46:13.287189 dockerd[1777]: time="2024-10-08T19:46:13.286708526Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 8 19:46:13.287189 dockerd[1777]: time="2024-10-08T19:46:13.286889549Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Oct 8 19:46:13.287189 dockerd[1777]: time="2024-10-08T19:46:13.287000904Z" level=info msg="Daemon has completed initialization"
Oct 8 19:46:13.314181 dockerd[1777]: time="2024-10-08T19:46:13.314124197Z" level=info msg="API listen on /run/docker.sock"
Oct 8 19:46:13.314979 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 8 19:46:13.924005 containerd[1556]: time="2024-10-08T19:46:13.923963978Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\""
Oct 8 19:46:14.537322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3596358591.mount: Deactivated successfully.
Oct 8 19:46:16.375169 containerd[1556]: time="2024-10-08T19:46:16.374941932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:16.379599 containerd[1556]: time="2024-10-08T19:46:16.379560031Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=32286060"
Oct 8 19:46:16.380942 containerd[1556]: time="2024-10-08T19:46:16.380858364Z" level=info msg="ImageCreate event name:\"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:16.384340 containerd[1556]: time="2024-10-08T19:46:16.384289758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:16.385065 containerd[1556]: time="2024-10-08T19:46:16.385017399Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"32282858\" in 2.461008047s"
Oct 8 19:46:16.385065 containerd[1556]: time="2024-10-08T19:46:16.385061432Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\""
Oct 8 19:46:16.403456 containerd[1556]: time="2024-10-08T19:46:16.403406795Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\""
Oct 8 19:46:18.186118 containerd[1556]: time="2024-10-08T19:46:18.186030635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:18.186660 containerd[1556]: time="2024-10-08T19:46:18.186622510Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=29374206"
Oct 8 19:46:18.187492 containerd[1556]: time="2024-10-08T19:46:18.187456485Z" level=info msg="ImageCreate event name:\"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:18.191220 containerd[1556]: time="2024-10-08T19:46:18.191159389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:18.192278 containerd[1556]: time="2024-10-08T19:46:18.192241167Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"30862018\" in 1.788777806s"
Oct 8 19:46:18.192325 containerd[1556]: time="2024-10-08T19:46:18.192293017Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\""
Oct 8 19:46:18.210348 containerd[1556]: time="2024-10-08T19:46:18.210309242Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\""
Oct 8 19:46:18.940648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 8 19:46:18.949589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:46:19.041902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:46:19.045706 (kubelet)[2002]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:46:19.084833 kubelet[2002]: E1008 19:46:19.084778 2002 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:46:19.088145 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:46:19.088345 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:46:19.758709 containerd[1556]: time="2024-10-08T19:46:19.758643222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:19.759548 containerd[1556]: time="2024-10-08T19:46:19.759520690Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=15751219"
Oct 8 19:46:19.762965 containerd[1556]: time="2024-10-08T19:46:19.762903032Z" level=info msg="ImageCreate event name:\"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:19.765669 containerd[1556]: time="2024-10-08T19:46:19.765638282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:19.766806 containerd[1556]: time="2024-10-08T19:46:19.766779846Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"17239049\" in 1.556431572s"
Oct 8 19:46:19.766853 containerd[1556]: time="2024-10-08T19:46:19.766812402Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\""
Oct 8 19:46:19.785036 containerd[1556]: time="2024-10-08T19:46:19.784964628Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\""
Oct 8 19:46:20.795726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1224453182.mount: Deactivated successfully.
Oct 8 19:46:21.563667 containerd[1556]: time="2024-10-08T19:46:21.563620553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:21.564351 containerd[1556]: time="2024-10-08T19:46:21.564078436Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=25254040"
Oct 8 19:46:21.564905 containerd[1556]: time="2024-10-08T19:46:21.564874196Z" level=info msg="ImageCreate event name:\"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:21.567263 containerd[1556]: time="2024-10-08T19:46:21.567236849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:21.567946 containerd[1556]: time="2024-10-08T19:46:21.567914128Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"25253057\" in 1.782913259s"
Oct 8 19:46:21.568056 containerd[1556]: time="2024-10-08T19:46:21.567947779Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\""
Oct 8 19:46:21.586000 containerd[1556]: time="2024-10-08T19:46:21.585962160Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 8 19:46:22.213398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1820843733.mount: Deactivated successfully.
Oct 8 19:46:22.773865 containerd[1556]: time="2024-10-08T19:46:22.773819150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:22.774345 containerd[1556]: time="2024-10-08T19:46:22.774288946Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Oct 8 19:46:22.775185 containerd[1556]: time="2024-10-08T19:46:22.775158613Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:22.781149 containerd[1556]: time="2024-10-08T19:46:22.779395480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:22.781149 containerd[1556]: time="2024-10-08T19:46:22.780615499Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.194619209s"
Oct 8 19:46:22.781149 containerd[1556]: time="2024-10-08T19:46:22.780647454Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Oct 8 19:46:22.800900 containerd[1556]: time="2024-10-08T19:46:22.800863890Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Oct 8 19:46:23.232765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount903113164.mount: Deactivated successfully.
Oct 8 19:46:23.236398 containerd[1556]: time="2024-10-08T19:46:23.236057921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:23.237247 containerd[1556]: time="2024-10-08T19:46:23.237213765Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Oct 8 19:46:23.238287 containerd[1556]: time="2024-10-08T19:46:23.238217893Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:23.240409 containerd[1556]: time="2024-10-08T19:46:23.240354978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:23.241370 containerd[1556]: time="2024-10-08T19:46:23.241296135Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 440.392032ms"
Oct 8 19:46:23.241370 containerd[1556]: time="2024-10-08T19:46:23.241325636Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Oct 8 19:46:23.259662 containerd[1556]: time="2024-10-08T19:46:23.259618760Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Oct 8 19:46:23.784305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1804052197.mount: Deactivated successfully.
Oct 8 19:46:25.846439 containerd[1556]: time="2024-10-08T19:46:25.846212533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:25.846778 containerd[1556]: time="2024-10-08T19:46:25.846672025Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788"
Oct 8 19:46:25.847671 containerd[1556]: time="2024-10-08T19:46:25.847632234Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:25.850699 containerd[1556]: time="2024-10-08T19:46:25.850656732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:25.852030 containerd[1556]: time="2024-10-08T19:46:25.851979079Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.592325451s"
Oct 8 19:46:25.852030 containerd[1556]: time="2024-10-08T19:46:25.852024151Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Oct 8 19:46:29.338727 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 8 19:46:29.351843 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:46:29.520603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:46:29.523145 (kubelet)[2227]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:46:29.564073 kubelet[2227]: E1008 19:46:29.564016 2227 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:46:29.566894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:46:29.567083 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:46:31.415083 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:46:31.428638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:46:31.450257 systemd[1]: Reloading requested from client PID 2244 ('systemctl') (unit session-7.scope)...
Oct 8 19:46:31.450273 systemd[1]: Reloading...
Oct 8 19:46:31.513443 zram_generator::config[2288]: No configuration found.
Oct 8 19:46:31.624202 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:46:31.683121 systemd[1]: Reloading finished in 232 ms.
Oct 8 19:46:31.723913 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 8 19:46:31.723976 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 8 19:46:31.724243 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:46:31.726616 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:46:31.814147 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:46:31.818743 (kubelet)[2339]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 8 19:46:31.858499 kubelet[2339]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:46:31.858499 kubelet[2339]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 8 19:46:31.858499 kubelet[2339]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:46:31.858871 kubelet[2339]: I1008 19:46:31.858548 2339 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 19:46:32.430391 kubelet[2339]: I1008 19:46:32.430349 2339 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 8 19:46:32.430391 kubelet[2339]: I1008 19:46:32.430383 2339 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 19:46:32.430675 kubelet[2339]: I1008 19:46:32.430623 2339 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 8 19:46:32.461732 kubelet[2339]: I1008 19:46:32.461557 2339 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 19:46:32.463539 kubelet[2339]: E1008 19:46:32.463519 2339 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.91:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.91:6443: connect: connection refused
Oct 8 19:46:32.472302 kubelet[2339]: I1008 19:46:32.472280 2339 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 19:46:32.473393 kubelet[2339]: I1008 19:46:32.473357 2339 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 19:46:32.473608 kubelet[2339]: I1008 19:46:32.473576 2339 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 8 19:46:32.473608 kubelet[2339]: I1008 19:46:32.473603 2339 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 19:46:32.473608 kubelet[2339]: I1008 19:46:32.473612 2339 container_manager_linux.go:301] "Creating device plugin manager"
Oct 8 19:46:32.474739 kubelet[2339]: I1008 19:46:32.474698 2339 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:46:32.476802 kubelet[2339]: I1008 19:46:32.476771 2339 kubelet.go:396] "Attempting to sync node with API server"
Oct 8 19:46:32.476802 kubelet[2339]: I1008 19:46:32.476800 2339 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 19:46:32.476853 kubelet[2339]: I1008 19:46:32.476820 2339 kubelet.go:312] "Adding apiserver pod source"
Oct 8 19:46:32.476853 kubelet[2339]: I1008 19:46:32.476835 2339 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 19:46:32.477407 kubelet[2339]: W1008 19:46:32.477230 2339 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Oct 8 19:46:32.477407 kubelet[2339]: E1008 19:46:32.477284 2339 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Oct 8 19:46:32.477516 kubelet[2339]: W1008 19:46:32.477475 2339 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Oct 8 19:46:32.477516 kubelet[2339]: E1008 19:46:32.477505 2339 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Oct 8 19:46:32.479123 kubelet[2339]: I1008 19:46:32.479090 2339 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Oct 8 19:46:32.479817 kubelet[2339]: I1008 19:46:32.479802 2339 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 19:46:32.480443 kubelet[2339]: W1008 19:46:32.480409 2339 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 8 19:46:32.481231 kubelet[2339]: I1008 19:46:32.481211 2339 server.go:1256] "Started kubelet"
Oct 8 19:46:32.483561 kubelet[2339]: I1008 19:46:32.483015 2339 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 19:46:32.483561 kubelet[2339]: I1008 19:46:32.483063 2339 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 19:46:32.483561 kubelet[2339]: I1008 19:46:32.483238 2339 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 19:46:32.483561 kubelet[2339]: I1008 19:46:32.483024 2339 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 19:46:32.487437 kubelet[2339]: I1008 19:46:32.485106 2339 server.go:461] "Adding debug handlers to kubelet server"
Oct 8 19:46:32.487722 kubelet[2339]: I1008 19:46:32.487701 2339 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 8 19:46:32.488279 kubelet[2339]: I1008 19:46:32.488128 2339 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 8 19:46:32.488279 kubelet[2339]: E1008 19:46:32.488146 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="200ms"
Oct 8 19:46:32.488279 kubelet[2339]: I1008 19:46:32.488203 2339 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 8 19:46:32.488755 kubelet[2339]: W1008 19:46:32.488716 2339 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Oct 8 19:46:32.488867 kubelet[2339]: E1008 19:46:32.488845 2339 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Oct 8 19:46:32.489399 kubelet[2339]: E1008 19:46:32.489378 2339 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 8 19:46:32.490290 kubelet[2339]: I1008 19:46:32.490266 2339 factory.go:221] Registration of the systemd container factory successfully
Oct 8 19:46:32.490564 kubelet[2339]: I1008 19:46:32.490355 2339 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 19:46:32.491707 kubelet[2339]: I1008 19:46:32.491669 2339 factory.go:221] Registration of the containerd container factory successfully
Oct 8 19:46:32.492085 kubelet[2339]: E1008 19:46:32.492063 2339 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.91:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.91:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fc91ea4ba947a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:46:32.481187745 +0000 UTC m=+0.659336384,LastTimestamp:2024-10-08 19:46:32.481187745 +0000 UTC m=+0.659336384,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 8 19:46:32.497567 kubelet[2339]: I1008 19:46:32.497535 2339 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 19:46:32.499362 kubelet[2339]: I1008 19:46:32.498569 2339 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 8 19:46:32.499362 kubelet[2339]: I1008 19:46:32.498592 2339 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 19:46:32.499362 kubelet[2339]: I1008 19:46:32.498609 2339 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 8 19:46:32.499362 kubelet[2339]: E1008 19:46:32.498667 2339 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 19:46:32.504179 kubelet[2339]: W1008 19:46:32.504077 2339 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Oct 8 19:46:32.504179 kubelet[2339]: E1008 19:46:32.504127 2339 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Oct 8 19:46:32.510044 kubelet[2339]: I1008 19:46:32.510027 2339 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 8 19:46:32.510044 kubelet[2339]: I1008 19:46:32.510044 2339 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 8 19:46:32.510145 kubelet[2339]: I1008 19:46:32.510061 2339 state_mem.go:36] "Initialized new in-memory state
store" Oct 8 19:46:32.568837 kubelet[2339]: I1008 19:46:32.568806 2339 policy_none.go:49] "None policy: Start" Oct 8 19:46:32.570180 kubelet[2339]: I1008 19:46:32.569754 2339 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:46:32.570180 kubelet[2339]: I1008 19:46:32.569808 2339 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:46:32.574666 kubelet[2339]: I1008 19:46:32.574622 2339 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:46:32.574950 kubelet[2339]: I1008 19:46:32.574925 2339 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:46:32.576137 kubelet[2339]: E1008 19:46:32.576112 2339 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 8 19:46:32.589268 kubelet[2339]: I1008 19:46:32.589239 2339 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:46:32.591517 kubelet[2339]: E1008 19:46:32.591497 2339 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Oct 8 19:46:32.599645 kubelet[2339]: I1008 19:46:32.599624 2339 topology_manager.go:215] "Topology Admit Handler" podUID="300b32855389052e2eba1c6093ae1a0e" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 8 19:46:32.600614 kubelet[2339]: I1008 19:46:32.600591 2339 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 8 19:46:32.601443 kubelet[2339]: I1008 19:46:32.601396 2339 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 8 19:46:32.689464 kubelet[2339]: E1008 19:46:32.689336 2339 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="400ms" Oct 8 19:46:32.789829 kubelet[2339]: I1008 19:46:32.789803 2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:46:32.790069 kubelet[2339]: I1008 19:46:32.789961 2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 8 19:46:32.790069 kubelet[2339]: I1008 19:46:32.790027 2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/300b32855389052e2eba1c6093ae1a0e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"300b32855389052e2eba1c6093ae1a0e\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:46:32.790069 kubelet[2339]: I1008 19:46:32.790071 2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/300b32855389052e2eba1c6093ae1a0e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"300b32855389052e2eba1c6093ae1a0e\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:46:32.790150 kubelet[2339]: I1008 19:46:32.790098 2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:46:32.790150 kubelet[2339]: I1008 19:46:32.790118 2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:46:32.790150 kubelet[2339]: I1008 19:46:32.790149 2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:46:32.790205 kubelet[2339]: I1008 19:46:32.790170 2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:46:32.790205 kubelet[2339]: I1008 19:46:32.790187 2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/300b32855389052e2eba1c6093ae1a0e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"300b32855389052e2eba1c6093ae1a0e\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:46:32.792837 kubelet[2339]: I1008 19:46:32.792787 2339 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 
19:46:32.793142 kubelet[2339]: E1008 19:46:32.793124 2339 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Oct 8 19:46:32.905835 kubelet[2339]: E1008 19:46:32.905795 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:46:32.906650 containerd[1556]: time="2024-10-08T19:46:32.906403566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,}" Oct 8 19:46:32.907248 kubelet[2339]: E1008 19:46:32.906477 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:46:32.907248 kubelet[2339]: E1008 19:46:32.906627 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:46:32.907317 containerd[1556]: time="2024-10-08T19:46:32.906800134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,}" Oct 8 19:46:32.907317 containerd[1556]: time="2024-10-08T19:46:32.906922531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:300b32855389052e2eba1c6093ae1a0e,Namespace:kube-system,Attempt:0,}" Oct 8 19:46:33.092199 kubelet[2339]: E1008 19:46:33.090928 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="800ms" Oct 8 
19:46:33.195479 kubelet[2339]: I1008 19:46:33.195445 2339 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:46:33.195859 kubelet[2339]: E1008 19:46:33.195830 2339 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Oct 8 19:46:33.369899 kubelet[2339]: W1008 19:46:33.369785 2339 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Oct 8 19:46:33.369899 kubelet[2339]: E1008 19:46:33.369826 2339 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Oct 8 19:46:33.382295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2765012276.mount: Deactivated successfully. 
Oct 8 19:46:33.387388 containerd[1556]: time="2024-10-08T19:46:33.387326239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:46:33.388391 containerd[1556]: time="2024-10-08T19:46:33.388359405Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:46:33.388726 containerd[1556]: time="2024-10-08T19:46:33.388701913Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:46:33.389341 containerd[1556]: time="2024-10-08T19:46:33.389303322Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Oct 8 19:46:33.389924 containerd[1556]: time="2024-10-08T19:46:33.389851022Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:46:33.390662 containerd[1556]: time="2024-10-08T19:46:33.390605516Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:46:33.391196 containerd[1556]: time="2024-10-08T19:46:33.391164942Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:46:33.393624 containerd[1556]: time="2024-10-08T19:46:33.393573022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:46:33.395920 
containerd[1556]: time="2024-10-08T19:46:33.395890412Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 489.377097ms" Oct 8 19:46:33.398108 containerd[1556]: time="2024-10-08T19:46:33.397981958Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 490.991865ms" Oct 8 19:46:33.398763 containerd[1556]: time="2024-10-08T19:46:33.398736491Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 491.872117ms" Oct 8 19:46:33.482684 kubelet[2339]: W1008 19:46:33.477928 2339 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Oct 8 19:46:33.482684 kubelet[2339]: E1008 19:46:33.477995 2339 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Oct 8 19:46:33.559620 containerd[1556]: time="2024-10-08T19:46:33.554383498Z" level=info msg="loading plugin 
\"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:46:33.559620 containerd[1556]: time="2024-10-08T19:46:33.559429382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:46:33.559620 containerd[1556]: time="2024-10-08T19:46:33.559448593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:46:33.559620 containerd[1556]: time="2024-10-08T19:46:33.559462601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:46:33.559878 containerd[1556]: time="2024-10-08T19:46:33.554490196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:46:33.560057 containerd[1556]: time="2024-10-08T19:46:33.554940723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:46:33.560057 containerd[1556]: time="2024-10-08T19:46:33.559813353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:46:33.560057 containerd[1556]: time="2024-10-08T19:46:33.559826520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:46:33.560057 containerd[1556]: time="2024-10-08T19:46:33.559835405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:46:33.560057 containerd[1556]: time="2024-10-08T19:46:33.559692287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:46:33.560057 containerd[1556]: time="2024-10-08T19:46:33.559712618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:46:33.560057 containerd[1556]: time="2024-10-08T19:46:33.559723023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:46:33.607576 containerd[1556]: time="2024-10-08T19:46:33.607489357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2707cdaa391502eec0644e0808deccfbed1b781dfd16a1dae09addef2b6b5d1\"" Oct 8 19:46:33.607848 containerd[1556]: time="2024-10-08T19:46:33.607779196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c717d8107ddfbd2c056a6b1fdf224babf4388717922192c4a9a6eb08ceae6a0\"" Oct 8 19:46:33.608141 containerd[1556]: time="2024-10-08T19:46:33.608119022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:300b32855389052e2eba1c6093ae1a0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1e7fd3f1643804c62619e68c20c0409b652ca5dd4ae39e9fb8d18d6c54d47f2\"" Oct 8 19:46:33.608836 kubelet[2339]: E1008 19:46:33.608814 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:46:33.609236 kubelet[2339]: E1008 19:46:33.609017 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:46:33.609236 kubelet[2339]: E1008 19:46:33.609188 2339 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:46:33.611604 containerd[1556]: time="2024-10-08T19:46:33.611578197Z" level=info msg="CreateContainer within sandbox \"4c717d8107ddfbd2c056a6b1fdf224babf4388717922192c4a9a6eb08ceae6a0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 19:46:33.611791 containerd[1556]: time="2024-10-08T19:46:33.611587402Z" level=info msg="CreateContainer within sandbox \"a1e7fd3f1643804c62619e68c20c0409b652ca5dd4ae39e9fb8d18d6c54d47f2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 19:46:33.611869 containerd[1556]: time="2024-10-08T19:46:33.611603851Z" level=info msg="CreateContainer within sandbox \"d2707cdaa391502eec0644e0808deccfbed1b781dfd16a1dae09addef2b6b5d1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 19:46:33.624222 containerd[1556]: time="2024-10-08T19:46:33.624036224Z" level=info msg="CreateContainer within sandbox \"4c717d8107ddfbd2c056a6b1fdf224babf4388717922192c4a9a6eb08ceae6a0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7feff0b466603b6b8a694f5da7d64a25e29d03924127f3e9c1c383a56045bb1d\"" Oct 8 19:46:33.624877 containerd[1556]: time="2024-10-08T19:46:33.624777110Z" level=info msg="StartContainer for \"7feff0b466603b6b8a694f5da7d64a25e29d03924127f3e9c1c383a56045bb1d\"" Oct 8 19:46:33.629097 containerd[1556]: time="2024-10-08T19:46:33.629050731Z" level=info msg="CreateContainer within sandbox \"d2707cdaa391502eec0644e0808deccfbed1b781dfd16a1dae09addef2b6b5d1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"17b2e39ba2f962fe3058d24e4e8dc8ad15303be1690dd60c647ad36cdc64c263\"" Oct 8 19:46:33.629688 containerd[1556]: time="2024-10-08T19:46:33.629619963Z" level=info msg="StartContainer for 
\"17b2e39ba2f962fe3058d24e4e8dc8ad15303be1690dd60c647ad36cdc64c263\"" Oct 8 19:46:33.631028 containerd[1556]: time="2024-10-08T19:46:33.630995277Z" level=info msg="CreateContainer within sandbox \"a1e7fd3f1643804c62619e68c20c0409b652ca5dd4ae39e9fb8d18d6c54d47f2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9e14e237688a4759b61f037dde7e71ea125aff2923767203c210271f6be3fe0f\"" Oct 8 19:46:33.632641 containerd[1556]: time="2024-10-08T19:46:33.631368962Z" level=info msg="StartContainer for \"9e14e237688a4759b61f037dde7e71ea125aff2923767203c210271f6be3fe0f\"" Oct 8 19:46:33.693062 containerd[1556]: time="2024-10-08T19:46:33.692909803Z" level=info msg="StartContainer for \"7feff0b466603b6b8a694f5da7d64a25e29d03924127f3e9c1c383a56045bb1d\" returns successfully" Oct 8 19:46:33.699710 containerd[1556]: time="2024-10-08T19:46:33.699677271Z" level=info msg="StartContainer for \"17b2e39ba2f962fe3058d24e4e8dc8ad15303be1690dd60c647ad36cdc64c263\" returns successfully" Oct 8 19:46:33.699790 containerd[1556]: time="2024-10-08T19:46:33.699730700Z" level=info msg="StartContainer for \"9e14e237688a4759b61f037dde7e71ea125aff2923767203c210271f6be3fe0f\" returns successfully" Oct 8 19:46:33.997919 kubelet[2339]: I1008 19:46:33.997813 2339 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:46:34.513748 kubelet[2339]: E1008 19:46:34.513719 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:46:34.515858 kubelet[2339]: E1008 19:46:34.515837 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:46:34.517004 kubelet[2339]: E1008 19:46:34.516982 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:46:35.517755 kubelet[2339]: E1008 19:46:35.517724 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:46:35.553427 kubelet[2339]: E1008 19:46:35.550930 2339 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 8 19:46:35.594369 kubelet[2339]: I1008 19:46:35.594315 2339 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 8 19:46:35.603979 kubelet[2339]: E1008 19:46:35.603930 2339 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:46:35.704446 kubelet[2339]: E1008 19:46:35.704400 2339 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:46:35.805160 kubelet[2339]: E1008 19:46:35.804852 2339 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:46:35.905438 kubelet[2339]: E1008 19:46:35.905387 2339 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:46:36.006076 kubelet[2339]: E1008 19:46:36.006045 2339 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:46:36.106921 kubelet[2339]: E1008 19:46:36.106875 2339 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:46:36.207485 kubelet[2339]: E1008 19:46:36.207449 2339 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:46:36.238644 kubelet[2339]: E1008 19:46:36.238585 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:46:36.308424 kubelet[2339]: E1008 19:46:36.308388 2339 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:46:36.409045 kubelet[2339]: E1008 19:46:36.408938 2339 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:46:36.509331 kubelet[2339]: E1008 19:46:36.509290 2339 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:46:36.564474 kubelet[2339]: E1008 19:46:36.564447 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:46:36.609743 kubelet[2339]: E1008 19:46:36.609691 2339 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:46:37.479808 kubelet[2339]: I1008 19:46:37.479716 2339 apiserver.go:52] "Watching apiserver" Oct 8 19:46:37.489207 kubelet[2339]: I1008 19:46:37.489174 2339 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 8 19:46:38.454373 systemd[1]: Reloading requested from client PID 2611 ('systemctl') (unit session-7.scope)... Oct 8 19:46:38.454392 systemd[1]: Reloading... Oct 8 19:46:38.505441 zram_generator::config[2648]: No configuration found. Oct 8 19:46:38.603840 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:46:38.668611 systemd[1]: Reloading finished in 213 ms. Oct 8 19:46:38.693400 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:46:38.702335 systemd[1]: kubelet.service: Deactivated successfully. 
Oct 8 19:46:38.702665 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:46:38.713797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:46:38.799873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:46:38.804595 (kubelet)[2700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:46:38.854732 kubelet[2700]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:46:38.854732 kubelet[2700]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:46:38.854732 kubelet[2700]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:46:38.855106 kubelet[2700]: I1008 19:46:38.854754 2700 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:46:38.861806 kubelet[2700]: I1008 19:46:38.861767 2700 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 8 19:46:38.861806 kubelet[2700]: I1008 19:46:38.861796 2700 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:46:38.861976 kubelet[2700]: I1008 19:46:38.861956 2700 server.go:919] "Client rotation is on, will bootstrap in background" Oct 8 19:46:38.863479 kubelet[2700]: I1008 19:46:38.863453 2700 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 8 19:46:38.865258 kubelet[2700]: I1008 19:46:38.865229 2700 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 19:46:38.873285 kubelet[2700]: I1008 19:46:38.870180 2700 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 19:46:38.873285 kubelet[2700]: I1008 19:46:38.870650 2700 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 19:46:38.873285 kubelet[2700]: I1008 19:46:38.870798 2700 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 8 19:46:38.873285 kubelet[2700]: I1008 19:46:38.870823 2700 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 19:46:38.873285 kubelet[2700]: I1008 19:46:38.870832 2700 container_manager_linux.go:301] "Creating device plugin manager"
Oct 8 19:46:38.873285 kubelet[2700]: I1008 19:46:38.870866 2700 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:46:38.873517 kubelet[2700]: I1008 19:46:38.870965 2700 kubelet.go:396] "Attempting to sync node with API server"
Oct 8 19:46:38.873517 kubelet[2700]: I1008 19:46:38.870980 2700 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 19:46:38.873517 kubelet[2700]: I1008 19:46:38.870999 2700 kubelet.go:312] "Adding apiserver pod source"
Oct 8 19:46:38.873517 kubelet[2700]: I1008 19:46:38.871013 2700 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 19:46:38.875507 kubelet[2700]: I1008 19:46:38.874751 2700 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Oct 8 19:46:38.875507 kubelet[2700]: I1008 19:46:38.874986 2700 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 19:46:38.875507 kubelet[2700]: I1008 19:46:38.875401 2700 server.go:1256] "Started kubelet"
Oct 8 19:46:38.876629 kubelet[2700]: I1008 19:46:38.876603 2700 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 19:46:38.878070 kubelet[2700]: I1008 19:46:38.877938 2700 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 19:46:38.878652 kubelet[2700]: I1008 19:46:38.878630 2700 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 19:46:38.879817 kubelet[2700]: I1008 19:46:38.879795 2700 server.go:461] "Adding debug handlers to kubelet server"
Oct 8 19:46:38.883674 kubelet[2700]: I1008 19:46:38.883542 2700 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 19:46:38.887088 kubelet[2700]: I1008 19:46:38.886547 2700 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 8 19:46:38.888719 kubelet[2700]: I1008 19:46:38.888685 2700 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 8 19:46:38.889511 kubelet[2700]: I1008 19:46:38.889432 2700 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 8 19:46:38.889625 sudo[2715]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Oct 8 19:46:38.889910 sudo[2715]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 8 19:46:38.890797 kubelet[2700]: I1008 19:46:38.890749 2700 factory.go:221] Registration of the systemd container factory successfully
Oct 8 19:46:38.891190 kubelet[2700]: I1008 19:46:38.891020 2700 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 19:46:38.895799 kubelet[2700]: I1008 19:46:38.895774 2700 factory.go:221] Registration of the containerd container factory successfully
Oct 8 19:46:38.903209 kubelet[2700]: I1008 19:46:38.903123 2700 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 19:46:38.904552 kubelet[2700]: I1008 19:46:38.904527 2700 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 8 19:46:38.904552 kubelet[2700]: I1008 19:46:38.904554 2700 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 19:46:38.904648 kubelet[2700]: I1008 19:46:38.904571 2700 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 8 19:46:38.904648 kubelet[2700]: E1008 19:46:38.904628 2700 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 19:46:38.943658 kubelet[2700]: I1008 19:46:38.943623 2700 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 8 19:46:38.943658 kubelet[2700]: I1008 19:46:38.943648 2700 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 8 19:46:38.943658 kubelet[2700]: I1008 19:46:38.943665 2700 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:46:38.943807 kubelet[2700]: I1008 19:46:38.943798 2700 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 8 19:46:38.943884 kubelet[2700]: I1008 19:46:38.943817 2700 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 8 19:46:38.943884 kubelet[2700]: I1008 19:46:38.943824 2700 policy_none.go:49] "None policy: Start"
Oct 8 19:46:38.944299 kubelet[2700]: I1008 19:46:38.944285 2700 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 8 19:46:38.944340 kubelet[2700]: I1008 19:46:38.944307 2700 state_mem.go:35] "Initializing new in-memory state store"
Oct 8 19:46:38.944489 kubelet[2700]: I1008 19:46:38.944474 2700 state_mem.go:75] "Updated machine memory state"
Oct 8 19:46:38.945492 kubelet[2700]: I1008 19:46:38.945475 2700 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 8 19:46:38.946469 kubelet[2700]: I1008 19:46:38.945681 2700 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 8 19:46:38.991161 kubelet[2700]: I1008 19:46:38.991058 2700 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Oct 8 19:46:38.997043 kubelet[2700]: I1008 19:46:38.997004 2700 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Oct 8 19:46:38.997151 kubelet[2700]: I1008 19:46:38.997098 2700 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Oct 8 19:46:39.004763 kubelet[2700]: I1008 19:46:39.004730 2700 topology_manager.go:215] "Topology Admit Handler" podUID="300b32855389052e2eba1c6093ae1a0e" podNamespace="kube-system" podName="kube-apiserver-localhost"
Oct 8 19:46:39.004858 kubelet[2700]: I1008 19:46:39.004804 2700 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Oct 8 19:46:39.004858 kubelet[2700]: I1008 19:46:39.004854 2700 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost"
Oct 8 19:46:39.090457 kubelet[2700]: I1008 19:46:39.090363 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/300b32855389052e2eba1c6093ae1a0e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"300b32855389052e2eba1c6093ae1a0e\") " pod="kube-system/kube-apiserver-localhost"
Oct 8 19:46:39.090570 kubelet[2700]: I1008 19:46:39.090558 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/300b32855389052e2eba1c6093ae1a0e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"300b32855389052e2eba1c6093ae1a0e\") " pod="kube-system/kube-apiserver-localhost"
Oct 8 19:46:39.090613 kubelet[2700]: I1008 19:46:39.090594 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:46:39.090645 kubelet[2700]: I1008 19:46:39.090634 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:46:39.090667 kubelet[2700]: I1008 19:46:39.090659 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:46:39.090688 kubelet[2700]: I1008 19:46:39.090681 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/300b32855389052e2eba1c6093ae1a0e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"300b32855389052e2eba1c6093ae1a0e\") " pod="kube-system/kube-apiserver-localhost"
Oct 8 19:46:39.090773 kubelet[2700]: I1008 19:46:39.090710 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:46:39.090822 kubelet[2700]: I1008 19:46:39.090807 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:46:39.090860 kubelet[2700]: I1008 19:46:39.090834 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost"
Oct 8 19:46:39.322745 kubelet[2700]: E1008 19:46:39.322494 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:39.323814 kubelet[2700]: E1008 19:46:39.323748 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:39.323814 kubelet[2700]: E1008 19:46:39.323798 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:39.336463 sudo[2715]: pam_unix(sudo:session): session closed for user root
Oct 8 19:46:39.871955 kubelet[2700]: I1008 19:46:39.871709 2700 apiserver.go:52] "Watching apiserver"
Oct 8 19:46:39.889212 kubelet[2700]: I1008 19:46:39.889140 2700 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Oct 8 19:46:39.922718 kubelet[2700]: E1008 19:46:39.920482 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:39.922718 kubelet[2700]: E1008 19:46:39.920522 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:39.927636 kubelet[2700]: E1008 19:46:39.927019 2700 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Oct 8 19:46:39.927636 kubelet[2700]: E1008 19:46:39.927474 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:39.947634 kubelet[2700]: I1008 19:46:39.947470 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.947429144 podStartE2EDuration="947.429144ms" podCreationTimestamp="2024-10-08 19:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:46:39.947330082 +0000 UTC m=+1.138330823" watchObservedRunningTime="2024-10-08 19:46:39.947429144 +0000 UTC m=+1.138429885"
Oct 8 19:46:39.947634 kubelet[2700]: I1008 19:46:39.947548 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.947530526 podStartE2EDuration="947.530526ms" podCreationTimestamp="2024-10-08 19:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:46:39.940771806 +0000 UTC m=+1.131772547" watchObservedRunningTime="2024-10-08 19:46:39.947530526 +0000 UTC m=+1.138531227"
Oct 8 19:46:39.962041 kubelet[2700]: I1008 19:46:39.962005 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.961970607 podStartE2EDuration="961.970607ms" podCreationTimestamp="2024-10-08 19:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:46:39.95467641 +0000 UTC m=+1.145677151" watchObservedRunningTime="2024-10-08 19:46:39.961970607 +0000 UTC m=+1.152971348"
Oct 8 19:46:40.921833 kubelet[2700]: E1008 19:46:40.921644 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:41.112722 sudo[1766]: pam_unix(sudo:session): session closed for user root
Oct 8 19:46:41.115720 sshd[1759]: pam_unix(sshd:session): session closed for user core
Oct 8 19:46:41.120856 systemd[1]: sshd@6-10.0.0.91:22-10.0.0.1:44436.service: Deactivated successfully.
Oct 8 19:46:41.123395 systemd[1]: session-7.scope: Deactivated successfully.
Oct 8 19:46:41.124161 systemd-logind[1533]: Session 7 logged out. Waiting for processes to exit.
Oct 8 19:46:41.125224 systemd-logind[1533]: Removed session 7.
Oct 8 19:46:41.923588 kubelet[2700]: E1008 19:46:41.923562 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:44.594854 kubelet[2700]: E1008 19:46:44.594755 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:44.928201 kubelet[2700]: E1008 19:46:44.927835 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:45.105754 kubelet[2700]: E1008 19:46:45.105422 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:45.929114 kubelet[2700]: E1008 19:46:45.929037 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:46.312356 kubelet[2700]: E1008 19:46:46.312226 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:46.930080 kubelet[2700]: E1008 19:46:46.930043 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:46.930447 kubelet[2700]: E1008 19:46:46.930257 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:51.569588 kubelet[2700]: I1008 19:46:51.569541 2700 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 8 19:46:51.570328 containerd[1556]: time="2024-10-08T19:46:51.570219370Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 8 19:46:51.571299 kubelet[2700]: I1008 19:46:51.570586 2700 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 8 19:46:52.124240 update_engine[1538]: I1008 19:46:52.124173 1538 update_attempter.cc:509] Updating boot flags...
Oct 8 19:46:52.151499 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2783)
Oct 8 19:46:52.177443 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2784)
Oct 8 19:46:52.207463 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2784)
Oct 8 19:46:52.510021 kubelet[2700]: I1008 19:46:52.509902 2700 topology_manager.go:215] "Topology Admit Handler" podUID="72bfa4cd-6631-4335-ac72-fda9b2fefb05" podNamespace="kube-system" podName="kube-proxy-jzxvt"
Oct 8 19:46:52.518462 kubelet[2700]: I1008 19:46:52.518402 2700 topology_manager.go:215] "Topology Admit Handler" podUID="8d457cdd-0d35-4777-8396-2ffcad4ca706" podNamespace="kube-system" podName="cilium-h66dk"
Oct 8 19:46:52.585010 kubelet[2700]: I1008 19:46:52.584959 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-bpf-maps\") pod \"cilium-h66dk\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") " pod="kube-system/cilium-h66dk"
Oct 8 19:46:52.585010 kubelet[2700]: I1008 19:46:52.585008 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-cni-path\") pod \"cilium-h66dk\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") " pod="kube-system/cilium-h66dk"
Oct 8 19:46:52.585462 kubelet[2700]: I1008 19:46:52.585030 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-cilium-run\") pod \"cilium-h66dk\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") " pod="kube-system/cilium-h66dk"
Oct 8 19:46:52.585462 kubelet[2700]: I1008 19:46:52.585051 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8d457cdd-0d35-4777-8396-2ffcad4ca706-clustermesh-secrets\") pod \"cilium-h66dk\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") " pod="kube-system/cilium-h66dk"
Oct 8 19:46:52.585462 kubelet[2700]: I1008 19:46:52.585073 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwjd8\" (UniqueName: \"kubernetes.io/projected/8d457cdd-0d35-4777-8396-2ffcad4ca706-kube-api-access-lwjd8\") pod \"cilium-h66dk\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") " pod="kube-system/cilium-h66dk"
Oct 8 19:46:52.585462 kubelet[2700]: I1008 19:46:52.585097 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72bfa4cd-6631-4335-ac72-fda9b2fefb05-xtables-lock\") pod \"kube-proxy-jzxvt\" (UID: \"72bfa4cd-6631-4335-ac72-fda9b2fefb05\") " pod="kube-system/kube-proxy-jzxvt"
Oct 8 19:46:52.585462 kubelet[2700]: I1008 19:46:52.585116 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-xtables-lock\") pod \"cilium-h66dk\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") " pod="kube-system/cilium-h66dk"
Oct 8 19:46:52.585594 kubelet[2700]: I1008 19:46:52.585136 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-host-proc-sys-kernel\") pod \"cilium-h66dk\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") " pod="kube-system/cilium-h66dk"
Oct 8 19:46:52.585594 kubelet[2700]: I1008 19:46:52.585155 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/72bfa4cd-6631-4335-ac72-fda9b2fefb05-kube-proxy\") pod \"kube-proxy-jzxvt\" (UID: \"72bfa4cd-6631-4335-ac72-fda9b2fefb05\") " pod="kube-system/kube-proxy-jzxvt"
Oct 8 19:46:52.585594 kubelet[2700]: I1008 19:46:52.585173 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-lib-modules\") pod \"cilium-h66dk\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") " pod="kube-system/cilium-h66dk"
Oct 8 19:46:52.585594 kubelet[2700]: I1008 19:46:52.585215 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-etc-cni-netd\") pod \"cilium-h66dk\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") " pod="kube-system/cilium-h66dk"
Oct 8 19:46:52.585594 kubelet[2700]: I1008 19:46:52.585234 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d457cdd-0d35-4777-8396-2ffcad4ca706-cilium-config-path\") pod \"cilium-h66dk\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") " pod="kube-system/cilium-h66dk"
Oct 8 19:46:52.585594 kubelet[2700]: I1008 19:46:52.585252 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8d457cdd-0d35-4777-8396-2ffcad4ca706-hubble-tls\") pod \"cilium-h66dk\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") " pod="kube-system/cilium-h66dk"
Oct 8 19:46:52.585711 kubelet[2700]: I1008 19:46:52.585275 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72bfa4cd-6631-4335-ac72-fda9b2fefb05-lib-modules\") pod \"kube-proxy-jzxvt\" (UID: \"72bfa4cd-6631-4335-ac72-fda9b2fefb05\") " pod="kube-system/kube-proxy-jzxvt"
Oct 8 19:46:52.585711 kubelet[2700]: I1008 19:46:52.585295 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-cilium-cgroup\") pod \"cilium-h66dk\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") " pod="kube-system/cilium-h66dk"
Oct 8 19:46:52.585711 kubelet[2700]: I1008 19:46:52.585358 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-host-proc-sys-net\") pod \"cilium-h66dk\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") " pod="kube-system/cilium-h66dk"
Oct 8 19:46:52.585711 kubelet[2700]: I1008 19:46:52.585397 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-hostproc\") pod \"cilium-h66dk\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") " pod="kube-system/cilium-h66dk"
Oct 8 19:46:52.585711 kubelet[2700]: I1008 19:46:52.585447 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n499f\" (UniqueName: \"kubernetes.io/projected/72bfa4cd-6631-4335-ac72-fda9b2fefb05-kube-api-access-n499f\") pod \"kube-proxy-jzxvt\" (UID: \"72bfa4cd-6631-4335-ac72-fda9b2fefb05\") " pod="kube-system/kube-proxy-jzxvt"
Oct 8 19:46:52.627584 kubelet[2700]: I1008 19:46:52.624612 2700 topology_manager.go:215] "Topology Admit Handler" podUID="49a6c1b9-51d3-4efa-9698-3b6b75449d01" podNamespace="kube-system" podName="cilium-operator-5cc964979-pj8vg"
Oct 8 19:46:52.686483 kubelet[2700]: I1008 19:46:52.686392 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49a6c1b9-51d3-4efa-9698-3b6b75449d01-cilium-config-path\") pod \"cilium-operator-5cc964979-pj8vg\" (UID: \"49a6c1b9-51d3-4efa-9698-3b6b75449d01\") " pod="kube-system/cilium-operator-5cc964979-pj8vg"
Oct 8 19:46:52.686896 kubelet[2700]: I1008 19:46:52.686866 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcdxn\" (UniqueName: \"kubernetes.io/projected/49a6c1b9-51d3-4efa-9698-3b6b75449d01-kube-api-access-qcdxn\") pod \"cilium-operator-5cc964979-pj8vg\" (UID: \"49a6c1b9-51d3-4efa-9698-3b6b75449d01\") " pod="kube-system/cilium-operator-5cc964979-pj8vg"
Oct 8 19:46:52.820774 kubelet[2700]: E1008 19:46:52.820648 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:52.821744 containerd[1556]: time="2024-10-08T19:46:52.821634803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jzxvt,Uid:72bfa4cd-6631-4335-ac72-fda9b2fefb05,Namespace:kube-system,Attempt:0,}"
Oct 8 19:46:52.826173 kubelet[2700]: E1008 19:46:52.826079 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:52.826583 containerd[1556]: time="2024-10-08T19:46:52.826510532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h66dk,Uid:8d457cdd-0d35-4777-8396-2ffcad4ca706,Namespace:kube-system,Attempt:0,}"
Oct 8 19:46:52.927573 kubelet[2700]: E1008 19:46:52.927532 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:52.928336 containerd[1556]: time="2024-10-08T19:46:52.927970334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-pj8vg,Uid:49a6c1b9-51d3-4efa-9698-3b6b75449d01,Namespace:kube-system,Attempt:0,}"
Oct 8 19:46:52.998028 containerd[1556]: time="2024-10-08T19:46:52.997839551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:46:52.998028 containerd[1556]: time="2024-10-08T19:46:52.997916479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:46:52.998028 containerd[1556]: time="2024-10-08T19:46:52.997941722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:46:52.998028 containerd[1556]: time="2024-10-08T19:46:52.997959284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:46:53.010658 containerd[1556]: time="2024-10-08T19:46:53.010472873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:46:53.010658 containerd[1556]: time="2024-10-08T19:46:53.010531719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:46:53.010658 containerd[1556]: time="2024-10-08T19:46:53.010545841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:46:53.010658 containerd[1556]: time="2024-10-08T19:46:53.010555642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:46:53.021811 containerd[1556]: time="2024-10-08T19:46:53.017895719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:46:53.021811 containerd[1556]: time="2024-10-08T19:46:53.017957325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:46:53.021811 containerd[1556]: time="2024-10-08T19:46:53.017976687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:46:53.022527 containerd[1556]: time="2024-10-08T19:46:53.022408185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:46:53.056487 containerd[1556]: time="2024-10-08T19:46:53.056378850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h66dk,Uid:8d457cdd-0d35-4777-8396-2ffcad4ca706,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0ea7c80682a5942733b4de3bd57fe84e7c885035072dc4d08cb6098b709486f\""
Oct 8 19:46:53.057284 containerd[1556]: time="2024-10-08T19:46:53.057236618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jzxvt,Uid:72bfa4cd-6631-4335-ac72-fda9b2fefb05,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c2f24dc375c4c2e2c666de9b22c662616210dd1dc467656aa90e9b43d48d815\""
Oct 8 19:46:53.059598 kubelet[2700]: E1008 19:46:53.058977 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:53.062424 containerd[1556]: time="2024-10-08T19:46:53.061310639Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Oct 8 19:46:53.065152 kubelet[2700]: E1008 19:46:53.064541 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:53.068830 containerd[1556]: time="2024-10-08T19:46:53.068795091Z" level=info msg="CreateContainer within sandbox \"4c2f24dc375c4c2e2c666de9b22c662616210dd1dc467656aa90e9b43d48d815\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 8 19:46:53.080267 containerd[1556]: time="2024-10-08T19:46:53.080214429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-pj8vg,Uid:49a6c1b9-51d3-4efa-9698-3b6b75449d01,Namespace:kube-system,Attempt:0,} returns sandbox id \"76a44ed4418f608b821bb5e24cf2c958e30159fe94bd15a0ffeda6f5faa3cccf\""
Oct 8 19:46:53.081967 kubelet[2700]: E1008 19:46:53.081930 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:53.136630 containerd[1556]: time="2024-10-08T19:46:53.136553482Z" level=info msg="CreateContainer within sandbox \"4c2f24dc375c4c2e2c666de9b22c662616210dd1dc467656aa90e9b43d48d815\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dc0bf8367e247b35d29f971d0fcde5c02a5cb8f5d7d84fdc46f2baeb0a059017\""
Oct 8 19:46:53.138604 containerd[1556]: time="2024-10-08T19:46:53.137200189Z" level=info msg="StartContainer for \"dc0bf8367e247b35d29f971d0fcde5c02a5cb8f5d7d84fdc46f2baeb0a059017\""
Oct 8 19:46:53.188920 containerd[1556]: time="2024-10-08T19:46:53.188863920Z" level=info msg="StartContainer for \"dc0bf8367e247b35d29f971d0fcde5c02a5cb8f5d7d84fdc46f2baeb0a059017\" returns successfully"
Oct 8 19:46:53.942435 kubelet[2700]: E1008 19:46:53.942387 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:53.957628 kubelet[2700]: I1008 19:46:53.957555 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jzxvt" podStartSLOduration=1.9574779869999999 podStartE2EDuration="1.957477987s" podCreationTimestamp="2024-10-08 19:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:46:53.953710038 +0000 UTC m=+15.144710779" watchObservedRunningTime="2024-10-08 19:46:53.957477987 +0000 UTC m=+15.148478768"
Oct 8 19:46:56.302007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2450060573.mount: Deactivated successfully.
Oct 8 19:46:57.587453 containerd[1556]: time="2024-10-08T19:46:57.587366900Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:57.588516 containerd[1556]: time="2024-10-08T19:46:57.587892625Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651490"
Oct 8 19:46:57.589280 containerd[1556]: time="2024-10-08T19:46:57.589010080Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:57.590645 containerd[1556]: time="2024-10-08T19:46:57.590610176Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.529253813s"
Oct 8 19:46:57.590720 containerd[1556]: time="2024-10-08T19:46:57.590652060Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Oct 8 19:46:57.592403 containerd[1556]: time="2024-10-08T19:46:57.592355965Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Oct 8 19:46:57.597937 containerd[1556]: time="2024-10-08T19:46:57.597896877Z" level=info msg="CreateContainer within sandbox \"e0ea7c80682a5942733b4de3bd57fe84e7c885035072dc4d08cb6098b709486f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Oct 8 19:46:57.786058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1093237412.mount: Deactivated successfully.
Oct 8 19:46:57.786685 containerd[1556]: time="2024-10-08T19:46:57.786637086Z" level=info msg="CreateContainer within sandbox \"e0ea7c80682a5942733b4de3bd57fe84e7c885035072dc4d08cb6098b709486f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0a0d9e19de79e49414ce61881a0c9f83eb2a24afa12f3bfa47e421024bc86944\""
Oct 8 19:46:57.787371 containerd[1556]: time="2024-10-08T19:46:57.787341826Z" level=info msg="StartContainer for \"0a0d9e19de79e49414ce61881a0c9f83eb2a24afa12f3bfa47e421024bc86944\""
Oct 8 19:46:57.833033 containerd[1556]: time="2024-10-08T19:46:57.832983077Z" level=info msg="StartContainer for \"0a0d9e19de79e49414ce61881a0c9f83eb2a24afa12f3bfa47e421024bc86944\" returns successfully"
Oct 8 19:46:57.896382 containerd[1556]: time="2024-10-08T19:46:57.890023259Z" level=info msg="shim disconnected" id=0a0d9e19de79e49414ce61881a0c9f83eb2a24afa12f3bfa47e421024bc86944 namespace=k8s.io
Oct 8 19:46:57.896382 containerd[1556]: time="2024-10-08T19:46:57.896288513Z" level=warning msg="cleaning up after shim disconnected" id=0a0d9e19de79e49414ce61881a0c9f83eb2a24afa12f3bfa47e421024bc86944 namespace=k8s.io
Oct 8 19:46:57.896382 containerd[1556]: time="2024-10-08T19:46:57.896305074Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:46:57.976192 kubelet[2700]: E1008 19:46:57.975973 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:57.978702 containerd[1556]: time="2024-10-08T19:46:57.978661335Z" level=info msg="CreateContainer within sandbox \"e0ea7c80682a5942733b4de3bd57fe84e7c885035072dc4d08cb6098b709486f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Oct 8 19:46:58.013589 containerd[1556]: time="2024-10-08T19:46:58.013539502Z" level=info msg="CreateContainer within sandbox \"e0ea7c80682a5942733b4de3bd57fe84e7c885035072dc4d08cb6098b709486f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d92b2d80dc263eb6fb9336216e32333f44d36de023726aa26d2807cbbe1a844b\""
Oct 8 19:46:58.014492 containerd[1556]: time="2024-10-08T19:46:58.014211476Z" level=info msg="StartContainer for \"d92b2d80dc263eb6fb9336216e32333f44d36de023726aa26d2807cbbe1a844b\""
Oct 8 19:46:58.056133 containerd[1556]: time="2024-10-08T19:46:58.056086566Z" level=info msg="StartContainer for \"d92b2d80dc263eb6fb9336216e32333f44d36de023726aa26d2807cbbe1a844b\" returns successfully"
Oct 8 19:46:58.074046 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 19:46:58.074870 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:46:58.074941 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:46:58.083770 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:46:58.100339 containerd[1556]: time="2024-10-08T19:46:58.100154555Z" level=info msg="shim disconnected" id=d92b2d80dc263eb6fb9336216e32333f44d36de023726aa26d2807cbbe1a844b namespace=k8s.io
Oct 8 19:46:58.100339 containerd[1556]: time="2024-10-08T19:46:58.100209200Z" level=warning msg="cleaning up after shim disconnected" id=d92b2d80dc263eb6fb9336216e32333f44d36de023726aa26d2807cbbe1a844b namespace=k8s.io
Oct 8 19:46:58.100339 containerd[1556]: time="2024-10-08T19:46:58.100218080Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:46:58.101108 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:46:58.744890 containerd[1556]: time="2024-10-08T19:46:58.744006308Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:58.744890 containerd[1556]: time="2024-10-08T19:46:58.744847976Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138346"
Oct 8 19:46:58.745582 containerd[1556]: time="2024-10-08T19:46:58.745548633Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:46:58.747736 containerd[1556]: time="2024-10-08T19:46:58.747689248Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.155289679s"
Oct 8 19:46:58.747908 containerd[1556]: time="2024-10-08T19:46:58.747888384Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Oct 8 19:46:58.749886 containerd[1556]: time="2024-10-08T19:46:58.749845303Z" level=info msg="CreateContainer within sandbox \"76a44ed4418f608b821bb5e24cf2c958e30159fe94bd15a0ffeda6f5faa3cccf\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Oct 8 19:46:58.759396 containerd[1556]: time="2024-10-08T19:46:58.759347077Z" level=info msg="CreateContainer within sandbox \"76a44ed4418f608b821bb5e24cf2c958e30159fe94bd15a0ffeda6f5faa3cccf\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7011c013591c57c2ed3d7f7820b4673b51e0d509dae7ee8cf8954b91a3c3d03a\""
Oct 8 19:46:58.761204 containerd[1556]: time="2024-10-08T19:46:58.761164945Z" level=info msg="StartContainer for \"7011c013591c57c2ed3d7f7820b4673b51e0d509dae7ee8cf8954b91a3c3d03a\""
Oct 8 19:46:58.785208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a0d9e19de79e49414ce61881a0c9f83eb2a24afa12f3bfa47e421024bc86944-rootfs.mount: Deactivated successfully.
Oct 8 19:46:58.809946 containerd[1556]: time="2024-10-08T19:46:58.809897114Z" level=info msg="StartContainer for \"7011c013591c57c2ed3d7f7820b4673b51e0d509dae7ee8cf8954b91a3c3d03a\" returns successfully"
Oct 8 19:46:58.983448 kubelet[2700]: E1008 19:46:58.978600 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:58.988789 kubelet[2700]: E1008 19:46:58.988745 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:59.004625 containerd[1556]: time="2024-10-08T19:46:59.001626727Z" level=info msg="CreateContainer within sandbox \"e0ea7c80682a5942733b4de3bd57fe84e7c885035072dc4d08cb6098b709486f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Oct 8 19:46:59.024180 kubelet[2700]: I1008 19:46:59.024115 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-pj8vg" podStartSLOduration=1.3616830420000001 podStartE2EDuration="7.024062203s" podCreationTimestamp="2024-10-08 19:46:52 +0000 UTC" firstStartedPulling="2024-10-08 19:46:53.08584685 +0000 UTC m=+14.276847551" lastFinishedPulling="2024-10-08 19:46:58.748225971 +0000 UTC m=+19.939226712" observedRunningTime="2024-10-08 19:46:58.994918141 +0000 UTC m=+20.185918882" watchObservedRunningTime="2024-10-08 19:46:59.024062203 +0000 UTC m=+20.215062944"
Oct 8 19:46:59.093172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1430320678.mount: Deactivated successfully.
Oct 8 19:46:59.094329 containerd[1556]: time="2024-10-08T19:46:59.094204465Z" level=info msg="CreateContainer within sandbox \"e0ea7c80682a5942733b4de3bd57fe84e7c885035072dc4d08cb6098b709486f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"118328d8814831e716a7b17fb0c4bf435d2fd780783c096a1a71f2f260ebdbaa\""
Oct 8 19:46:59.094791 containerd[1556]: time="2024-10-08T19:46:59.094765508Z" level=info msg="StartContainer for \"118328d8814831e716a7b17fb0c4bf435d2fd780783c096a1a71f2f260ebdbaa\""
Oct 8 19:46:59.146253 containerd[1556]: time="2024-10-08T19:46:59.146209114Z" level=info msg="StartContainer for \"118328d8814831e716a7b17fb0c4bf435d2fd780783c096a1a71f2f260ebdbaa\" returns successfully"
Oct 8 19:46:59.180750 containerd[1556]: time="2024-10-08T19:46:59.180674878Z" level=info msg="shim disconnected" id=118328d8814831e716a7b17fb0c4bf435d2fd780783c096a1a71f2f260ebdbaa namespace=k8s.io
Oct 8 19:46:59.180750 containerd[1556]: time="2024-10-08T19:46:59.180735323Z" level=warning msg="cleaning up after shim disconnected" id=118328d8814831e716a7b17fb0c4bf435d2fd780783c096a1a71f2f260ebdbaa namespace=k8s.io
Oct 8 19:46:59.180750 containerd[1556]: time="2024-10-08T19:46:59.180744923Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:46:59.197573 containerd[1556]: time="2024-10-08T19:46:59.197504508Z" level=warning msg="cleanup warnings time=\"2024-10-08T19:46:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Oct 8 19:46:59.783777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-118328d8814831e716a7b17fb0c4bf435d2fd780783c096a1a71f2f260ebdbaa-rootfs.mount: Deactivated successfully.
Oct 8 19:46:59.992644 kubelet[2700]: E1008 19:46:59.992601 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:59.993603 kubelet[2700]: E1008 19:46:59.993404 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:46:59.998483 containerd[1556]: time="2024-10-08T19:46:59.998442435Z" level=info msg="CreateContainer within sandbox \"e0ea7c80682a5942733b4de3bd57fe84e7c885035072dc4d08cb6098b709486f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Oct 8 19:47:00.010441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4121873178.mount: Deactivated successfully.
Oct 8 19:47:00.011248 containerd[1556]: time="2024-10-08T19:47:00.011115349Z" level=info msg="CreateContainer within sandbox \"e0ea7c80682a5942733b4de3bd57fe84e7c885035072dc4d08cb6098b709486f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ed9b59ccf0a8d5f52be6bddc26e3d6716ddad9b74640cad6bbb0ada177e705ef\""
Oct 8 19:47:00.012967 containerd[1556]: time="2024-10-08T19:47:00.012911003Z" level=info msg="StartContainer for \"ed9b59ccf0a8d5f52be6bddc26e3d6716ddad9b74640cad6bbb0ada177e705ef\""
Oct 8 19:47:00.064713 containerd[1556]: time="2024-10-08T19:47:00.064593735Z" level=info msg="StartContainer for \"ed9b59ccf0a8d5f52be6bddc26e3d6716ddad9b74640cad6bbb0ada177e705ef\" returns successfully"
Oct 8 19:47:00.081056 containerd[1556]: time="2024-10-08T19:47:00.080991277Z" level=info msg="shim disconnected" id=ed9b59ccf0a8d5f52be6bddc26e3d6716ddad9b74640cad6bbb0ada177e705ef namespace=k8s.io
Oct 8 19:47:00.081056 containerd[1556]: time="2024-10-08T19:47:00.081050441Z" level=warning msg="cleaning up after shim disconnected" id=ed9b59ccf0a8d5f52be6bddc26e3d6716ddad9b74640cad6bbb0ada177e705ef namespace=k8s.io
Oct 8 19:47:00.081056 containerd[1556]: time="2024-10-08T19:47:00.081061362Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:47:00.795333 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed9b59ccf0a8d5f52be6bddc26e3d6716ddad9b74640cad6bbb0ada177e705ef-rootfs.mount: Deactivated successfully.
Oct 8 19:47:00.996603 kubelet[2700]: E1008 19:47:00.996562 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:47:01.000457 containerd[1556]: time="2024-10-08T19:47:01.000394431Z" level=info msg="CreateContainer within sandbox \"e0ea7c80682a5942733b4de3bd57fe84e7c885035072dc4d08cb6098b709486f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Oct 8 19:47:01.014630 containerd[1556]: time="2024-10-08T19:47:01.014585765Z" level=info msg="CreateContainer within sandbox \"e0ea7c80682a5942733b4de3bd57fe84e7c885035072dc4d08cb6098b709486f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"81252cd1c9970add90e4a22ea5ba1b9a6888f2fe44397b364740a4bc522557ad\""
Oct 8 19:47:01.014783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2852618538.mount: Deactivated successfully.
Oct 8 19:47:01.016573 containerd[1556]: time="2024-10-08T19:47:01.016539184Z" level=info msg="StartContainer for \"81252cd1c9970add90e4a22ea5ba1b9a6888f2fe44397b364740a4bc522557ad\""
Oct 8 19:47:01.079802 containerd[1556]: time="2024-10-08T19:47:01.079759977Z" level=info msg="StartContainer for \"81252cd1c9970add90e4a22ea5ba1b9a6888f2fe44397b364740a4bc522557ad\" returns successfully"
Oct 8 19:47:01.188995 kubelet[2700]: I1008 19:47:01.188966 2700 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Oct 8 19:47:01.209632 kubelet[2700]: I1008 19:47:01.209595 2700 topology_manager.go:215] "Topology Admit Handler" podUID="1e3f852f-28cb-45dc-89a0-1600dab1f924" podNamespace="kube-system" podName="coredns-76f75df574-br7qd"
Oct 8 19:47:01.212112 kubelet[2700]: I1008 19:47:01.212078 2700 topology_manager.go:215] "Topology Admit Handler" podUID="cbea6f4c-8863-4c91-b39d-c8dd0a115918" podNamespace="kube-system" podName="coredns-76f75df574-wqdzf"
Oct 8 19:47:01.251917 kubelet[2700]: I1008 19:47:01.251836 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk586\" (UniqueName: \"kubernetes.io/projected/cbea6f4c-8863-4c91-b39d-c8dd0a115918-kube-api-access-mk586\") pod \"coredns-76f75df574-wqdzf\" (UID: \"cbea6f4c-8863-4c91-b39d-c8dd0a115918\") " pod="kube-system/coredns-76f75df574-wqdzf"
Oct 8 19:47:01.251917 kubelet[2700]: I1008 19:47:01.251894 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6pll\" (UniqueName: \"kubernetes.io/projected/1e3f852f-28cb-45dc-89a0-1600dab1f924-kube-api-access-n6pll\") pod \"coredns-76f75df574-br7qd\" (UID: \"1e3f852f-28cb-45dc-89a0-1600dab1f924\") " pod="kube-system/coredns-76f75df574-br7qd"
Oct 8 19:47:01.252090 kubelet[2700]: I1008 19:47:01.251989 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbea6f4c-8863-4c91-b39d-c8dd0a115918-config-volume\") pod \"coredns-76f75df574-wqdzf\" (UID: \"cbea6f4c-8863-4c91-b39d-c8dd0a115918\") " pod="kube-system/coredns-76f75df574-wqdzf"
Oct 8 19:47:01.252090 kubelet[2700]: I1008 19:47:01.252055 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e3f852f-28cb-45dc-89a0-1600dab1f924-config-volume\") pod \"coredns-76f75df574-br7qd\" (UID: \"1e3f852f-28cb-45dc-89a0-1600dab1f924\") " pod="kube-system/coredns-76f75df574-br7qd"
Oct 8 19:47:01.515990 kubelet[2700]: E1008 19:47:01.515888 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:47:01.517691 containerd[1556]: time="2024-10-08T19:47:01.517615593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-br7qd,Uid:1e3f852f-28cb-45dc-89a0-1600dab1f924,Namespace:kube-system,Attempt:0,}"
Oct 8 19:47:01.522157 kubelet[2700]: E1008 19:47:01.522120 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:47:01.522597 containerd[1556]: time="2024-10-08T19:47:01.522562026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wqdzf,Uid:cbea6f4c-8863-4c91-b39d-c8dd0a115918,Namespace:kube-system,Attempt:0,}"
Oct 8 19:47:02.006490 kubelet[2700]: E1008 19:47:02.006458 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:47:03.007902 kubelet[2700]: E1008 19:47:03.007864 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:47:03.274284 systemd-networkd[1237]: cilium_host: Link UP
Oct 8 19:47:03.274441 systemd-networkd[1237]: cilium_net: Link UP
Oct 8 19:47:03.274605 systemd-networkd[1237]: cilium_net: Gained carrier
Oct 8 19:47:03.274722 systemd-networkd[1237]: cilium_host: Gained carrier
Oct 8 19:47:03.274807 systemd-networkd[1237]: cilium_net: Gained IPv6LL
Oct 8 19:47:03.275959 systemd-networkd[1237]: cilium_host: Gained IPv6LL
Oct 8 19:47:03.356710 systemd-networkd[1237]: cilium_vxlan: Link UP
Oct 8 19:47:03.356716 systemd-networkd[1237]: cilium_vxlan: Gained carrier
Oct 8 19:47:03.656432 kernel: NET: Registered PF_ALG protocol family
Oct 8 19:47:04.009880 kubelet[2700]: E1008 19:47:04.009599 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:47:04.242172 systemd-networkd[1237]: lxc_health: Link UP
Oct 8 19:47:04.247522 systemd-networkd[1237]: lxc_health: Gained carrier
Oct 8 19:47:04.648665 systemd-networkd[1237]: lxcb0ceaabea1e5: Link UP
Oct 8 19:47:04.657448 kernel: eth0: renamed from tmp06d0d
Oct 8 19:47:04.669628 systemd-networkd[1237]: lxc2562517c4a0e: Link UP
Oct 8 19:47:04.671949 systemd-networkd[1237]: lxcb0ceaabea1e5: Gained carrier
Oct 8 19:47:04.674594 kernel: eth0: renamed from tmp2b734
Oct 8 19:47:04.679043 systemd-networkd[1237]: lxc2562517c4a0e: Gained carrier
Oct 8 19:47:04.848010 kubelet[2700]: I1008 19:47:04.847514 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-h66dk" podStartSLOduration=8.316087039 podStartE2EDuration="12.847460603s" podCreationTimestamp="2024-10-08 19:46:52 +0000 UTC" firstStartedPulling="2024-10-08 19:46:53.060760742 +0000 UTC m=+14.251761483" lastFinishedPulling="2024-10-08 19:46:57.592134306 +0000 UTC m=+18.783135047" observedRunningTime="2024-10-08 19:47:02.021672352 +0000 UTC m=+23.212673093" watchObservedRunningTime="2024-10-08 19:47:04.847460603 +0000 UTC m=+26.038461344"
Oct 8 19:47:05.011626 kubelet[2700]: E1008 19:47:05.011517 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:47:05.349604 systemd-networkd[1237]: lxc_health: Gained IPv6LL
Oct 8 19:47:05.349884 systemd-networkd[1237]: cilium_vxlan: Gained IPv6LL
Oct 8 19:47:06.245594 systemd-networkd[1237]: lxc2562517c4a0e: Gained IPv6LL
Oct 8 19:47:06.630517 systemd-networkd[1237]: lxcb0ceaabea1e5: Gained IPv6LL
Oct 8 19:47:08.193874 containerd[1556]: time="2024-10-08T19:47:08.193740397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:47:08.193874 containerd[1556]: time="2024-10-08T19:47:08.193840282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:47:08.194275 containerd[1556]: time="2024-10-08T19:47:08.193875604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:47:08.194275 containerd[1556]: time="2024-10-08T19:47:08.193889605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:47:08.194275 containerd[1556]: time="2024-10-08T19:47:08.193939608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:47:08.194275 containerd[1556]: time="2024-10-08T19:47:08.194037413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:47:08.194275 containerd[1556]: time="2024-10-08T19:47:08.194059414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:47:08.194275 containerd[1556]: time="2024-10-08T19:47:08.194075615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:47:08.218278 systemd-resolved[1440]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 8 19:47:08.220172 systemd-resolved[1440]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 8 19:47:08.235244 containerd[1556]: time="2024-10-08T19:47:08.234726622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wqdzf,Uid:cbea6f4c-8863-4c91-b39d-c8dd0a115918,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b7340abb1b0fddb2f313d5501915d911e34ecb4f8d4e506a3867411900bc0f8\""
Oct 8 19:47:08.235482 kubelet[2700]: E1008 19:47:08.235461 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:47:08.238371 containerd[1556]: time="2024-10-08T19:47:08.237972998Z" level=info msg="CreateContainer within sandbox \"2b7340abb1b0fddb2f313d5501915d911e34ecb4f8d4e506a3867411900bc0f8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Oct 8 19:47:08.241718 containerd[1556]: time="2024-10-08T19:47:08.241664038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-br7qd,Uid:1e3f852f-28cb-45dc-89a0-1600dab1f924,Namespace:kube-system,Attempt:0,} returns sandbox id \"06d0d0cd1e68e9fd1e0d7699723def076bda49f44acb4317fe397269e9855013\""
Oct 8 19:47:08.242994 kubelet[2700]: E1008 19:47:08.242962 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:47:08.244983 containerd[1556]: time="2024-10-08T19:47:08.244812569Z" level=info msg="CreateContainer within sandbox \"06d0d0cd1e68e9fd1e0d7699723def076bda49f44acb4317fe397269e9855013\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Oct 8 19:47:08.255811 containerd[1556]: time="2024-10-08T19:47:08.255765604Z" level=info msg="CreateContainer within sandbox \"2b7340abb1b0fddb2f313d5501915d911e34ecb4f8d4e506a3867411900bc0f8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bfb4688c93f460b578b8e24e6955096ce2d4c864fec5e1519c87429a165ee64b\""
Oct 8 19:47:08.256346 containerd[1556]: time="2024-10-08T19:47:08.256304113Z" level=info msg="StartContainer for \"bfb4688c93f460b578b8e24e6955096ce2d4c864fec5e1519c87429a165ee64b\""
Oct 8 19:47:08.260233 containerd[1556]: time="2024-10-08T19:47:08.260188964Z" level=info msg="CreateContainer within sandbox \"06d0d0cd1e68e9fd1e0d7699723def076bda49f44acb4317fe397269e9855013\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f73752150347f9d6487083449c910232db84efa993fd3e36e9e5914d4bc7789d\""
Oct 8 19:47:08.260876 containerd[1556]: time="2024-10-08T19:47:08.260831119Z" level=info msg="StartContainer for \"f73752150347f9d6487083449c910232db84efa993fd3e36e9e5914d4bc7789d\""
Oct 8 19:47:08.309571 containerd[1556]: time="2024-10-08T19:47:08.308548349Z" level=info msg="StartContainer for \"f73752150347f9d6487083449c910232db84efa993fd3e36e9e5914d4bc7789d\" returns successfully"
Oct 8 19:47:08.322027 containerd[1556]: time="2024-10-08T19:47:08.318965674Z" level=info msg="StartContainer for \"bfb4688c93f460b578b8e24e6955096ce2d4c864fec5e1519c87429a165ee64b\" returns successfully"
Oct 8 19:47:08.723668 systemd[1]: Started sshd@7-10.0.0.91:22-10.0.0.1:55620.service - OpenSSH per-connection server daemon (10.0.0.1:55620).
Oct 8 19:47:08.753335 sshd[4096]: Accepted publickey for core from 10.0.0.1 port 55620 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:47:08.754778 sshd[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:47:08.758365 systemd-logind[1533]: New session 8 of user core.
Oct 8 19:47:08.775746 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 8 19:47:08.902353 sshd[4096]: pam_unix(sshd:session): session closed for user core
Oct 8 19:47:08.906344 systemd[1]: sshd@7-10.0.0.91:22-10.0.0.1:55620.service: Deactivated successfully.
Oct 8 19:47:08.908464 systemd-logind[1533]: Session 8 logged out. Waiting for processes to exit.
Oct 8 19:47:08.908553 systemd[1]: session-8.scope: Deactivated successfully.
Oct 8 19:47:08.909737 systemd-logind[1533]: Removed session 8.
Oct 8 19:47:09.022024 kubelet[2700]: E1008 19:47:09.021836 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:47:09.026356 kubelet[2700]: E1008 19:47:09.026329 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:47:09.030789 kubelet[2700]: I1008 19:47:09.030688 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-wqdzf" podStartSLOduration=17.030592247 podStartE2EDuration="17.030592247s" podCreationTimestamp="2024-10-08 19:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:47:09.030031337 +0000 UTC m=+30.221032118" watchObservedRunningTime="2024-10-08 19:47:09.030592247 +0000 UTC m=+30.221592988"
Oct 8 19:47:09.041240 kubelet[2700]: I1008 19:47:09.041179 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-br7qd" podStartSLOduration=17.04114428 podStartE2EDuration="17.04114428s" podCreationTimestamp="2024-10-08 19:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:47:09.039948057 +0000 UTC m=+30.230948798" watchObservedRunningTime="2024-10-08 19:47:09.04114428 +0000 UTC m=+30.232145021"
Oct 8 19:47:10.028473 kubelet[2700]: E1008 19:47:10.028436 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:47:10.028473 kubelet[2700]: E1008 19:47:10.028458 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:47:11.031080 kubelet[2700]: E1008 19:47:11.030696 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:47:11.031080 kubelet[2700]: E1008 19:47:11.030722 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:47:13.918673 systemd[1]: Started sshd@8-10.0.0.91:22-10.0.0.1:39772.service - OpenSSH per-connection server daemon (10.0.0.1:39772).
Oct 8 19:47:13.956043 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 39772 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:47:13.957446 sshd[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:47:13.964449 systemd-logind[1533]: New session 9 of user core.
Oct 8 19:47:13.978729 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 8 19:47:14.019166 kubelet[2700]: I1008 19:47:14.019103 2700 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 8 19:47:14.020270 kubelet[2700]: E1008 19:47:14.020184 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:47:14.048349 kubelet[2700]: E1008 19:47:14.047906 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:47:14.119899 sshd[4122]: pam_unix(sshd:session): session closed for user core
Oct 8 19:47:14.123456 systemd[1]: sshd@8-10.0.0.91:22-10.0.0.1:39772.service: Deactivated successfully.
Oct 8 19:47:14.125522 systemd-logind[1533]: Session 9 logged out. Waiting for processes to exit.
Oct 8 19:47:14.126080 systemd[1]: session-9.scope: Deactivated successfully.
Oct 8 19:47:14.126948 systemd-logind[1533]: Removed session 9.
Oct 8 19:47:19.130667 systemd[1]: Started sshd@9-10.0.0.91:22-10.0.0.1:39782.service - OpenSSH per-connection server daemon (10.0.0.1:39782).
Oct 8 19:47:19.164351 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 39782 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:47:19.165725 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:47:19.170609 systemd-logind[1533]: New session 10 of user core.
Oct 8 19:47:19.179698 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 8 19:47:19.311900 sshd[4139]: pam_unix(sshd:session): session closed for user core
Oct 8 19:47:19.323668 systemd[1]: Started sshd@10-10.0.0.91:22-10.0.0.1:39796.service - OpenSSH per-connection server daemon (10.0.0.1:39796).
Oct 8 19:47:19.324187 systemd[1]: sshd@9-10.0.0.91:22-10.0.0.1:39782.service: Deactivated successfully.
Oct 8 19:47:19.325840 systemd[1]: session-10.scope: Deactivated successfully.
Oct 8 19:47:19.327327 systemd-logind[1533]: Session 10 logged out. Waiting for processes to exit.
Oct 8 19:47:19.328456 systemd-logind[1533]: Removed session 10.
Oct 8 19:47:19.352247 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 39796 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:47:19.353456 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:47:19.357474 systemd-logind[1533]: New session 11 of user core.
Oct 8 19:47:19.371696 systemd[1]: Started session-11.scope - Session 11 of User core.
Oct 8 19:47:19.506650 sshd[4153]: pam_unix(sshd:session): session closed for user core
Oct 8 19:47:19.517822 systemd[1]: Started sshd@11-10.0.0.91:22-10.0.0.1:39800.service - OpenSSH per-connection server daemon (10.0.0.1:39800).
Oct 8 19:47:19.519641 systemd[1]: sshd@10-10.0.0.91:22-10.0.0.1:39796.service: Deactivated successfully.
Oct 8 19:47:19.521141 systemd[1]: session-11.scope: Deactivated successfully.
Oct 8 19:47:19.528071 systemd-logind[1533]: Session 11 logged out. Waiting for processes to exit.
Oct 8 19:47:19.529202 systemd-logind[1533]: Removed session 11.
Oct 8 19:47:19.557375 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 39800 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:47:19.558648 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:47:19.562667 systemd-logind[1533]: New session 12 of user core.
Oct 8 19:47:19.569670 systemd[1]: Started session-12.scope - Session 12 of User core.
Oct 8 19:47:19.678299 sshd[4166]: pam_unix(sshd:session): session closed for user core
Oct 8 19:47:19.681560 systemd[1]: sshd@11-10.0.0.91:22-10.0.0.1:39800.service: Deactivated successfully.
Oct 8 19:47:19.683773 systemd-logind[1533]: Session 12 logged out. Waiting for processes to exit.
Oct 8 19:47:19.683858 systemd[1]: session-12.scope: Deactivated successfully.
Oct 8 19:47:19.686397 systemd-logind[1533]: Removed session 12.
Oct 8 19:47:24.692664 systemd[1]: Started sshd@12-10.0.0.91:22-10.0.0.1:40490.service - OpenSSH per-connection server daemon (10.0.0.1:40490).
Oct 8 19:47:24.720841 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 40490 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:47:24.721880 sshd[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:47:24.726007 systemd-logind[1533]: New session 13 of user core.
Oct 8 19:47:24.745651 systemd[1]: Started session-13.scope - Session 13 of User core.
Oct 8 19:47:24.852543 sshd[4185]: pam_unix(sshd:session): session closed for user core
Oct 8 19:47:24.855198 systemd-logind[1533]: Session 13 logged out. Waiting for processes to exit.
Oct 8 19:47:24.855363 systemd[1]: sshd@12-10.0.0.91:22-10.0.0.1:40490.service: Deactivated successfully.
Oct 8 19:47:24.857976 systemd[1]: session-13.scope: Deactivated successfully.
Oct 8 19:47:24.858874 systemd-logind[1533]: Removed session 13.
Oct 8 19:47:29.864693 systemd[1]: Started sshd@13-10.0.0.91:22-10.0.0.1:40498.service - OpenSSH per-connection server daemon (10.0.0.1:40498).
Oct 8 19:47:29.893113 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 40498 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:47:29.894363 sshd[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:47:29.898208 systemd-logind[1533]: New session 14 of user core.
Oct 8 19:47:29.902659 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 8 19:47:30.016537 sshd[4200]: pam_unix(sshd:session): session closed for user core
Oct 8 19:47:30.028660 systemd[1]: Started sshd@14-10.0.0.91:22-10.0.0.1:40508.service - OpenSSH per-connection server daemon (10.0.0.1:40508).
Oct 8 19:47:30.029048 systemd[1]: sshd@13-10.0.0.91:22-10.0.0.1:40498.service: Deactivated successfully.
Oct 8 19:47:30.032218 systemd-logind[1533]: Session 14 logged out. Waiting for processes to exit.
Oct 8 19:47:30.032370 systemd[1]: session-14.scope: Deactivated successfully.
Oct 8 19:47:30.033619 systemd-logind[1533]: Removed session 14.
Oct 8 19:47:30.060902 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 40508 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:47:30.062123 sshd[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:47:30.067500 systemd-logind[1533]: New session 15 of user core.
Oct 8 19:47:30.073658 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 8 19:47:30.292598 sshd[4212]: pam_unix(sshd:session): session closed for user core
Oct 8 19:47:30.304712 systemd[1]: Started sshd@15-10.0.0.91:22-10.0.0.1:40518.service - OpenSSH per-connection server daemon (10.0.0.1:40518).
Oct 8 19:47:30.305108 systemd[1]: sshd@14-10.0.0.91:22-10.0.0.1:40508.service: Deactivated successfully.
Oct 8 19:47:30.307124 systemd-logind[1533]: Session 15 logged out. Waiting for processes to exit.
Oct 8 19:47:30.307753 systemd[1]: session-15.scope: Deactivated successfully.
Oct 8 19:47:30.309802 systemd-logind[1533]: Removed session 15.
Oct 8 19:47:30.333030 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 40518 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:47:30.334303 sshd[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:47:30.338121 systemd-logind[1533]: New session 16 of user core.
Oct 8 19:47:30.345734 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 8 19:47:31.576660 sshd[4225]: pam_unix(sshd:session): session closed for user core
Oct 8 19:47:31.588843 systemd[1]: Started sshd@16-10.0.0.91:22-10.0.0.1:40524.service - OpenSSH per-connection server daemon (10.0.0.1:40524).
Oct 8 19:47:31.589931 systemd[1]: sshd@15-10.0.0.91:22-10.0.0.1:40518.service: Deactivated successfully.
Oct 8 19:47:31.593610 systemd[1]: session-16.scope: Deactivated successfully.
Oct 8 19:47:31.595290 systemd-logind[1533]: Session 16 logged out. Waiting for processes to exit.
Oct 8 19:47:31.599535 systemd-logind[1533]: Removed session 16.
Oct 8 19:47:31.632964 sshd[4248]: Accepted publickey for core from 10.0.0.1 port 40524 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:47:31.634394 sshd[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:47:31.639337 systemd-logind[1533]: New session 17 of user core.
Oct 8 19:47:31.648704 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 8 19:47:31.865083 sshd[4248]: pam_unix(sshd:session): session closed for user core
Oct 8 19:47:31.871722 systemd[1]: Started sshd@17-10.0.0.91:22-10.0.0.1:40532.service - OpenSSH per-connection server daemon (10.0.0.1:40532).
Oct 8 19:47:31.872117 systemd[1]: sshd@16-10.0.0.91:22-10.0.0.1:40524.service: Deactivated successfully.
Oct 8 19:47:31.875098 systemd[1]: session-17.scope: Deactivated successfully.
Oct 8 19:47:31.875501 systemd-logind[1533]: Session 17 logged out. Waiting for processes to exit.
Oct 8 19:47:31.877931 systemd-logind[1533]: Removed session 17.
Oct 8 19:47:31.902712 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 40532 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:47:31.904075 sshd[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:47:31.909600 systemd-logind[1533]: New session 18 of user core.
Oct 8 19:47:31.925812 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 8 19:47:32.035003 sshd[4262]: pam_unix(sshd:session): session closed for user core
Oct 8 19:47:32.038385 systemd[1]: sshd@17-10.0.0.91:22-10.0.0.1:40532.service: Deactivated successfully.
Oct 8 19:47:32.041309 systemd-logind[1533]: Session 18 logged out. Waiting for processes to exit.
Oct 8 19:47:32.041757 systemd[1]: session-18.scope: Deactivated successfully.
Oct 8 19:47:32.042816 systemd-logind[1533]: Removed session 18.
Oct 8 19:47:37.046858 systemd[1]: Started sshd@18-10.0.0.91:22-10.0.0.1:53498.service - OpenSSH per-connection server daemon (10.0.0.1:53498).
Oct 8 19:47:37.078011 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 53498 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:47:37.079374 sshd[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:47:37.083179 systemd-logind[1533]: New session 19 of user core.
Oct 8 19:47:37.091767 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 8 19:47:37.200829 sshd[4280]: pam_unix(sshd:session): session closed for user core
Oct 8 19:47:37.204524 systemd[1]: sshd@18-10.0.0.91:22-10.0.0.1:53498.service: Deactivated successfully.
Oct 8 19:47:37.206623 systemd[1]: session-19.scope: Deactivated successfully.
Oct 8 19:47:37.206635 systemd-logind[1533]: Session 19 logged out. Waiting for processes to exit.
Oct 8 19:47:37.208300 systemd-logind[1533]: Removed session 19.
Oct 8 19:47:42.219709 systemd[1]: Started sshd@19-10.0.0.91:22-10.0.0.1:53504.service - OpenSSH per-connection server daemon (10.0.0.1:53504).
Oct 8 19:47:42.254963 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 53504 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:47:42.256829 sshd[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:47:42.262140 systemd-logind[1533]: New session 20 of user core.
Oct 8 19:47:42.273701 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 8 19:47:42.387801 sshd[4300]: pam_unix(sshd:session): session closed for user core
Oct 8 19:47:42.391024 systemd[1]: sshd@19-10.0.0.91:22-10.0.0.1:53504.service: Deactivated successfully.
Oct 8 19:47:42.394645 systemd-logind[1533]: Session 20 logged out. Waiting for processes to exit.
Oct 8 19:47:42.394742 systemd[1]: session-20.scope: Deactivated successfully.
Oct 8 19:47:42.396430 systemd-logind[1533]: Removed session 20.
Oct 8 19:47:47.401715 systemd[1]: Started sshd@20-10.0.0.91:22-10.0.0.1:48088.service - OpenSSH per-connection server daemon (10.0.0.1:48088).
Oct 8 19:47:47.429743 sshd[4315]: Accepted publickey for core from 10.0.0.1 port 48088 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:47:47.430914 sshd[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:47:47.435142 systemd-logind[1533]: New session 21 of user core.
Oct 8 19:47:47.445709 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 8 19:47:47.548199 sshd[4315]: pam_unix(sshd:session): session closed for user core
Oct 8 19:47:47.551403 systemd[1]: sshd@20-10.0.0.91:22-10.0.0.1:48088.service: Deactivated successfully.
Oct 8 19:47:47.553351 systemd[1]: session-21.scope: Deactivated successfully.
Oct 8 19:47:47.553352 systemd-logind[1533]: Session 21 logged out. Waiting for processes to exit.
Oct 8 19:47:47.554960 systemd-logind[1533]: Removed session 21.
Oct 8 19:47:52.561653 systemd[1]: Started sshd@21-10.0.0.91:22-10.0.0.1:58830.service - OpenSSH per-connection server daemon (10.0.0.1:58830).
Oct 8 19:47:52.589091 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 58830 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:47:52.590237 sshd[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:47:52.593949 systemd-logind[1533]: New session 22 of user core.
Oct 8 19:47:52.600698 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 8 19:47:52.707128 sshd[4330]: pam_unix(sshd:session): session closed for user core
Oct 8 19:47:52.710498 systemd[1]: sshd@21-10.0.0.91:22-10.0.0.1:58830.service: Deactivated successfully.
Oct 8 19:47:52.713118 systemd-logind[1533]: Session 22 logged out. Waiting for processes to exit.
Oct 8 19:47:52.713123 systemd[1]: session-22.scope: Deactivated successfully.
Oct 8 19:47:52.716332 systemd-logind[1533]: Removed session 22.
Oct 8 19:47:54.906450 kubelet[2700]: E1008 19:47:54.906124    2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:47:57.726698 systemd[1]: Started sshd@22-10.0.0.91:22-10.0.0.1:58846.service - OpenSSH per-connection server daemon (10.0.0.1:58846).
Oct 8 19:47:57.755717 sshd[4347]: Accepted publickey for core from 10.0.0.1 port 58846 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:47:57.757086 sshd[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:47:57.761452 systemd-logind[1533]: New session 23 of user core.
Oct 8 19:47:57.772670 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 8 19:47:57.879690 sshd[4347]: pam_unix(sshd:session): session closed for user core
Oct 8 19:47:57.891670 systemd[1]: Started sshd@23-10.0.0.91:22-10.0.0.1:58848.service - OpenSSH per-connection server daemon (10.0.0.1:58848).
Oct 8 19:47:57.892051 systemd[1]: sshd@22-10.0.0.91:22-10.0.0.1:58846.service: Deactivated successfully.
Oct 8 19:47:57.894925 systemd[1]: session-23.scope: Deactivated successfully.
Oct 8 19:47:57.896132 systemd-logind[1533]: Session 23 logged out. Waiting for processes to exit.
Oct 8 19:47:57.896963 systemd-logind[1533]: Removed session 23.
Oct 8 19:47:57.921403 sshd[4359]: Accepted publickey for core from 10.0.0.1 port 58848 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:47:57.922793 sshd[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:47:57.926689 systemd-logind[1533]: New session 24 of user core.
Oct 8 19:47:57.938723 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 8 19:47:59.762362 containerd[1556]: time="2024-10-08T19:47:59.762169100Z" level=info msg="StopContainer for \"7011c013591c57c2ed3d7f7820b4673b51e0d509dae7ee8cf8954b91a3c3d03a\" with timeout 30 (s)"
Oct 8 19:47:59.775495 containerd[1556]: time="2024-10-08T19:47:59.772888651Z" level=info msg="Stop container \"7011c013591c57c2ed3d7f7820b4673b51e0d509dae7ee8cf8954b91a3c3d03a\" with signal terminated"
Oct 8 19:47:59.791869 containerd[1556]: time="2024-10-08T19:47:59.791827054Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 8 19:47:59.797663 containerd[1556]: time="2024-10-08T19:47:59.797466127Z" level=info msg="StopContainer for \"81252cd1c9970add90e4a22ea5ba1b9a6888f2fe44397b364740a4bc522557ad\" with timeout 2 (s)"
Oct 8 19:47:59.797935 containerd[1556]: time="2024-10-08T19:47:59.797911483Z" level=info msg="Stop container \"81252cd1c9970add90e4a22ea5ba1b9a6888f2fe44397b364740a4bc522557ad\" with signal terminated"
Oct 8 19:47:59.803609 systemd-networkd[1237]: lxc_health: Link DOWN
Oct 8 19:47:59.803621 systemd-networkd[1237]: lxc_health: Lost carrier
Oct 8 19:47:59.810481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7011c013591c57c2ed3d7f7820b4673b51e0d509dae7ee8cf8954b91a3c3d03a-rootfs.mount: Deactivated successfully.
Oct 8 19:47:59.813679 containerd[1556]: time="2024-10-08T19:47:59.813605073Z" level=info msg="shim disconnected" id=7011c013591c57c2ed3d7f7820b4673b51e0d509dae7ee8cf8954b91a3c3d03a namespace=k8s.io
Oct 8 19:47:59.813679 containerd[1556]: time="2024-10-08T19:47:59.813668432Z" level=warning msg="cleaning up after shim disconnected" id=7011c013591c57c2ed3d7f7820b4673b51e0d509dae7ee8cf8954b91a3c3d03a namespace=k8s.io
Oct 8 19:47:59.813679 containerd[1556]: time="2024-10-08T19:47:59.813682992Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:47:59.854464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81252cd1c9970add90e4a22ea5ba1b9a6888f2fe44397b364740a4bc522557ad-rootfs.mount: Deactivated successfully.
Oct 8 19:47:59.861403 containerd[1556]: time="2024-10-08T19:47:59.861347476Z" level=info msg="StopContainer for \"7011c013591c57c2ed3d7f7820b4673b51e0d509dae7ee8cf8954b91a3c3d03a\" returns successfully"
Oct 8 19:47:59.861931 containerd[1556]: time="2024-10-08T19:47:59.861656633Z" level=info msg="shim disconnected" id=81252cd1c9970add90e4a22ea5ba1b9a6888f2fe44397b364740a4bc522557ad namespace=k8s.io
Oct 8 19:47:59.861931 containerd[1556]: time="2024-10-08T19:47:59.861700553Z" level=warning msg="cleaning up after shim disconnected" id=81252cd1c9970add90e4a22ea5ba1b9a6888f2fe44397b364740a4bc522557ad namespace=k8s.io
Oct 8 19:47:59.861931 containerd[1556]: time="2024-10-08T19:47:59.861708193Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:47:59.864212 containerd[1556]: time="2024-10-08T19:47:59.864169932Z" level=info msg="StopPodSandbox for \"76a44ed4418f608b821bb5e24cf2c958e30159fe94bd15a0ffeda6f5faa3cccf\""
Oct 8 19:47:59.864292 containerd[1556]: time="2024-10-08T19:47:59.864223772Z" level=info msg="Container to stop \"7011c013591c57c2ed3d7f7820b4673b51e0d509dae7ee8cf8954b91a3c3d03a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:47:59.866746 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-76a44ed4418f608b821bb5e24cf2c958e30159fe94bd15a0ffeda6f5faa3cccf-shm.mount: Deactivated successfully.
Oct 8 19:47:59.886294 containerd[1556]: time="2024-10-08T19:47:59.886183789Z" level=info msg="StopContainer for \"81252cd1c9970add90e4a22ea5ba1b9a6888f2fe44397b364740a4bc522557ad\" returns successfully"
Oct 8 19:47:59.889488 containerd[1556]: time="2024-10-08T19:47:59.888576330Z" level=info msg="StopPodSandbox for \"e0ea7c80682a5942733b4de3bd57fe84e7c885035072dc4d08cb6098b709486f\""
Oct 8 19:47:59.889488 containerd[1556]: time="2024-10-08T19:47:59.888637369Z" level=info msg="Container to stop \"0a0d9e19de79e49414ce61881a0c9f83eb2a24afa12f3bfa47e421024bc86944\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:47:59.889488 containerd[1556]: time="2024-10-08T19:47:59.888683609Z" level=info msg="Container to stop \"81252cd1c9970add90e4a22ea5ba1b9a6888f2fe44397b364740a4bc522557ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:47:59.889488 containerd[1556]: time="2024-10-08T19:47:59.888762448Z" level=info msg="Container to stop \"d92b2d80dc263eb6fb9336216e32333f44d36de023726aa26d2807cbbe1a844b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:47:59.889488 containerd[1556]: time="2024-10-08T19:47:59.888844487Z" level=info msg="Container to stop \"118328d8814831e716a7b17fb0c4bf435d2fd780783c096a1a71f2f260ebdbaa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:47:59.889488 containerd[1556]: time="2024-10-08T19:47:59.888856327Z" level=info msg="Container to stop \"ed9b59ccf0a8d5f52be6bddc26e3d6716ddad9b74640cad6bbb0ada177e705ef\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:47:59.890872 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e0ea7c80682a5942733b4de3bd57fe84e7c885035072dc4d08cb6098b709486f-shm.mount: Deactivated successfully.
Oct 8 19:47:59.899103 containerd[1556]: time="2024-10-08T19:47:59.899056562Z" level=info msg="shim disconnected" id=76a44ed4418f608b821bb5e24cf2c958e30159fe94bd15a0ffeda6f5faa3cccf namespace=k8s.io
Oct 8 19:47:59.900017 containerd[1556]: time="2024-10-08T19:47:59.899775716Z" level=warning msg="cleaning up after shim disconnected" id=76a44ed4418f608b821bb5e24cf2c958e30159fe94bd15a0ffeda6f5faa3cccf namespace=k8s.io
Oct 8 19:47:59.900017 containerd[1556]: time="2024-10-08T19:47:59.899805916Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:47:59.912755 containerd[1556]: time="2024-10-08T19:47:59.912627730Z" level=info msg="TearDown network for sandbox \"76a44ed4418f608b821bb5e24cf2c958e30159fe94bd15a0ffeda6f5faa3cccf\" successfully"
Oct 8 19:47:59.913514 containerd[1556]: time="2024-10-08T19:47:59.912944407Z" level=info msg="StopPodSandbox for \"76a44ed4418f608b821bb5e24cf2c958e30159fe94bd15a0ffeda6f5faa3cccf\" returns successfully"
Oct 8 19:47:59.916788 containerd[1556]: time="2024-10-08T19:47:59.916745135Z" level=info msg="shim disconnected" id=e0ea7c80682a5942733b4de3bd57fe84e7c885035072dc4d08cb6098b709486f namespace=k8s.io
Oct 8 19:47:59.916899 containerd[1556]: time="2024-10-08T19:47:59.916881894Z" level=warning msg="cleaning up after shim disconnected" id=e0ea7c80682a5942733b4de3bd57fe84e7c885035072dc4d08cb6098b709486f namespace=k8s.io
Oct 8 19:47:59.917080 containerd[1556]: time="2024-10-08T19:47:59.916938334Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:47:59.930684 containerd[1556]: time="2024-10-08T19:47:59.930629580Z" level=info msg="TearDown network for sandbox \"e0ea7c80682a5942733b4de3bd57fe84e7c885035072dc4d08cb6098b709486f\" successfully"
Oct 8 19:47:59.930684 containerd[1556]: time="2024-10-08T19:47:59.930668660Z" level=info msg="StopPodSandbox for \"e0ea7c80682a5942733b4de3bd57fe84e7c885035072dc4d08cb6098b709486f\" returns successfully"
Oct 8 19:47:59.982881 kubelet[2700]: I1008 19:47:59.982694    2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-etc-cni-netd\") pod \"8d457cdd-0d35-4777-8396-2ffcad4ca706\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") "
Oct 8 19:47:59.982881 kubelet[2700]: I1008 19:47:59.982740    2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-host-proc-sys-net\") pod \"8d457cdd-0d35-4777-8396-2ffcad4ca706\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") "
Oct 8 19:47:59.982881 kubelet[2700]: I1008 19:47:59.982763    2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-host-proc-sys-kernel\") pod \"8d457cdd-0d35-4777-8396-2ffcad4ca706\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") "
Oct 8 19:47:59.982881 kubelet[2700]: I1008 19:47:59.982791    2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d457cdd-0d35-4777-8396-2ffcad4ca706-cilium-config-path\") pod \"8d457cdd-0d35-4777-8396-2ffcad4ca706\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") "
Oct 8 19:47:59.982881 kubelet[2700]: I1008 19:47:59.982809    2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-hostproc\") pod \"8d457cdd-0d35-4777-8396-2ffcad4ca706\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") "
Oct 8 19:47:59.982881 kubelet[2700]: I1008 19:47:59.982829    2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49a6c1b9-51d3-4efa-9698-3b6b75449d01-cilium-config-path\") pod \"49a6c1b9-51d3-4efa-9698-3b6b75449d01\" (UID: \"49a6c1b9-51d3-4efa-9698-3b6b75449d01\") "
Oct 8 19:47:59.984766 kubelet[2700]: I1008 19:47:59.982846    2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-xtables-lock\") pod \"8d457cdd-0d35-4777-8396-2ffcad4ca706\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") "
Oct 8 19:47:59.984766 kubelet[2700]: I1008 19:47:59.982865    2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8d457cdd-0d35-4777-8396-2ffcad4ca706-hubble-tls\") pod \"8d457cdd-0d35-4777-8396-2ffcad4ca706\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") "
Oct 8 19:47:59.984766 kubelet[2700]: I1008 19:47:59.982883    2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-cilium-cgroup\") pod \"8d457cdd-0d35-4777-8396-2ffcad4ca706\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") "
Oct 8 19:47:59.984766 kubelet[2700]: I1008 19:47:59.982900    2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-lib-modules\") pod \"8d457cdd-0d35-4777-8396-2ffcad4ca706\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") "
Oct 8 19:47:59.984766 kubelet[2700]: I1008 19:47:59.982940    2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-bpf-maps\") pod \"8d457cdd-0d35-4777-8396-2ffcad4ca706\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") "
Oct 8 19:47:59.984766 kubelet[2700]: I1008 19:47:59.982955    2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-cni-path\") pod \"8d457cdd-0d35-4777-8396-2ffcad4ca706\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") "
Oct 8 19:47:59.984962 kubelet[2700]: I1008 19:47:59.982977    2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcdxn\" (UniqueName: \"kubernetes.io/projected/49a6c1b9-51d3-4efa-9698-3b6b75449d01-kube-api-access-qcdxn\") pod \"49a6c1b9-51d3-4efa-9698-3b6b75449d01\" (UID: \"49a6c1b9-51d3-4efa-9698-3b6b75449d01\") "
Oct 8 19:47:59.984962 kubelet[2700]: I1008 19:47:59.982996    2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-cilium-run\") pod \"8d457cdd-0d35-4777-8396-2ffcad4ca706\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") "
Oct 8 19:47:59.984962 kubelet[2700]: I1008 19:47:59.983016    2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwjd8\" (UniqueName: \"kubernetes.io/projected/8d457cdd-0d35-4777-8396-2ffcad4ca706-kube-api-access-lwjd8\") pod \"8d457cdd-0d35-4777-8396-2ffcad4ca706\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") "
Oct 8 19:47:59.984962 kubelet[2700]: I1008 19:47:59.983037    2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8d457cdd-0d35-4777-8396-2ffcad4ca706-clustermesh-secrets\") pod \"8d457cdd-0d35-4777-8396-2ffcad4ca706\" (UID: \"8d457cdd-0d35-4777-8396-2ffcad4ca706\") "
Oct 8 19:47:59.986116 kubelet[2700]: I1008 19:47:59.986086    2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8d457cdd-0d35-4777-8396-2ffcad4ca706" (UID: "8d457cdd-0d35-4777-8396-2ffcad4ca706"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:47:59.986265 kubelet[2700]: I1008 19:47:59.986092    2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8d457cdd-0d35-4777-8396-2ffcad4ca706" (UID: "8d457cdd-0d35-4777-8396-2ffcad4ca706"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:47:59.986315 kubelet[2700]: I1008 19:47:59.986124    2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8d457cdd-0d35-4777-8396-2ffcad4ca706" (UID: "8d457cdd-0d35-4777-8396-2ffcad4ca706"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:47:59.986393 kubelet[2700]: I1008 19:47:59.986379    2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-cni-path" (OuterVolumeSpecName: "cni-path") pod "8d457cdd-0d35-4777-8396-2ffcad4ca706" (UID: "8d457cdd-0d35-4777-8396-2ffcad4ca706"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:47:59.986517 kubelet[2700]: I1008 19:47:59.986504    2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8d457cdd-0d35-4777-8396-2ffcad4ca706" (UID: "8d457cdd-0d35-4777-8396-2ffcad4ca706"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:47:59.986586 kubelet[2700]: I1008 19:47:59.986575    2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8d457cdd-0d35-4777-8396-2ffcad4ca706" (UID: "8d457cdd-0d35-4777-8396-2ffcad4ca706"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:47:59.986672 kubelet[2700]: I1008 19:47:59.986659    2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8d457cdd-0d35-4777-8396-2ffcad4ca706" (UID: "8d457cdd-0d35-4777-8396-2ffcad4ca706"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:47:59.987094 kubelet[2700]: I1008 19:47:59.987019    2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8d457cdd-0d35-4777-8396-2ffcad4ca706" (UID: "8d457cdd-0d35-4777-8396-2ffcad4ca706"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:47:59.988724 kubelet[2700]: I1008 19:47:59.988691    2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d457cdd-0d35-4777-8396-2ffcad4ca706-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8d457cdd-0d35-4777-8396-2ffcad4ca706" (UID: "8d457cdd-0d35-4777-8396-2ffcad4ca706"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 8 19:47:59.988801 kubelet[2700]: I1008 19:47:59.988747    2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8d457cdd-0d35-4777-8396-2ffcad4ca706" (UID: "8d457cdd-0d35-4777-8396-2ffcad4ca706"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:47:59.989454 kubelet[2700]: I1008 19:47:59.989070    2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49a6c1b9-51d3-4efa-9698-3b6b75449d01-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "49a6c1b9-51d3-4efa-9698-3b6b75449d01" (UID: "49a6c1b9-51d3-4efa-9698-3b6b75449d01"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 8 19:47:59.989454 kubelet[2700]: I1008 19:47:59.989115    2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-hostproc" (OuterVolumeSpecName: "hostproc") pod "8d457cdd-0d35-4777-8396-2ffcad4ca706" (UID: "8d457cdd-0d35-4777-8396-2ffcad4ca706"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:47:59.991506 kubelet[2700]: I1008 19:47:59.991470    2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d457cdd-0d35-4777-8396-2ffcad4ca706-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8d457cdd-0d35-4777-8396-2ffcad4ca706" (UID: "8d457cdd-0d35-4777-8396-2ffcad4ca706"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 8 19:47:59.992841 kubelet[2700]: I1008 19:47:59.992802    2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d457cdd-0d35-4777-8396-2ffcad4ca706-kube-api-access-lwjd8" (OuterVolumeSpecName: "kube-api-access-lwjd8") pod "8d457cdd-0d35-4777-8396-2ffcad4ca706" (UID: "8d457cdd-0d35-4777-8396-2ffcad4ca706"). InnerVolumeSpecName "kube-api-access-lwjd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 8 19:47:59.993460 kubelet[2700]: I1008 19:47:59.993406    2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49a6c1b9-51d3-4efa-9698-3b6b75449d01-kube-api-access-qcdxn" (OuterVolumeSpecName: "kube-api-access-qcdxn") pod "49a6c1b9-51d3-4efa-9698-3b6b75449d01" (UID: "49a6c1b9-51d3-4efa-9698-3b6b75449d01"). InnerVolumeSpecName "kube-api-access-qcdxn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 8 19:47:59.994092 kubelet[2700]: I1008 19:47:59.994047    2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d457cdd-0d35-4777-8396-2ffcad4ca706-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8d457cdd-0d35-4777-8396-2ffcad4ca706" (UID: "8d457cdd-0d35-4777-8396-2ffcad4ca706"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 8 19:48:00.083518 kubelet[2700]: I1008 19:48:00.083471    2700 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Oct 8 19:48:00.083518 kubelet[2700]: I1008 19:48:00.083511    2700 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Oct 8 19:48:00.083518 kubelet[2700]: I1008 19:48:00.083524    2700 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d457cdd-0d35-4777-8396-2ffcad4ca706-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Oct 8 19:48:00.083725 kubelet[2700]: I1008 19:48:00.083536    2700 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Oct 8 19:48:00.083725 kubelet[2700]: I1008 19:48:00.083546    2700 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-hostproc\") on node \"localhost\" DevicePath \"\""
Oct 8 19:48:00.083725 kubelet[2700]: I1008 19:48:00.083557    2700 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-xtables-lock\") on node \"localhost\" DevicePath \"\""
Oct 8 19:48:00.083725 kubelet[2700]: I1008 19:48:00.083566    2700 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8d457cdd-0d35-4777-8396-2ffcad4ca706-hubble-tls\") on node \"localhost\" DevicePath \"\""
Oct 8 19:48:00.083725 kubelet[2700]: I1008 19:48:00.083576    2700 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Oct 8 19:48:00.083725 kubelet[2700]: I1008 19:48:00.083586    2700 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49a6c1b9-51d3-4efa-9698-3b6b75449d01-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Oct 8 19:48:00.083725 kubelet[2700]: I1008 19:48:00.083598    2700 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-lib-modules\") on node \"localhost\" DevicePath \"\""
Oct 8 19:48:00.083725 kubelet[2700]: I1008 19:48:00.083607    2700 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-bpf-maps\") on node \"localhost\" DevicePath \"\""
Oct 8 19:48:00.083917 kubelet[2700]: I1008 19:48:00.083624    2700 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-cni-path\") on node \"localhost\" DevicePath \"\""
Oct 8 19:48:00.083917 kubelet[2700]: I1008 19:48:00.083633    2700 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8d457cdd-0d35-4777-8396-2ffcad4ca706-cilium-run\") on node \"localhost\" DevicePath \"\""
Oct 8 19:48:00.083917 kubelet[2700]: I1008 19:48:00.083643    2700 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lwjd8\" (UniqueName: \"kubernetes.io/projected/8d457cdd-0d35-4777-8396-2ffcad4ca706-kube-api-access-lwjd8\") on node \"localhost\" DevicePath \"\""
Oct 8 19:48:00.083917 kubelet[2700]: I1008 19:48:00.083654    2700 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qcdxn\" (UniqueName: \"kubernetes.io/projected/49a6c1b9-51d3-4efa-9698-3b6b75449d01-kube-api-access-qcdxn\") on node \"localhost\" DevicePath \"\""
Oct 8 19:48:00.083917 kubelet[2700]: I1008 19:48:00.083663    2700 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8d457cdd-0d35-4777-8396-2ffcad4ca706-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Oct 8 19:48:00.142572 kubelet[2700]: I1008 19:48:00.142441    2700 scope.go:117] "RemoveContainer" containerID="81252cd1c9970add90e4a22ea5ba1b9a6888f2fe44397b364740a4bc522557ad"
Oct 8 19:48:00.152443 containerd[1556]: time="2024-10-08T19:48:00.152056548Z" level=info msg="RemoveContainer for \"81252cd1c9970add90e4a22ea5ba1b9a6888f2fe44397b364740a4bc522557ad\""
Oct 8 19:48:00.159868 containerd[1556]: time="2024-10-08T19:48:00.159730130Z" level=info msg="RemoveContainer for \"81252cd1c9970add90e4a22ea5ba1b9a6888f2fe44397b364740a4bc522557ad\" returns successfully"
Oct 8 19:48:00.161203 kubelet[2700]: I1008 19:48:00.161080    2700 scope.go:117] "RemoveContainer" containerID="ed9b59ccf0a8d5f52be6bddc26e3d6716ddad9b74640cad6bbb0ada177e705ef"
Oct 8 19:48:00.162433 containerd[1556]: time="2024-10-08T19:48:00.162327111Z" level=info msg="RemoveContainer for \"ed9b59ccf0a8d5f52be6bddc26e3d6716ddad9b74640cad6bbb0ada177e705ef\""
Oct 8 19:48:00.166635 containerd[1556]: time="2024-10-08T19:48:00.166212042Z" level=info msg="RemoveContainer for \"ed9b59ccf0a8d5f52be6bddc26e3d6716ddad9b74640cad6bbb0ada177e705ef\" returns successfully"
Oct 8 19:48:00.166773 kubelet[2700]: I1008 19:48:00.166451    2700 scope.go:117] "RemoveContainer" containerID="118328d8814831e716a7b17fb0c4bf435d2fd780783c096a1a71f2f260ebdbaa"
Oct 8 19:48:00.168813 containerd[1556]: time="2024-10-08T19:48:00.168751063Z" level=info msg="RemoveContainer for \"118328d8814831e716a7b17fb0c4bf435d2fd780783c096a1a71f2f260ebdbaa\""
Oct 8 19:48:00.173150 containerd[1556]: time="2024-10-08T19:48:00.172299957Z" level=info
msg="RemoveContainer for \"118328d8814831e716a7b17fb0c4bf435d2fd780783c096a1a71f2f260ebdbaa\" returns successfully" Oct 8 19:48:00.173494 kubelet[2700]: I1008 19:48:00.173470 2700 scope.go:117] "RemoveContainer" containerID="d92b2d80dc263eb6fb9336216e32333f44d36de023726aa26d2807cbbe1a844b" Oct 8 19:48:00.174961 containerd[1556]: time="2024-10-08T19:48:00.174925177Z" level=info msg="RemoveContainer for \"d92b2d80dc263eb6fb9336216e32333f44d36de023726aa26d2807cbbe1a844b\"" Oct 8 19:48:00.178407 containerd[1556]: time="2024-10-08T19:48:00.178357071Z" level=info msg="RemoveContainer for \"d92b2d80dc263eb6fb9336216e32333f44d36de023726aa26d2807cbbe1a844b\" returns successfully" Oct 8 19:48:00.178817 kubelet[2700]: I1008 19:48:00.178731 2700 scope.go:117] "RemoveContainer" containerID="0a0d9e19de79e49414ce61881a0c9f83eb2a24afa12f3bfa47e421024bc86944" Oct 8 19:48:00.179886 containerd[1556]: time="2024-10-08T19:48:00.179858580Z" level=info msg="RemoveContainer for \"0a0d9e19de79e49414ce61881a0c9f83eb2a24afa12f3bfa47e421024bc86944\"" Oct 8 19:48:00.182590 containerd[1556]: time="2024-10-08T19:48:00.182369201Z" level=info msg="RemoveContainer for \"0a0d9e19de79e49414ce61881a0c9f83eb2a24afa12f3bfa47e421024bc86944\" returns successfully" Oct 8 19:48:00.182690 kubelet[2700]: I1008 19:48:00.182614 2700 scope.go:117] "RemoveContainer" containerID="81252cd1c9970add90e4a22ea5ba1b9a6888f2fe44397b364740a4bc522557ad" Oct 8 19:48:00.184444 containerd[1556]: time="2024-10-08T19:48:00.182879798Z" level=error msg="ContainerStatus for \"81252cd1c9970add90e4a22ea5ba1b9a6888f2fe44397b364740a4bc522557ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"81252cd1c9970add90e4a22ea5ba1b9a6888f2fe44397b364740a4bc522557ad\": not found" Oct 8 19:48:00.184554 kubelet[2700]: E1008 19:48:00.184399 2700 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"81252cd1c9970add90e4a22ea5ba1b9a6888f2fe44397b364740a4bc522557ad\": not found" containerID="81252cd1c9970add90e4a22ea5ba1b9a6888f2fe44397b364740a4bc522557ad" Oct 8 19:48:00.188291 kubelet[2700]: I1008 19:48:00.188243 2700 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"81252cd1c9970add90e4a22ea5ba1b9a6888f2fe44397b364740a4bc522557ad"} err="failed to get container status \"81252cd1c9970add90e4a22ea5ba1b9a6888f2fe44397b364740a4bc522557ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"81252cd1c9970add90e4a22ea5ba1b9a6888f2fe44397b364740a4bc522557ad\": not found" Oct 8 19:48:00.188291 kubelet[2700]: I1008 19:48:00.188295 2700 scope.go:117] "RemoveContainer" containerID="ed9b59ccf0a8d5f52be6bddc26e3d6716ddad9b74640cad6bbb0ada177e705ef" Oct 8 19:48:00.189321 containerd[1556]: time="2024-10-08T19:48:00.188641715Z" level=error msg="ContainerStatus for \"ed9b59ccf0a8d5f52be6bddc26e3d6716ddad9b74640cad6bbb0ada177e705ef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ed9b59ccf0a8d5f52be6bddc26e3d6716ddad9b74640cad6bbb0ada177e705ef\": not found" Oct 8 19:48:00.189321 containerd[1556]: time="2024-10-08T19:48:00.188959232Z" level=error msg="ContainerStatus for \"118328d8814831e716a7b17fb0c4bf435d2fd780783c096a1a71f2f260ebdbaa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"118328d8814831e716a7b17fb0c4bf435d2fd780783c096a1a71f2f260ebdbaa\": not found" Oct 8 19:48:00.189463 kubelet[2700]: E1008 19:48:00.188798 2700 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ed9b59ccf0a8d5f52be6bddc26e3d6716ddad9b74640cad6bbb0ada177e705ef\": not found" containerID="ed9b59ccf0a8d5f52be6bddc26e3d6716ddad9b74640cad6bbb0ada177e705ef" Oct 8 19:48:00.189463 kubelet[2700]: I1008 19:48:00.188836 2700 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ed9b59ccf0a8d5f52be6bddc26e3d6716ddad9b74640cad6bbb0ada177e705ef"} err="failed to get container status \"ed9b59ccf0a8d5f52be6bddc26e3d6716ddad9b74640cad6bbb0ada177e705ef\": rpc error: code = NotFound desc = an error occurred when try to find container \"ed9b59ccf0a8d5f52be6bddc26e3d6716ddad9b74640cad6bbb0ada177e705ef\": not found" Oct 8 19:48:00.189463 kubelet[2700]: I1008 19:48:00.188846 2700 scope.go:117] "RemoveContainer" containerID="118328d8814831e716a7b17fb0c4bf435d2fd780783c096a1a71f2f260ebdbaa" Oct 8 19:48:00.189463 kubelet[2700]: E1008 19:48:00.189078 2700 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"118328d8814831e716a7b17fb0c4bf435d2fd780783c096a1a71f2f260ebdbaa\": not found" containerID="118328d8814831e716a7b17fb0c4bf435d2fd780783c096a1a71f2f260ebdbaa" Oct 8 19:48:00.189463 kubelet[2700]: I1008 19:48:00.189113 2700 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"118328d8814831e716a7b17fb0c4bf435d2fd780783c096a1a71f2f260ebdbaa"} err="failed to get container status \"118328d8814831e716a7b17fb0c4bf435d2fd780783c096a1a71f2f260ebdbaa\": rpc error: code = NotFound desc = an error occurred when try to find container \"118328d8814831e716a7b17fb0c4bf435d2fd780783c096a1a71f2f260ebdbaa\": not found" Oct 8 19:48:00.189463 kubelet[2700]: I1008 19:48:00.189124 2700 scope.go:117] "RemoveContainer" containerID="d92b2d80dc263eb6fb9336216e32333f44d36de023726aa26d2807cbbe1a844b" Oct 8 19:48:00.189606 containerd[1556]: time="2024-10-08T19:48:00.189328790Z" level=error msg="ContainerStatus for \"d92b2d80dc263eb6fb9336216e32333f44d36de023726aa26d2807cbbe1a844b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"d92b2d80dc263eb6fb9336216e32333f44d36de023726aa26d2807cbbe1a844b\": not found" Oct 8 19:48:00.189639 kubelet[2700]: E1008 19:48:00.189448 2700 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d92b2d80dc263eb6fb9336216e32333f44d36de023726aa26d2807cbbe1a844b\": not found" containerID="d92b2d80dc263eb6fb9336216e32333f44d36de023726aa26d2807cbbe1a844b" Oct 8 19:48:00.189639 kubelet[2700]: I1008 19:48:00.189491 2700 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d92b2d80dc263eb6fb9336216e32333f44d36de023726aa26d2807cbbe1a844b"} err="failed to get container status \"d92b2d80dc263eb6fb9336216e32333f44d36de023726aa26d2807cbbe1a844b\": rpc error: code = NotFound desc = an error occurred when try to find container \"d92b2d80dc263eb6fb9336216e32333f44d36de023726aa26d2807cbbe1a844b\": not found" Oct 8 19:48:00.189639 kubelet[2700]: I1008 19:48:00.189503 2700 scope.go:117] "RemoveContainer" containerID="0a0d9e19de79e49414ce61881a0c9f83eb2a24afa12f3bfa47e421024bc86944" Oct 8 19:48:00.189700 containerd[1556]: time="2024-10-08T19:48:00.189624067Z" level=error msg="ContainerStatus for \"0a0d9e19de79e49414ce61881a0c9f83eb2a24afa12f3bfa47e421024bc86944\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a0d9e19de79e49414ce61881a0c9f83eb2a24afa12f3bfa47e421024bc86944\": not found" Oct 8 19:48:00.189742 kubelet[2700]: E1008 19:48:00.189723 2700 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a0d9e19de79e49414ce61881a0c9f83eb2a24afa12f3bfa47e421024bc86944\": not found" containerID="0a0d9e19de79e49414ce61881a0c9f83eb2a24afa12f3bfa47e421024bc86944" Oct 8 19:48:00.189769 kubelet[2700]: I1008 19:48:00.189748 2700 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"0a0d9e19de79e49414ce61881a0c9f83eb2a24afa12f3bfa47e421024bc86944"} err="failed to get container status \"0a0d9e19de79e49414ce61881a0c9f83eb2a24afa12f3bfa47e421024bc86944\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a0d9e19de79e49414ce61881a0c9f83eb2a24afa12f3bfa47e421024bc86944\": not found" Oct 8 19:48:00.189769 kubelet[2700]: I1008 19:48:00.189762 2700 scope.go:117] "RemoveContainer" containerID="7011c013591c57c2ed3d7f7820b4673b51e0d509dae7ee8cf8954b91a3c3d03a" Oct 8 19:48:00.190740 containerd[1556]: time="2024-10-08T19:48:00.190705219Z" level=info msg="RemoveContainer for \"7011c013591c57c2ed3d7f7820b4673b51e0d509dae7ee8cf8954b91a3c3d03a\"" Oct 8 19:48:00.198878 containerd[1556]: time="2024-10-08T19:48:00.198827559Z" level=info msg="RemoveContainer for \"7011c013591c57c2ed3d7f7820b4673b51e0d509dae7ee8cf8954b91a3c3d03a\" returns successfully" Oct 8 19:48:00.199143 kubelet[2700]: I1008 19:48:00.199113 2700 scope.go:117] "RemoveContainer" containerID="7011c013591c57c2ed3d7f7820b4673b51e0d509dae7ee8cf8954b91a3c3d03a" Oct 8 19:48:00.199489 containerd[1556]: time="2024-10-08T19:48:00.199449034Z" level=error msg="ContainerStatus for \"7011c013591c57c2ed3d7f7820b4673b51e0d509dae7ee8cf8954b91a3c3d03a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7011c013591c57c2ed3d7f7820b4673b51e0d509dae7ee8cf8954b91a3c3d03a\": not found" Oct 8 19:48:00.199640 kubelet[2700]: E1008 19:48:00.199615 2700 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7011c013591c57c2ed3d7f7820b4673b51e0d509dae7ee8cf8954b91a3c3d03a\": not found" containerID="7011c013591c57c2ed3d7f7820b4673b51e0d509dae7ee8cf8954b91a3c3d03a" Oct 8 19:48:00.199678 kubelet[2700]: I1008 19:48:00.199659 2700 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"7011c013591c57c2ed3d7f7820b4673b51e0d509dae7ee8cf8954b91a3c3d03a"} err="failed to get container status \"7011c013591c57c2ed3d7f7820b4673b51e0d509dae7ee8cf8954b91a3c3d03a\": rpc error: code = NotFound desc = an error occurred when try to find container \"7011c013591c57c2ed3d7f7820b4673b51e0d509dae7ee8cf8954b91a3c3d03a\": not found" Oct 8 19:48:00.779454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76a44ed4418f608b821bb5e24cf2c958e30159fe94bd15a0ffeda6f5faa3cccf-rootfs.mount: Deactivated successfully. Oct 8 19:48:00.779618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0ea7c80682a5942733b4de3bd57fe84e7c885035072dc4d08cb6098b709486f-rootfs.mount: Deactivated successfully. Oct 8 19:48:00.779711 systemd[1]: var-lib-kubelet-pods-49a6c1b9\x2d51d3\x2d4efa\x2d9698\x2d3b6b75449d01-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqcdxn.mount: Deactivated successfully. Oct 8 19:48:00.779801 systemd[1]: var-lib-kubelet-pods-8d457cdd\x2d0d35\x2d4777\x2d8396\x2d2ffcad4ca706-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlwjd8.mount: Deactivated successfully. Oct 8 19:48:00.779891 systemd[1]: var-lib-kubelet-pods-8d457cdd\x2d0d35\x2d4777\x2d8396\x2d2ffcad4ca706-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 8 19:48:00.779977 systemd[1]: var-lib-kubelet-pods-8d457cdd\x2d0d35\x2d4777\x2d8396\x2d2ffcad4ca706-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Oct 8 19:48:00.910511 kubelet[2700]: I1008 19:48:00.910461 2700 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="49a6c1b9-51d3-4efa-9698-3b6b75449d01" path="/var/lib/kubelet/pods/49a6c1b9-51d3-4efa-9698-3b6b75449d01/volumes" Oct 8 19:48:00.910972 kubelet[2700]: I1008 19:48:00.910945 2700 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8d457cdd-0d35-4777-8396-2ffcad4ca706" path="/var/lib/kubelet/pods/8d457cdd-0d35-4777-8396-2ffcad4ca706/volumes" Oct 8 19:48:01.685571 sshd[4359]: pam_unix(sshd:session): session closed for user core Oct 8 19:48:01.694745 systemd[1]: Started sshd@24-10.0.0.91:22-10.0.0.1:58850.service - OpenSSH per-connection server daemon (10.0.0.1:58850). Oct 8 19:48:01.695169 systemd[1]: sshd@23-10.0.0.91:22-10.0.0.1:58848.service: Deactivated successfully. Oct 8 19:48:01.698203 systemd[1]: session-24.scope: Deactivated successfully. Oct 8 19:48:01.698508 systemd-logind[1533]: Session 24 logged out. Waiting for processes to exit. Oct 8 19:48:01.699818 systemd-logind[1533]: Removed session 24. Oct 8 19:48:01.724774 sshd[4527]: Accepted publickey for core from 10.0.0.1 port 58850 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:48:01.726105 sshd[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:48:01.732423 systemd-logind[1533]: New session 25 of user core. Oct 8 19:48:01.742728 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 8 19:48:02.582982 sshd[4527]: pam_unix(sshd:session): session closed for user core Oct 8 19:48:02.595029 systemd[1]: Started sshd@25-10.0.0.91:22-10.0.0.1:44706.service - OpenSSH per-connection server daemon (10.0.0.1:44706). 
Oct 8 19:48:02.595234 kubelet[2700]: I1008 19:48:02.595064 2700 topology_manager.go:215] "Topology Admit Handler" podUID="8aaa3c5c-798b-41d6-81c5-3ffe9ec09670" podNamespace="kube-system" podName="cilium-z7q52" Oct 8 19:48:02.595234 kubelet[2700]: E1008 19:48:02.595115 2700 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d457cdd-0d35-4777-8396-2ffcad4ca706" containerName="cilium-agent" Oct 8 19:48:02.595234 kubelet[2700]: E1008 19:48:02.595125 2700 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d457cdd-0d35-4777-8396-2ffcad4ca706" containerName="mount-cgroup" Oct 8 19:48:02.595234 kubelet[2700]: E1008 19:48:02.595132 2700 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d457cdd-0d35-4777-8396-2ffcad4ca706" containerName="apply-sysctl-overwrites" Oct 8 19:48:02.595234 kubelet[2700]: E1008 19:48:02.595138 2700 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="49a6c1b9-51d3-4efa-9698-3b6b75449d01" containerName="cilium-operator" Oct 8 19:48:02.595234 kubelet[2700]: E1008 19:48:02.595144 2700 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d457cdd-0d35-4777-8396-2ffcad4ca706" containerName="mount-bpf-fs" Oct 8 19:48:02.595234 kubelet[2700]: E1008 19:48:02.595151 2700 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d457cdd-0d35-4777-8396-2ffcad4ca706" containerName="clean-cilium-state" Oct 8 19:48:02.597174 systemd[1]: sshd@24-10.0.0.91:22-10.0.0.1:58850.service: Deactivated successfully. Oct 8 19:48:02.604855 kubelet[2700]: I1008 19:48:02.604783 2700 memory_manager.go:354] "RemoveStaleState removing state" podUID="49a6c1b9-51d3-4efa-9698-3b6b75449d01" containerName="cilium-operator" Oct 8 19:48:02.604855 kubelet[2700]: I1008 19:48:02.604825 2700 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d457cdd-0d35-4777-8396-2ffcad4ca706" containerName="cilium-agent" Oct 8 19:48:02.607698 systemd[1]: session-25.scope: Deactivated successfully. 
Oct 8 19:48:02.616857 systemd-logind[1533]: Session 25 logged out. Waiting for processes to exit. Oct 8 19:48:02.625482 systemd-logind[1533]: Removed session 25. Oct 8 19:48:02.657071 sshd[4541]: Accepted publickey for core from 10.0.0.1 port 44706 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:48:02.658442 sshd[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:48:02.662926 systemd-logind[1533]: New session 26 of user core. Oct 8 19:48:02.669699 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 8 19:48:02.698588 kubelet[2700]: I1008 19:48:02.698543 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8aaa3c5c-798b-41d6-81c5-3ffe9ec09670-cilium-config-path\") pod \"cilium-z7q52\" (UID: \"8aaa3c5c-798b-41d6-81c5-3ffe9ec09670\") " pod="kube-system/cilium-z7q52" Oct 8 19:48:02.698588 kubelet[2700]: I1008 19:48:02.698599 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8aaa3c5c-798b-41d6-81c5-3ffe9ec09670-hubble-tls\") pod \"cilium-z7q52\" (UID: \"8aaa3c5c-798b-41d6-81c5-3ffe9ec09670\") " pod="kube-system/cilium-z7q52" Oct 8 19:48:02.698752 kubelet[2700]: I1008 19:48:02.698675 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8aaa3c5c-798b-41d6-81c5-3ffe9ec09670-cilium-cgroup\") pod \"cilium-z7q52\" (UID: \"8aaa3c5c-798b-41d6-81c5-3ffe9ec09670\") " pod="kube-system/cilium-z7q52" Oct 8 19:48:02.698752 kubelet[2700]: I1008 19:48:02.698723 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8aaa3c5c-798b-41d6-81c5-3ffe9ec09670-cni-path\") pod \"cilium-z7q52\" (UID: 
\"8aaa3c5c-798b-41d6-81c5-3ffe9ec09670\") " pod="kube-system/cilium-z7q52" Oct 8 19:48:02.698797 kubelet[2700]: I1008 19:48:02.698758 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfjc9\" (UniqueName: \"kubernetes.io/projected/8aaa3c5c-798b-41d6-81c5-3ffe9ec09670-kube-api-access-nfjc9\") pod \"cilium-z7q52\" (UID: \"8aaa3c5c-798b-41d6-81c5-3ffe9ec09670\") " pod="kube-system/cilium-z7q52" Oct 8 19:48:02.698818 kubelet[2700]: I1008 19:48:02.698797 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8aaa3c5c-798b-41d6-81c5-3ffe9ec09670-bpf-maps\") pod \"cilium-z7q52\" (UID: \"8aaa3c5c-798b-41d6-81c5-3ffe9ec09670\") " pod="kube-system/cilium-z7q52" Oct 8 19:48:02.698818 kubelet[2700]: I1008 19:48:02.698816 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8aaa3c5c-798b-41d6-81c5-3ffe9ec09670-xtables-lock\") pod \"cilium-z7q52\" (UID: \"8aaa3c5c-798b-41d6-81c5-3ffe9ec09670\") " pod="kube-system/cilium-z7q52" Oct 8 19:48:02.698916 kubelet[2700]: I1008 19:48:02.698836 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8aaa3c5c-798b-41d6-81c5-3ffe9ec09670-cilium-ipsec-secrets\") pod \"cilium-z7q52\" (UID: \"8aaa3c5c-798b-41d6-81c5-3ffe9ec09670\") " pod="kube-system/cilium-z7q52" Oct 8 19:48:02.698916 kubelet[2700]: I1008 19:48:02.698856 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8aaa3c5c-798b-41d6-81c5-3ffe9ec09670-host-proc-sys-net\") pod \"cilium-z7q52\" (UID: \"8aaa3c5c-798b-41d6-81c5-3ffe9ec09670\") " pod="kube-system/cilium-z7q52" Oct 8 19:48:02.698916 kubelet[2700]: 
I1008 19:48:02.698878 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8aaa3c5c-798b-41d6-81c5-3ffe9ec09670-cilium-run\") pod \"cilium-z7q52\" (UID: \"8aaa3c5c-798b-41d6-81c5-3ffe9ec09670\") " pod="kube-system/cilium-z7q52" Oct 8 19:48:02.698916 kubelet[2700]: I1008 19:48:02.698897 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8aaa3c5c-798b-41d6-81c5-3ffe9ec09670-lib-modules\") pod \"cilium-z7q52\" (UID: \"8aaa3c5c-798b-41d6-81c5-3ffe9ec09670\") " pod="kube-system/cilium-z7q52" Oct 8 19:48:02.698916 kubelet[2700]: I1008 19:48:02.698915 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8aaa3c5c-798b-41d6-81c5-3ffe9ec09670-clustermesh-secrets\") pod \"cilium-z7q52\" (UID: \"8aaa3c5c-798b-41d6-81c5-3ffe9ec09670\") " pod="kube-system/cilium-z7q52" Oct 8 19:48:02.699014 kubelet[2700]: I1008 19:48:02.698935 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8aaa3c5c-798b-41d6-81c5-3ffe9ec09670-host-proc-sys-kernel\") pod \"cilium-z7q52\" (UID: \"8aaa3c5c-798b-41d6-81c5-3ffe9ec09670\") " pod="kube-system/cilium-z7q52" Oct 8 19:48:02.699014 kubelet[2700]: I1008 19:48:02.698954 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8aaa3c5c-798b-41d6-81c5-3ffe9ec09670-hostproc\") pod \"cilium-z7q52\" (UID: \"8aaa3c5c-798b-41d6-81c5-3ffe9ec09670\") " pod="kube-system/cilium-z7q52" Oct 8 19:48:02.699014 kubelet[2700]: I1008 19:48:02.698975 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8aaa3c5c-798b-41d6-81c5-3ffe9ec09670-etc-cni-netd\") pod \"cilium-z7q52\" (UID: \"8aaa3c5c-798b-41d6-81c5-3ffe9ec09670\") " pod="kube-system/cilium-z7q52" Oct 8 19:48:02.719836 sshd[4541]: pam_unix(sshd:session): session closed for user core Oct 8 19:48:02.732814 systemd[1]: Started sshd@26-10.0.0.91:22-10.0.0.1:44712.service - OpenSSH per-connection server daemon (10.0.0.1:44712). Oct 8 19:48:02.733234 systemd[1]: sshd@25-10.0.0.91:22-10.0.0.1:44706.service: Deactivated successfully. Oct 8 19:48:02.736170 systemd[1]: session-26.scope: Deactivated successfully. Oct 8 19:48:02.736217 systemd-logind[1533]: Session 26 logged out. Waiting for processes to exit. Oct 8 19:48:02.737844 systemd-logind[1533]: Removed session 26. Oct 8 19:48:02.761903 sshd[4550]: Accepted publickey for core from 10.0.0.1 port 44712 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:48:02.763301 sshd[4550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:48:02.767844 systemd-logind[1533]: New session 27 of user core. Oct 8 19:48:02.778729 systemd[1]: Started session-27.scope - Session 27 of User core. Oct 8 19:48:02.914595 kubelet[2700]: E1008 19:48:02.914479 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:48:02.915070 containerd[1556]: time="2024-10-08T19:48:02.915026554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z7q52,Uid:8aaa3c5c-798b-41d6-81c5-3ffe9ec09670,Namespace:kube-system,Attempt:0,}" Oct 8 19:48:02.937620 containerd[1556]: time="2024-10-08T19:48:02.937506623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:48:02.937620 containerd[1556]: time="2024-10-08T19:48:02.937570182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:02.937620 containerd[1556]: time="2024-10-08T19:48:02.937599182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:48:02.937620 containerd[1556]: time="2024-10-08T19:48:02.937611542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:02.969514 containerd[1556]: time="2024-10-08T19:48:02.969469836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z7q52,Uid:8aaa3c5c-798b-41d6-81c5-3ffe9ec09670,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3bc2e8a79688968114bad4c9eaebdc194d9d97a10f2619f1fb064248c817ba9\"" Oct 8 19:48:02.970206 kubelet[2700]: E1008 19:48:02.970161 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:48:02.977698 containerd[1556]: time="2024-10-08T19:48:02.977647988Z" level=info msg="CreateContainer within sandbox \"f3bc2e8a79688968114bad4c9eaebdc194d9d97a10f2619f1fb064248c817ba9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 8 19:48:02.992478 containerd[1556]: time="2024-10-08T19:48:02.992313142Z" level=info msg="CreateContainer within sandbox \"f3bc2e8a79688968114bad4c9eaebdc194d9d97a10f2619f1fb064248c817ba9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"944fd9b1f6b97e93c924ab7d95dd59b452380d28028ffbba7e91397a80b76d9b\"" Oct 8 19:48:02.993152 containerd[1556]: time="2024-10-08T19:48:02.993119898Z" level=info msg="StartContainer for 
\"944fd9b1f6b97e93c924ab7d95dd59b452380d28028ffbba7e91397a80b76d9b\""
Oct 8 19:48:03.042935 containerd[1556]: time="2024-10-08T19:48:03.042809959Z" level=info msg="StartContainer for \"944fd9b1f6b97e93c924ab7d95dd59b452380d28028ffbba7e91397a80b76d9b\" returns successfully"
Oct 8 19:48:03.092121 containerd[1556]: time="2024-10-08T19:48:03.091895950Z" level=info msg="shim disconnected" id=944fd9b1f6b97e93c924ab7d95dd59b452380d28028ffbba7e91397a80b76d9b namespace=k8s.io
Oct 8 19:48:03.092121 containerd[1556]: time="2024-10-08T19:48:03.091952150Z" level=warning msg="cleaning up after shim disconnected" id=944fd9b1f6b97e93c924ab7d95dd59b452380d28028ffbba7e91397a80b76d9b namespace=k8s.io
Oct 8 19:48:03.092121 containerd[1556]: time="2024-10-08T19:48:03.091960750Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:48:03.163540 kubelet[2700]: E1008 19:48:03.163212 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:48:03.168377 containerd[1556]: time="2024-10-08T19:48:03.167730606Z" level=info msg="CreateContainer within sandbox \"f3bc2e8a79688968114bad4c9eaebdc194d9d97a10f2619f1fb064248c817ba9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Oct 8 19:48:03.179241 containerd[1556]: time="2024-10-08T19:48:03.179187868Z" level=info msg="CreateContainer within sandbox \"f3bc2e8a79688968114bad4c9eaebdc194d9d97a10f2619f1fb064248c817ba9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"49a676b7355396e6dee9cdb347e2abcd52b4320d93e9e73218fbb9818a2af640\""
Oct 8 19:48:03.179886 containerd[1556]: time="2024-10-08T19:48:03.179851545Z" level=info msg="StartContainer for \"49a676b7355396e6dee9cdb347e2abcd52b4320d93e9e73218fbb9818a2af640\""
Oct 8 19:48:03.234277 containerd[1556]: time="2024-10-08T19:48:03.234224309Z" level=info msg="StartContainer for \"49a676b7355396e6dee9cdb347e2abcd52b4320d93e9e73218fbb9818a2af640\" returns successfully"
Oct 8 19:48:03.259444 containerd[1556]: time="2024-10-08T19:48:03.259350861Z" level=info msg="shim disconnected" id=49a676b7355396e6dee9cdb347e2abcd52b4320d93e9e73218fbb9818a2af640 namespace=k8s.io
Oct 8 19:48:03.259637 containerd[1556]: time="2024-10-08T19:48:03.259529821Z" level=warning msg="cleaning up after shim disconnected" id=49a676b7355396e6dee9cdb347e2abcd52b4320d93e9e73218fbb9818a2af640 namespace=k8s.io
Oct 8 19:48:03.259637 containerd[1556]: time="2024-10-08T19:48:03.259544220Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:48:03.966991 kubelet[2700]: E1008 19:48:03.966948 2700 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 8 19:48:04.170857 kubelet[2700]: E1008 19:48:04.170803 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:48:04.178824 containerd[1556]: time="2024-10-08T19:48:04.178772932Z" level=info msg="CreateContainer within sandbox \"f3bc2e8a79688968114bad4c9eaebdc194d9d97a10f2619f1fb064248c817ba9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Oct 8 19:48:04.196857 containerd[1556]: time="2024-10-08T19:48:04.196802975Z" level=info msg="CreateContainer within sandbox \"f3bc2e8a79688968114bad4c9eaebdc194d9d97a10f2619f1fb064248c817ba9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"90d7493b0ce41f499fccb256518ce1eb6d6807ac47f846887e4df6856724cf93\""
Oct 8 19:48:04.198678 containerd[1556]: time="2024-10-08T19:48:04.198519167Z" level=info msg="StartContainer for \"90d7493b0ce41f499fccb256518ce1eb6d6807ac47f846887e4df6856724cf93\""
Oct 8 19:48:04.247005 containerd[1556]: time="2024-10-08T19:48:04.246716719Z" level=info msg="StartContainer for \"90d7493b0ce41f499fccb256518ce1eb6d6807ac47f846887e4df6856724cf93\" returns successfully"
Oct 8 19:48:04.273243 containerd[1556]: time="2024-10-08T19:48:04.273169404Z" level=info msg="shim disconnected" id=90d7493b0ce41f499fccb256518ce1eb6d6807ac47f846887e4df6856724cf93 namespace=k8s.io
Oct 8 19:48:04.273243 containerd[1556]: time="2024-10-08T19:48:04.273234444Z" level=warning msg="cleaning up after shim disconnected" id=90d7493b0ce41f499fccb256518ce1eb6d6807ac47f846887e4df6856724cf93 namespace=k8s.io
Oct 8 19:48:04.273243 containerd[1556]: time="2024-10-08T19:48:04.273243164Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:48:04.804457 systemd[1]: run-containerd-runc-k8s.io-90d7493b0ce41f499fccb256518ce1eb6d6807ac47f846887e4df6856724cf93-runc.vPHmvZ.mount: Deactivated successfully.
Oct 8 19:48:04.804624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90d7493b0ce41f499fccb256518ce1eb6d6807ac47f846887e4df6856724cf93-rootfs.mount: Deactivated successfully.
Oct 8 19:48:04.906923 kubelet[2700]: E1008 19:48:04.905550 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:48:05.172775 kubelet[2700]: E1008 19:48:05.172712 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:48:05.174779 containerd[1556]: time="2024-10-08T19:48:05.174739672Z" level=info msg="CreateContainer within sandbox \"f3bc2e8a79688968114bad4c9eaebdc194d9d97a10f2619f1fb064248c817ba9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Oct 8 19:48:05.216649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3166838500.mount: Deactivated successfully.
Oct 8 19:48:05.217744 containerd[1556]: time="2024-10-08T19:48:05.217701757Z" level=info msg="CreateContainer within sandbox \"f3bc2e8a79688968114bad4c9eaebdc194d9d97a10f2619f1fb064248c817ba9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1e1afc7abae5d4561eb149b80888d6a1763f87a8b038c6ee12f8d48c12e060f3\""
Oct 8 19:48:05.219995 containerd[1556]: time="2024-10-08T19:48:05.219967269Z" level=info msg="StartContainer for \"1e1afc7abae5d4561eb149b80888d6a1763f87a8b038c6ee12f8d48c12e060f3\""
Oct 8 19:48:05.268332 containerd[1556]: time="2024-10-08T19:48:05.268190936Z" level=info msg="StartContainer for \"1e1afc7abae5d4561eb149b80888d6a1763f87a8b038c6ee12f8d48c12e060f3\" returns successfully"
Oct 8 19:48:05.312649 containerd[1556]: time="2024-10-08T19:48:05.312557376Z" level=info msg="shim disconnected" id=1e1afc7abae5d4561eb149b80888d6a1763f87a8b038c6ee12f8d48c12e060f3 namespace=k8s.io
Oct 8 19:48:05.312649 containerd[1556]: time="2024-10-08T19:48:05.312637896Z" level=warning msg="cleaning up after shim disconnected" id=1e1afc7abae5d4561eb149b80888d6a1763f87a8b038c6ee12f8d48c12e060f3 namespace=k8s.io
Oct 8 19:48:05.312649 containerd[1556]: time="2024-10-08T19:48:05.312647216Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:48:05.905567 kubelet[2700]: E1008 19:48:05.905532 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:48:06.180728 kubelet[2700]: E1008 19:48:06.180626 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:48:06.188700 containerd[1556]: time="2024-10-08T19:48:06.185536005Z" level=info msg="CreateContainer within sandbox \"f3bc2e8a79688968114bad4c9eaebdc194d9d97a10f2619f1fb064248c817ba9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Oct 8 19:48:06.205331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1777120476.mount: Deactivated successfully.
Oct 8 19:48:06.208443 containerd[1556]: time="2024-10-08T19:48:06.208215339Z" level=info msg="CreateContainer within sandbox \"f3bc2e8a79688968114bad4c9eaebdc194d9d97a10f2619f1fb064248c817ba9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"83dbdc54d827bfc95843834d676b81f4cfca9d0c2cd59697b55c0fba57eb2685\""
Oct 8 19:48:06.209152 containerd[1556]: time="2024-10-08T19:48:06.209120616Z" level=info msg="StartContainer for \"83dbdc54d827bfc95843834d676b81f4cfca9d0c2cd59697b55c0fba57eb2685\""
Oct 8 19:48:06.263586 containerd[1556]: time="2024-10-08T19:48:06.263508179Z" level=info msg="StartContainer for \"83dbdc54d827bfc95843834d676b81f4cfca9d0c2cd59697b55c0fba57eb2685\" returns successfully"
Oct 8 19:48:06.549454 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Oct 8 19:48:06.905822 kubelet[2700]: E1008 19:48:06.905718 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:48:07.207799 kubelet[2700]: E1008 19:48:07.207683 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:48:07.227279 kubelet[2700]: I1008 19:48:07.227066 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-z7q52" podStartSLOduration=5.227025062 podStartE2EDuration="5.227025062s" podCreationTimestamp="2024-10-08 19:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:48:07.225573345 +0000 UTC m=+88.416574086" watchObservedRunningTime="2024-10-08 19:48:07.227025062 +0000 UTC m=+88.418025803"
Oct 8 19:48:08.916196 kubelet[2700]: E1008 19:48:08.916053 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:48:09.130557 systemd[1]: run-containerd-runc-k8s.io-83dbdc54d827bfc95843834d676b81f4cfca9d0c2cd59697b55c0fba57eb2685-runc.wu2buP.mount: Deactivated successfully.
Oct 8 19:48:09.391130 systemd-networkd[1237]: lxc_health: Link UP
Oct 8 19:48:09.418453 systemd-networkd[1237]: lxc_health: Gained carrier
Oct 8 19:48:10.885703 systemd-networkd[1237]: lxc_health: Gained IPv6LL
Oct 8 19:48:10.916912 kubelet[2700]: E1008 19:48:10.916746 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:48:11.216437 kubelet[2700]: E1008 19:48:11.216190 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:48:12.220523 kubelet[2700]: E1008 19:48:12.220484 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:48:15.500575 systemd[1]: run-containerd-runc-k8s.io-83dbdc54d827bfc95843834d676b81f4cfca9d0c2cd59697b55c0fba57eb2685-runc.SaHk6M.mount: Deactivated successfully.
Oct 8 19:48:15.553831 sshd[4550]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:15.557779 systemd[1]: sshd@26-10.0.0.91:22-10.0.0.1:44712.service: Deactivated successfully.
Oct 8 19:48:15.561407 systemd[1]: session-27.scope: Deactivated successfully.
Oct 8 19:48:15.562289 systemd-logind[1533]: Session 27 logged out. Waiting for processes to exit.
Oct 8 19:48:15.563259 systemd-logind[1533]: Removed session 27.