Apr 12 18:28:06.726296 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 12 18:28:06.726316 kernel: Linux version 5.15.154-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Apr 12 17:21:24 -00 2024
Apr 12 18:28:06.726324 kernel: efi: EFI v2.70 by EDK II
Apr 12 18:28:06.726330 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Apr 12 18:28:06.726335 kernel: random: crng init done
Apr 12 18:28:06.726340 kernel: ACPI: Early table checksum verification disabled
Apr 12 18:28:06.726347 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Apr 12 18:28:06.726353 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Apr 12 18:28:06.726359 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:28:06.726364 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:28:06.726369 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:28:06.726375 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:28:06.726380 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:28:06.726385 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:28:06.726393 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:28:06.726399 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:28:06.726405 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:28:06.726411 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Apr 12 18:28:06.726416 kernel: NUMA: Failed to initialise from firmware
Apr 12 18:28:06.726423 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Apr 12 18:28:06.726428 kernel: NUMA: NODE_DATA [mem 0xdcb0c900-0xdcb11fff]
Apr 12 18:28:06.726434 kernel: Zone ranges:
Apr 12 18:28:06.726450 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Apr 12 18:28:06.726457 kernel:   DMA32    empty
Apr 12 18:28:06.726463 kernel:   Normal   empty
Apr 12 18:28:06.726468 kernel: Movable zone start for each node
Apr 12 18:28:06.726474 kernel: Early memory node ranges
Apr 12 18:28:06.726480 kernel:   node   0: [mem 0x0000000040000000-0x00000000d924ffff]
Apr 12 18:28:06.726485 kernel:   node   0: [mem 0x00000000d9250000-0x00000000d951ffff]
Apr 12 18:28:06.726491 kernel:   node   0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Apr 12 18:28:06.726497 kernel:   node   0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Apr 12 18:28:06.726502 kernel:   node   0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Apr 12 18:28:06.726508 kernel:   node   0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Apr 12 18:28:06.726514 kernel:   node   0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Apr 12 18:28:06.726519 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Apr 12 18:28:06.726527 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Apr 12 18:28:06.726532 kernel: psci: probing for conduit method from ACPI.
Apr 12 18:28:06.726538 kernel: psci: PSCIv1.1 detected in firmware.
Apr 12 18:28:06.726544 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 12 18:28:06.726549 kernel: psci: Trusted OS migration not required
Apr 12 18:28:06.726558 kernel: psci: SMC Calling Convention v1.1
Apr 12 18:28:06.726564 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 12 18:28:06.726571 kernel: ACPI: SRAT not present
Apr 12 18:28:06.726578 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Apr 12 18:28:06.726584 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Apr 12 18:28:06.726590 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Apr 12 18:28:06.726596 kernel: Detected PIPT I-cache on CPU0
Apr 12 18:28:06.726602 kernel: CPU features: detected: GIC system register CPU interface
Apr 12 18:28:06.726608 kernel: CPU features: detected: Hardware dirty bit management
Apr 12 18:28:06.726614 kernel: CPU features: detected: Spectre-v4
Apr 12 18:28:06.726620 kernel: CPU features: detected: Spectre-BHB
Apr 12 18:28:06.726627 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 12 18:28:06.726633 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 12 18:28:06.726639 kernel: CPU features: detected: ARM erratum 1418040
Apr 12 18:28:06.726645 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 633024
Apr 12 18:28:06.726651 kernel: Policy zone: DMA
Apr 12 18:28:06.726658 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c0b96868344262519ffdb2dae3782c942008a0fecdbc0bc85d2e170bd2e8b8a8
Apr 12 18:28:06.726665 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 12 18:28:06.726671 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 12 18:28:06.726677 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 12 18:28:06.726683 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 12 18:28:06.726690 kernel: Memory: 2457472K/2572288K available (9792K kernel code, 2092K rwdata, 7568K rodata, 36352K init, 777K bss, 114816K reserved, 0K cma-reserved)
Apr 12 18:28:06.726702 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 12 18:28:06.726709 kernel: trace event string verifier disabled
Apr 12 18:28:06.726715 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 12 18:28:06.726722 kernel: rcu: RCU event tracing is enabled.
Apr 12 18:28:06.726728 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 12 18:28:06.726734 kernel: Trampoline variant of Tasks RCU enabled.
Apr 12 18:28:06.726740 kernel: Tracing variant of Tasks RCU enabled.
Apr 12 18:28:06.726747 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 12 18:28:06.726753 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 12 18:28:06.726758 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 12 18:28:06.726764 kernel: GICv3: 256 SPIs implemented
Apr 12 18:28:06.726771 kernel: GICv3: 0 Extended SPIs implemented
Apr 12 18:28:06.726777 kernel: GICv3: Distributor has no Range Selector support
Apr 12 18:28:06.726783 kernel: Root IRQ handler: gic_handle_irq
Apr 12 18:28:06.726789 kernel: GICv3: 16 PPIs implemented
Apr 12 18:28:06.726795 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 12 18:28:06.726801 kernel: ACPI: SRAT not present
Apr 12 18:28:06.726807 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 12 18:28:06.726821 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Apr 12 18:28:06.726828 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Apr 12 18:28:06.726834 kernel: GICv3: using LPI property table @0x00000000400d0000
Apr 12 18:28:06.726840 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Apr 12 18:28:06.726846 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 12 18:28:06.726854 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 12 18:28:06.726860 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 12 18:28:06.726866 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 12 18:28:06.726872 kernel: arm-pv: using stolen time PV
Apr 12 18:28:06.726879 kernel: Console: colour dummy device 80x25
Apr 12 18:28:06.726885 kernel: ACPI: Core revision 20210730
Apr 12 18:28:06.726891 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 12 18:28:06.726897 kernel: pid_max: default: 32768 minimum: 301
Apr 12 18:28:06.726904 kernel: LSM: Security Framework initializing
Apr 12 18:28:06.726910 kernel: SELinux:  Initializing.
Apr 12 18:28:06.726917 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 12 18:28:06.726923 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 12 18:28:06.726930 kernel: rcu: Hierarchical SRCU implementation.
Apr 12 18:28:06.726936 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 12 18:28:06.726942 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 12 18:28:06.726948 kernel: Remapping and enabling EFI services.
Apr 12 18:28:06.726954 kernel: smp: Bringing up secondary CPUs ...
Apr 12 18:28:06.726960 kernel: Detected PIPT I-cache on CPU1
Apr 12 18:28:06.726966 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 12 18:28:06.726974 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Apr 12 18:28:06.726980 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 12 18:28:06.726986 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 12 18:28:06.726993 kernel: Detected PIPT I-cache on CPU2
Apr 12 18:28:06.726999 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Apr 12 18:28:06.727005 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Apr 12 18:28:06.727011 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 12 18:28:06.727017 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Apr 12 18:28:06.727023 kernel: Detected PIPT I-cache on CPU3
Apr 12 18:28:06.727030 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Apr 12 18:28:06.727037 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Apr 12 18:28:06.727044 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 12 18:28:06.727050 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Apr 12 18:28:06.727056 kernel: smp: Brought up 1 node, 4 CPUs
Apr 12 18:28:06.727066 kernel: SMP: Total of 4 processors activated.
Apr 12 18:28:06.727074 kernel: CPU features: detected: 32-bit EL0 Support
Apr 12 18:28:06.727081 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 12 18:28:06.727087 kernel: CPU features: detected: Common not Private translations
Apr 12 18:28:06.727094 kernel: CPU features: detected: CRC32 instructions
Apr 12 18:28:06.727100 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 12 18:28:06.727107 kernel: CPU features: detected: LSE atomic instructions
Apr 12 18:28:06.727113 kernel: CPU features: detected: Privileged Access Never
Apr 12 18:28:06.727121 kernel: CPU features: detected: RAS Extension Support
Apr 12 18:28:06.727128 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 12 18:28:06.727134 kernel: CPU: All CPU(s) started at EL1
Apr 12 18:28:06.727140 kernel: alternatives: patching kernel code
Apr 12 18:28:06.727148 kernel: devtmpfs: initialized
Apr 12 18:28:06.727155 kernel: KASLR enabled
Apr 12 18:28:06.727162 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 12 18:28:06.727168 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 12 18:28:06.727175 kernel: pinctrl core: initialized pinctrl subsystem
Apr 12 18:28:06.727181 kernel: SMBIOS 3.0.0 present.
Apr 12 18:28:06.727188 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Apr 12 18:28:06.727194 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 12 18:28:06.727201 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 12 18:28:06.727207 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 12 18:28:06.727215 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 12 18:28:06.727221 kernel: audit: initializing netlink subsys (disabled)
Apr 12 18:28:06.727228 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1
Apr 12 18:28:06.727234 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 12 18:28:06.727241 kernel: cpuidle: using governor menu
Apr 12 18:28:06.727247 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 12 18:28:06.727254 kernel: ASID allocator initialised with 32768 entries
Apr 12 18:28:06.727260 kernel: ACPI: bus type PCI registered
Apr 12 18:28:06.727267 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 12 18:28:06.727275 kernel: Serial: AMBA PL011 UART driver
Apr 12 18:28:06.727281 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Apr 12 18:28:06.727288 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Apr 12 18:28:06.727294 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Apr 12 18:28:06.727301 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Apr 12 18:28:06.727307 kernel: cryptd: max_cpu_qlen set to 1000
Apr 12 18:28:06.727314 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 12 18:28:06.727321 kernel: ACPI: Added _OSI(Module Device)
Apr 12 18:28:06.727327 kernel: ACPI: Added _OSI(Processor Device)
Apr 12 18:28:06.727335 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 12 18:28:06.727341 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 12 18:28:06.727347 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Apr 12 18:28:06.727354 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Apr 12 18:28:06.727360 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Apr 12 18:28:06.727367 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 12 18:28:06.727374 kernel: ACPI: Interpreter enabled
Apr 12 18:28:06.727380 kernel: ACPI: Using GIC for interrupt routing
Apr 12 18:28:06.727386 kernel: ACPI: MCFG table detected, 1 entries
Apr 12 18:28:06.727394 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 12 18:28:06.727401 kernel: printk: console [ttyAMA0] enabled
Apr 12 18:28:06.727407 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 12 18:28:06.727574 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 12 18:28:06.727645 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 12 18:28:06.727713 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 12 18:28:06.727774 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 12 18:28:06.727836 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 12 18:28:06.727844 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 12 18:28:06.727851 kernel: PCI host bridge to bus 0000:00
Apr 12 18:28:06.727915 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 12 18:28:06.727969 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 12 18:28:06.728020 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 12 18:28:06.728072 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 12 18:28:06.728143 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 12 18:28:06.728213 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Apr 12 18:28:06.728274 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Apr 12 18:28:06.728334 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Apr 12 18:28:06.728393 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 12 18:28:06.728464 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 12 18:28:06.728526 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Apr 12 18:28:06.728618 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Apr 12 18:28:06.728691 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 12 18:28:06.728753 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 12 18:28:06.728807 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 12 18:28:06.728816 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 12 18:28:06.728822 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 12 18:28:06.728829 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 12 18:28:06.728838 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 12 18:28:06.728845 kernel: iommu: Default domain type: Translated
Apr 12 18:28:06.728852 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 12 18:28:06.728858 kernel: vgaarb: loaded
Apr 12 18:28:06.728865 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 12 18:28:06.728871 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 12 18:28:06.728878 kernel: PTP clock support registered
Apr 12 18:28:06.728885 kernel: Registered efivars operations
Apr 12 18:28:06.728891 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 12 18:28:06.728899 kernel: VFS: Disk quotas dquot_6.6.0
Apr 12 18:28:06.728906 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 12 18:28:06.728912 kernel: pnp: PnP ACPI init
Apr 12 18:28:06.728981 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 12 18:28:06.728991 kernel: pnp: PnP ACPI: found 1 devices
Apr 12 18:28:06.728997 kernel: NET: Registered PF_INET protocol family
Apr 12 18:28:06.729004 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 12 18:28:06.729011 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 12 18:28:06.729018 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 12 18:28:06.729026 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 12 18:28:06.729033 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Apr 12 18:28:06.729039 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 12 18:28:06.729046 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 12 18:28:06.729052 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 12 18:28:06.729059 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 12 18:28:06.729065 kernel: PCI: CLS 0 bytes, default 64
Apr 12 18:28:06.729072 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Apr 12 18:28:06.729080 kernel: kvm [1]: HYP mode not available
Apr 12 18:28:06.729087 kernel: Initialise system trusted keyrings
Apr 12 18:28:06.729093 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 12 18:28:06.729100 kernel: Key type asymmetric registered
Apr 12 18:28:06.729106 kernel: Asymmetric key parser 'x509' registered
Apr 12 18:28:06.729113 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Apr 12 18:28:06.729120 kernel: io scheduler mq-deadline registered
Apr 12 18:28:06.729127 kernel: io scheduler kyber registered
Apr 12 18:28:06.729134 kernel: io scheduler bfq registered
Apr 12 18:28:06.729141 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 12 18:28:06.729149 kernel: ACPI: button: Power Button [PWRB]
Apr 12 18:28:06.729156 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 12 18:28:06.729217 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Apr 12 18:28:06.729226 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 12 18:28:06.729233 kernel: thunder_xcv, ver 1.0
Apr 12 18:28:06.729240 kernel: thunder_bgx, ver 1.0
Apr 12 18:28:06.729247 kernel: nicpf, ver 1.0
Apr 12 18:28:06.729253 kernel: nicvf, ver 1.0
Apr 12 18:28:06.729322 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 12 18:28:06.729382 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-04-12T18:28:06 UTC (1712946486)
Apr 12 18:28:06.729391 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 12 18:28:06.729397 kernel: NET: Registered PF_INET6 protocol family
Apr 12 18:28:06.729404 kernel: Segment Routing with IPv6
Apr 12 18:28:06.729410 kernel: In-situ OAM (IOAM) with IPv6
Apr 12 18:28:06.729417 kernel: NET: Registered PF_PACKET protocol family
Apr 12 18:28:06.729423 kernel: Key type dns_resolver registered
Apr 12 18:28:06.729429 kernel: registered taskstats version 1
Apr 12 18:28:06.729445 kernel: Loading compiled-in X.509 certificates
Apr 12 18:28:06.729452 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.154-flatcar: 8c258d82bbd8df4a9da2c0ea4108142f04be6b34'
Apr 12 18:28:06.729459 kernel: Key type .fscrypt registered
Apr 12 18:28:06.729466 kernel: Key type fscrypt-provisioning registered
Apr 12 18:28:06.729472 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 12 18:28:06.729479 kernel: ima: Allocated hash algorithm: sha1
Apr 12 18:28:06.729486 kernel: ima: No architecture policies found
Apr 12 18:28:06.729492 kernel: Freeing unused kernel memory: 36352K
Apr 12 18:28:06.729500 kernel: Run /init as init process
Apr 12 18:28:06.729507 kernel:   with arguments:
Apr 12 18:28:06.729513 kernel:     /init
Apr 12 18:28:06.729520 kernel:   with environment:
Apr 12 18:28:06.729526 kernel:     HOME=/
Apr 12 18:28:06.729533 kernel:     TERM=linux
Apr 12 18:28:06.729540 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 12 18:28:06.729549 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Apr 12 18:28:06.729558 systemd[1]: Detected virtualization kvm.
Apr 12 18:28:06.729567 systemd[1]: Detected architecture arm64.
Apr 12 18:28:06.729583 systemd[1]: Running in initrd.
Apr 12 18:28:06.729590 systemd[1]: No hostname configured, using default hostname.
Apr 12 18:28:06.729597 systemd[1]: Hostname set to .
Apr 12 18:28:06.729604 systemd[1]: Initializing machine ID from VM UUID.
Apr 12 18:28:06.729611 systemd[1]: Queued start job for default target initrd.target.
Apr 12 18:28:06.729618 systemd[1]: Started systemd-ask-password-console.path.
Apr 12 18:28:06.729625 systemd[1]: Reached target cryptsetup.target.
Apr 12 18:28:06.729633 systemd[1]: Reached target paths.target.
Apr 12 18:28:06.729640 systemd[1]: Reached target slices.target.
Apr 12 18:28:06.729646 systemd[1]: Reached target swap.target.
Apr 12 18:28:06.729653 systemd[1]: Reached target timers.target.
Apr 12 18:28:06.729661 systemd[1]: Listening on iscsid.socket.
Apr 12 18:28:06.729668 systemd[1]: Listening on iscsiuio.socket.
Apr 12 18:28:06.729674 systemd[1]: Listening on systemd-journald-audit.socket.
Apr 12 18:28:06.729683 systemd[1]: Listening on systemd-journald-dev-log.socket.
Apr 12 18:28:06.729689 systemd[1]: Listening on systemd-journald.socket.
Apr 12 18:28:06.729701 systemd[1]: Listening on systemd-networkd.socket.
Apr 12 18:28:06.729710 systemd[1]: Listening on systemd-udevd-control.socket.
Apr 12 18:28:06.729717 systemd[1]: Listening on systemd-udevd-kernel.socket.
Apr 12 18:28:06.729723 systemd[1]: Reached target sockets.target.
Apr 12 18:28:06.729730 systemd[1]: Starting kmod-static-nodes.service...
Apr 12 18:28:06.729737 systemd[1]: Finished network-cleanup.service.
Apr 12 18:28:06.729744 systemd[1]: Starting systemd-fsck-usr.service...
Apr 12 18:28:06.729753 systemd[1]: Starting systemd-journald.service...
Apr 12 18:28:06.729760 systemd[1]: Starting systemd-modules-load.service...
Apr 12 18:28:06.729767 systemd[1]: Starting systemd-resolved.service...
Apr 12 18:28:06.729773 systemd[1]: Starting systemd-vconsole-setup.service...
Apr 12 18:28:06.729780 systemd[1]: Finished kmod-static-nodes.service.
Apr 12 18:28:06.729787 systemd[1]: Finished systemd-fsck-usr.service.
Apr 12 18:28:06.729794 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Apr 12 18:28:06.729801 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Apr 12 18:28:06.729808 kernel: audit: type=1130 audit(1712946486.727:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:06.729817 systemd[1]: Finished systemd-vconsole-setup.service.
Apr 12 18:28:06.729827 systemd-journald[250]: Journal started
Apr 12 18:28:06.729871 systemd-journald[250]: Runtime Journal (/run/log/journal/e764fccc8f25477d92069016263e0eba) is 6.0M, max 48.7M, 42.6M free.
Apr 12 18:28:06.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:06.722100 systemd-modules-load[251]: Inserted module 'overlay'
Apr 12 18:28:06.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:06.734452 kernel: audit: type=1130 audit(1712946486.731:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:06.734472 systemd[1]: Started systemd-journald.service.
Apr 12 18:28:06.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:06.735484 systemd[1]: Starting dracut-cmdline-ask.service...
Apr 12 18:28:06.742540 kernel: audit: type=1130 audit(1712946486.734:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:06.742558 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 12 18:28:06.744868 systemd-modules-load[251]: Inserted module 'br_netfilter'
Apr 12 18:28:06.745607 kernel: Bridge firewalling registered
Apr 12 18:28:06.750268 systemd-resolved[252]: Positive Trust Anchors:
Apr 12 18:28:06.750284 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 12 18:28:06.750313 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Apr 12 18:28:06.754362 systemd-resolved[252]: Defaulting to hostname 'linux'.
Apr 12 18:28:06.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:06.755084 systemd[1]: Started systemd-resolved.service.
Apr 12 18:28:06.763188 kernel: audit: type=1130 audit(1712946486.756:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:06.763209 kernel: SCSI subsystem initialized
Apr 12 18:28:06.763218 kernel: audit: type=1130 audit(1712946486.760:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:06.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:06.757356 systemd[1]: Finished dracut-cmdline-ask.service.
Apr 12 18:28:06.760827 systemd[1]: Reached target nss-lookup.target.
Apr 12 18:28:06.764471 systemd[1]: Starting dracut-cmdline.service...
Apr 12 18:28:06.769023 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 12 18:28:06.769066 kernel: device-mapper: uevent: version 1.0.3
Apr 12 18:28:06.769077 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Apr 12 18:28:06.771271 systemd-modules-load[251]: Inserted module 'dm_multipath'
Apr 12 18:28:06.772409 systemd[1]: Finished systemd-modules-load.service.
Apr 12 18:28:06.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:06.775875 dracut-cmdline[267]: dracut-dracut-053
Apr 12 18:28:06.776858 kernel: audit: type=1130 audit(1712946486.773:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:06.773828 systemd[1]: Starting systemd-sysctl.service...
Apr 12 18:28:06.778573 dracut-cmdline[267]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c0b96868344262519ffdb2dae3782c942008a0fecdbc0bc85d2e170bd2e8b8a8
Apr 12 18:28:06.781185 systemd[1]: Finished systemd-sysctl.service.
Apr 12 18:28:06.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:06.785462 kernel: audit: type=1130 audit(1712946486.782:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:06.836461 kernel: Loading iSCSI transport class v2.0-870.
Apr 12 18:28:06.848469 kernel: iscsi: registered transport (tcp)
Apr 12 18:28:06.862503 kernel: iscsi: registered transport (qla4xxx)
Apr 12 18:28:06.862521 kernel: QLogic iSCSI HBA Driver
Apr 12 18:28:06.896303 systemd[1]: Finished dracut-cmdline.service.
Apr 12 18:28:06.899498 kernel: audit: type=1130 audit(1712946486.896:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:06.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:06.897804 systemd[1]: Starting dracut-pre-udev.service...
Apr 12 18:28:06.943466 kernel: raid6: neonx8   gen() 13807 MB/s
Apr 12 18:28:06.960449 kernel: raid6: neonx8   xor() 10825 MB/s
Apr 12 18:28:06.977450 kernel: raid6: neonx4   gen() 13562 MB/s
Apr 12 18:28:06.994454 kernel: raid6: neonx4   xor() 11252 MB/s
Apr 12 18:28:07.011451 kernel: raid6: neonx2   gen() 12949 MB/s
Apr 12 18:28:07.028450 kernel: raid6: neonx2   xor() 10486 MB/s
Apr 12 18:28:07.045464 kernel: raid6: neonx1   gen() 10536 MB/s
Apr 12 18:28:07.062459 kernel: raid6: neonx1   xor() 8773 MB/s
Apr 12 18:28:07.079456 kernel: raid6: int64x8  gen() 6266 MB/s
Apr 12 18:28:07.096457 kernel: raid6: int64x8  xor() 3542 MB/s
Apr 12 18:28:07.113456 kernel: raid6: int64x4  gen() 7217 MB/s
Apr 12 18:28:07.130452 kernel: raid6: int64x4  xor() 3851 MB/s
Apr 12 18:28:07.147456 kernel: raid6: int64x2  gen() 6146 MB/s
Apr 12 18:28:07.164455 kernel: raid6: int64x2  xor() 3318 MB/s
Apr 12 18:28:07.181465 kernel: raid6: int64x1  gen() 5037 MB/s
Apr 12 18:28:07.198865 kernel: raid6: int64x1  xor() 2644 MB/s
Apr 12 18:28:07.198882 kernel: raid6: using algorithm neonx8 gen() 13807 MB/s
Apr 12 18:28:07.198890 kernel: raid6: .... xor() 10825 MB/s, rmw enabled
Apr 12 18:28:07.198898 kernel: raid6: using neon recovery algorithm
Apr 12 18:28:07.209461 kernel: xor: measuring software checksum speed
Apr 12 18:28:07.210459 kernel:    8regs           : 17282 MB/sec
Apr 12 18:28:07.211703 kernel:    32regs          : 20749 MB/sec
Apr 12 18:28:07.211715 kernel:    arm64_neon      : 27939 MB/sec
Apr 12 18:28:07.211723 kernel: xor: using function: arm64_neon (27939 MB/sec)
Apr 12 18:28:07.266463 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Apr 12 18:28:07.276226 systemd[1]: Finished dracut-pre-udev.service.
Apr 12 18:28:07.279503 kernel: audit: type=1130 audit(1712946487.276:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:07.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:07.278000 audit: BPF prog-id=7 op=LOAD
Apr 12 18:28:07.279000 audit: BPF prog-id=8 op=LOAD
Apr 12 18:28:07.279852 systemd[1]: Starting systemd-udevd.service...
Apr 12 18:28:07.294019 systemd-udevd[449]: Using default interface naming scheme 'v252'.
Apr 12 18:28:07.297482 systemd[1]: Started systemd-udevd.service.
Apr 12 18:28:07.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:07.299190 systemd[1]: Starting dracut-pre-trigger.service...
Apr 12 18:28:07.310140 dracut-pre-trigger[456]: rd.md=0: removing MD RAID activation
Apr 12 18:28:07.334820 systemd[1]: Finished dracut-pre-trigger.service.
Apr 12 18:28:07.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:07.336231 systemd[1]: Starting systemd-udev-trigger.service...
Apr 12 18:28:07.369716 systemd[1]: Finished systemd-udev-trigger.service.
Apr 12 18:28:07.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:07.398791 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 12 18:28:07.403058 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 12 18:28:07.403096 kernel: GPT:9289727 != 19775487
Apr 12 18:28:07.403105 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 12 18:28:07.403630 kernel: GPT:9289727 != 19775487
Apr 12 18:28:07.404844 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 12 18:28:07.404873 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 12 18:28:07.416700 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Apr 12 18:28:07.424995 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (505)
Apr 12 18:28:07.418595 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Apr 12 18:28:07.434306 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Apr 12 18:28:07.437548 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Apr 12 18:28:07.440789 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Apr 12 18:28:07.443031 systemd[1]: Starting disk-uuid.service...
Apr 12 18:28:07.448553 disk-uuid[520]: Primary Header is updated.
Apr 12 18:28:07.448553 disk-uuid[520]: Secondary Entries is updated.
Apr 12 18:28:07.448553 disk-uuid[520]: Secondary Header is updated.
Apr 12 18:28:07.451455 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 12 18:28:07.461460 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 12 18:28:07.463463 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 12 18:28:08.465460 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 12 18:28:08.465528 disk-uuid[521]: The operation has completed successfully.
Apr 12 18:28:08.482612 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 12 18:28:08.483615 systemd[1]: Finished disk-uuid.service.
Apr 12 18:28:08.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:08.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:08.490517 systemd[1]: Starting verity-setup.service...
Apr 12 18:28:08.506512 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 12 18:28:08.524871 systemd[1]: Found device dev-mapper-usr.device.
Apr 12 18:28:08.526871 systemd[1]: Mounting sysusr-usr.mount...
Apr 12 18:28:08.528825 systemd[1]: Finished verity-setup.service.
Apr 12 18:28:08.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:08.573198 systemd[1]: Mounted sysusr-usr.mount.
Apr 12 18:28:08.574310 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Apr 12 18:28:08.573928 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Apr 12 18:28:08.574587 systemd[1]: Starting ignition-setup.service...
Apr 12 18:28:08.576594 systemd[1]: Starting parse-ip-for-networkd.service...
Apr 12 18:28:08.583012 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Apr 12 18:28:08.583050 kernel: BTRFS info (device vda6): using free space tree
Apr 12 18:28:08.583060 kernel: BTRFS info (device vda6): has skinny extents
Apr 12 18:28:08.591819 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 12 18:28:08.597812 systemd[1]: Finished ignition-setup.service.
Apr 12 18:28:08.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:08.599159 systemd[1]: Starting ignition-fetch-offline.service...
Apr 12 18:28:08.646786 systemd[1]: Finished parse-ip-for-networkd.service.
Apr 12 18:28:08.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:08.647000 audit: BPF prog-id=9 op=LOAD
Apr 12 18:28:08.648689 systemd[1]: Starting systemd-networkd.service...
Apr 12 18:28:08.664999 ignition[614]: Ignition 2.14.0
Apr 12 18:28:08.665008 ignition[614]: Stage: fetch-offline
Apr 12 18:28:08.665049 ignition[614]: no configs at "/usr/lib/ignition/base.d"
Apr 12 18:28:08.665058 ignition[614]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 12 18:28:08.665556 ignition[614]: parsed url from cmdline: ""
Apr 12 18:28:08.665560 ignition[614]: no config URL provided
Apr 12 18:28:08.665565 ignition[614]: reading system config file "/usr/lib/ignition/user.ign"
Apr 12 18:28:08.665572 ignition[614]: no config at "/usr/lib/ignition/user.ign"
Apr 12 18:28:08.665590 ignition[614]: op(1): [started] loading QEMU firmware config module
Apr 12 18:28:08.665595 ignition[614]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 12 18:28:08.674324 ignition[614]: op(1): [finished] loading QEMU firmware config module
Apr 12 18:28:08.674346 ignition[614]: QEMU firmware config was not found. Ignoring...
Apr 12 18:28:08.678864 systemd-networkd[698]: lo: Link UP
Apr 12 18:28:08.678877 systemd-networkd[698]: lo: Gained carrier
Apr 12 18:28:08.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:08.679509 systemd-networkd[698]: Enumeration completed
Apr 12 18:28:08.679582 systemd[1]: Started systemd-networkd.service.
Apr 12 18:28:08.679891 systemd-networkd[698]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 12 18:28:08.680305 systemd[1]: Reached target network.target.
Apr 12 18:28:08.681205 systemd-networkd[698]: eth0: Link UP
Apr 12 18:28:08.681209 systemd-networkd[698]: eth0: Gained carrier
Apr 12 18:28:08.682142 systemd[1]: Starting iscsiuio.service...
Apr 12 18:28:08.690904 systemd[1]: Started iscsiuio.service.
Apr 12 18:28:08.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:08.692223 systemd[1]: Starting iscsid.service...
Apr 12 18:28:08.695374 iscsid[704]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Apr 12 18:28:08.695374 iscsid[704]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Apr 12 18:28:08.695374 iscsid[704]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Apr 12 18:28:08.695374 iscsid[704]: If using hardware iscsi like qla4xxx this message can be ignored.
Apr 12 18:28:08.695374 iscsid[704]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Apr 12 18:28:08.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:08.704735 iscsid[704]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Apr 12 18:28:08.698284 systemd[1]: Started iscsid.service.
Apr 12 18:28:08.701501 systemd-networkd[698]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 12 18:28:08.702137 systemd[1]: Starting dracut-initqueue.service...
Apr 12 18:28:08.712151 systemd[1]: Finished dracut-initqueue.service.
Apr 12 18:28:08.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:08.713015 systemd[1]: Reached target remote-fs-pre.target.
Apr 12 18:28:08.714086 systemd[1]: Reached target remote-cryptsetup.target.
Apr 12 18:28:08.715221 systemd[1]: Reached target remote-fs.target.
Apr 12 18:28:08.716973 systemd[1]: Starting dracut-pre-mount.service...
Apr 12 18:28:08.724156 systemd[1]: Finished dracut-pre-mount.service.
Apr 12 18:28:08.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:08.766625 ignition[614]: parsing config with SHA512: c31aeadd4df054e34baadc5e65872e910a5d1a98613ea2a2ec86f48d6a0fd9e7a4a4543f474771c5cd6fb8fe88e69b90ec6f51ab0ca8b43c7c92c42e8642647d
Apr 12 18:28:08.805166 unknown[614]: fetched base config from "system"
Apr 12 18:28:08.805177 unknown[614]: fetched user config from "qemu"
Apr 12 18:28:08.805918 ignition[614]: fetch-offline: fetch-offline passed
Apr 12 18:28:08.805979 ignition[614]: Ignition finished successfully
Apr 12 18:28:08.807320 systemd[1]: Finished ignition-fetch-offline.service.
Apr 12 18:28:08.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:08.808515 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 12 18:28:08.809205 systemd[1]: Starting ignition-kargs.service...
Apr 12 18:28:08.817736 ignition[720]: Ignition 2.14.0
Apr 12 18:28:08.817750 ignition[720]: Stage: kargs
Apr 12 18:28:08.817835 ignition[720]: no configs at "/usr/lib/ignition/base.d"
Apr 12 18:28:08.817845 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 12 18:28:08.819141 ignition[720]: kargs: kargs passed
Apr 12 18:28:08.819185 ignition[720]: Ignition finished successfully
Apr 12 18:28:08.821860 systemd[1]: Finished ignition-kargs.service.
Apr 12 18:28:08.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:08.823740 systemd[1]: Starting ignition-disks.service...
Apr 12 18:28:08.830227 ignition[726]: Ignition 2.14.0
Apr 12 18:28:08.830237 ignition[726]: Stage: disks
Apr 12 18:28:08.830317 ignition[726]: no configs at "/usr/lib/ignition/base.d"
Apr 12 18:28:08.830326 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 12 18:28:08.832728 systemd[1]: Finished ignition-disks.service.
Apr 12 18:28:08.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:08.831419 ignition[726]: disks: disks passed
Apr 12 18:28:08.834177 systemd[1]: Reached target initrd-root-device.target.
Apr 12 18:28:08.831475 ignition[726]: Ignition finished successfully
Apr 12 18:28:08.835302 systemd[1]: Reached target local-fs-pre.target.
Apr 12 18:28:08.836325 systemd[1]: Reached target local-fs.target.
Apr 12 18:28:08.837498 systemd[1]: Reached target sysinit.target.
Apr 12 18:28:08.838531 systemd[1]: Reached target basic.target.
Apr 12 18:28:08.840338 systemd[1]: Starting systemd-fsck-root.service...
Apr 12 18:28:08.850857 systemd-fsck[735]: ROOT: clean, 612/553520 files, 56018/553472 blocks
Apr 12 18:28:08.854058 systemd[1]: Finished systemd-fsck-root.service.
Apr 12 18:28:08.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:08.857377 systemd[1]: Mounting sysroot.mount...
Apr 12 18:28:08.863452 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Apr 12 18:28:08.863859 systemd[1]: Mounted sysroot.mount.
Apr 12 18:28:08.864489 systemd[1]: Reached target initrd-root-fs.target.
Apr 12 18:28:08.866328 systemd[1]: Mounting sysroot-usr.mount...
Apr 12 18:28:08.867146 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Apr 12 18:28:08.867182 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 12 18:28:08.867203 systemd[1]: Reached target ignition-diskful.target.
Apr 12 18:28:08.868823 systemd[1]: Mounted sysroot-usr.mount.
Apr 12 18:28:08.870347 systemd[1]: Starting initrd-setup-root.service...
Apr 12 18:28:08.874510 initrd-setup-root[745]: cut: /sysroot/etc/passwd: No such file or directory
Apr 12 18:28:08.879005 initrd-setup-root[753]: cut: /sysroot/etc/group: No such file or directory
Apr 12 18:28:08.882875 initrd-setup-root[761]: cut: /sysroot/etc/shadow: No such file or directory
Apr 12 18:28:08.886463 initrd-setup-root[769]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 12 18:28:08.911948 systemd[1]: Finished initrd-setup-root.service.
Apr 12 18:28:08.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:08.913292 systemd[1]: Starting ignition-mount.service...
Apr 12 18:28:08.914559 systemd[1]: Starting sysroot-boot.service...
Apr 12 18:28:08.918965 bash[786]: umount: /sysroot/usr/share/oem: not mounted.
Apr 12 18:28:08.926512 ignition[788]: INFO : Ignition 2.14.0
Apr 12 18:28:08.926512 ignition[788]: INFO : Stage: mount
Apr 12 18:28:08.927948 ignition[788]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 12 18:28:08.927948 ignition[788]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 12 18:28:08.927948 ignition[788]: INFO : mount: mount passed
Apr 12 18:28:08.927948 ignition[788]: INFO : Ignition finished successfully
Apr 12 18:28:08.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:08.929735 systemd[1]: Finished ignition-mount.service.
Apr 12 18:28:08.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:08.932787 systemd[1]: Finished sysroot-boot.service.
Apr 12 18:28:09.534623 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Apr 12 18:28:09.540491 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (797)
Apr 12 18:28:09.542713 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Apr 12 18:28:09.542728 kernel: BTRFS info (device vda6): using free space tree
Apr 12 18:28:09.542737 kernel: BTRFS info (device vda6): has skinny extents
Apr 12 18:28:09.545469 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Apr 12 18:28:09.546879 systemd[1]: Starting ignition-files.service...
Apr 12 18:28:09.560399 ignition[817]: INFO : Ignition 2.14.0
Apr 12 18:28:09.560399 ignition[817]: INFO : Stage: files
Apr 12 18:28:09.561738 ignition[817]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 12 18:28:09.561738 ignition[817]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 12 18:28:09.561738 ignition[817]: DEBUG : files: compiled without relabeling support, skipping
Apr 12 18:28:09.566649 ignition[817]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 12 18:28:09.566649 ignition[817]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 12 18:28:09.570218 ignition[817]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 12 18:28:09.571464 ignition[817]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 12 18:28:09.571464 ignition[817]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 12 18:28:09.570907 unknown[817]: wrote ssh authorized keys file for user: core
Apr 12 18:28:09.574863 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Apr 12 18:28:09.574863 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Apr 12 18:28:09.814507 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 12 18:28:09.870460 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Apr 12 18:28:09.872223 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Apr 12 18:28:09.872223 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1
Apr 12 18:28:10.159618 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 12 18:28:10.338620 systemd-networkd[698]: eth0: Gained IPv6LL
Apr 12 18:28:10.397897 ignition[817]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a
Apr 12 18:28:10.400298 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Apr 12 18:28:10.400298 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Apr 12 18:28:10.400298 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1
Apr 12 18:28:10.628704 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Apr 12 18:28:10.967591 ignition[817]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251
Apr 12 18:28:10.967591 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Apr 12 18:28:10.971567 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 12 18:28:10.971567 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 12 18:28:10.971567 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Apr 12 18:28:10.971567 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubeadm: attempt #1
Apr 12 18:28:11.027756 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Apr 12 18:28:11.283433 ignition[817]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 45b3100984c979ba0f1c0df8f4211474c2d75ebe916e677dff5fc8e3b3697cf7a953da94e356f39684cc860dff6878b772b7514c55651c2f866d9efeef23f970
Apr 12 18:28:11.283433 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Apr 12 18:28:11.287183 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl"
Apr 12 18:28:11.287183 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubectl: attempt #1
Apr 12 18:28:11.305825 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Apr 12 18:28:11.576956 ignition[817]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 14be61ec35669a27acf2df0380afb85b9b42311d50ca1165718421c5f605df1119ec9ae314696a674051712e80deeaa65e62d2d62ed4d107fe99d0aaf419dafc
Apr 12 18:28:11.576956 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl"
Apr 12 18:28:11.580882 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet"
Apr 12 18:28:11.580882 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubelet: attempt #1
Apr 12 18:28:11.598984 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Apr 12 18:28:12.154826 ignition[817]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 71857ff499ae135fa478e1827a0ed8865e578a8d2b1e25876e914fd0beba03733801c0654bcd4c0567bafeb16887dafb2dbbe8d1116e6ea28dcd8366c142d348
Apr 12 18:28:12.157510 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet"
Apr 12 18:28:12.157510 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json"
Apr 12 18:28:12.157510 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json"
Apr 12 18:28:12.157510 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 12 18:28:12.157510 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Apr 12 18:28:12.439112 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 12 18:28:12.482627 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 12 18:28:12.482627 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh"
Apr 12 18:28:12.485547 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh"
Apr 12 18:28:12.485547 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 12 18:28:12.485547 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 12 18:28:12.485547 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 12 18:28:12.485547 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 12 18:28:12.485547 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 12 18:28:12.485547 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 12 18:28:12.485547 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 12 18:28:12.485547 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 12 18:28:12.485547 ignition[817]: INFO : files: op(11): [started] processing unit "prepare-critools.service"
Apr 12 18:28:12.485547 ignition[817]: INFO : files: op(11): op(12): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Apr 12 18:28:12.501665 ignition[817]: INFO : files: op(11): op(12): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Apr 12 18:28:12.501665 ignition[817]: INFO : files: op(11): [finished] processing unit "prepare-critools.service"
Apr 12 18:28:12.501665 ignition[817]: INFO : files: op(13): [started] processing unit "prepare-helm.service"
Apr 12 18:28:12.501665 ignition[817]: INFO : files: op(13): op(14): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 12 18:28:12.501665 ignition[817]: INFO : files: op(13): op(14): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 12 18:28:12.501665 ignition[817]: INFO : files: op(13): [finished] processing unit "prepare-helm.service"
Apr 12 18:28:12.501665 ignition[817]: INFO : files: op(15): [started] processing unit "coreos-metadata.service"
Apr 12 18:28:12.501665 ignition[817]: INFO : files: op(15): op(16): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 12 18:28:12.501665 ignition[817]: INFO : files: op(15): op(16): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 12 18:28:12.501665 ignition[817]: INFO : files: op(15): [finished] processing unit "coreos-metadata.service"
Apr 12 18:28:12.501665 ignition[817]: INFO : files: op(17): [started] processing unit "containerd.service"
Apr 12 18:28:12.501665 ignition[817]: INFO : files: op(17): op(18): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 12 18:28:12.501665 ignition[817]: INFO : files: op(17): op(18): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 12 18:28:12.501665 ignition[817]: INFO : files: op(17): [finished] processing unit "containerd.service"
Apr 12 18:28:12.501665 ignition[817]: INFO : files: op(19): [started] processing unit "prepare-cni-plugins.service"
Apr 12 18:28:12.501665 ignition[817]: INFO : files: op(19): op(1a): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Apr 12 18:28:12.501665 ignition[817]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Apr 12 18:28:12.501665 ignition[817]: INFO : files: op(19): [finished] processing unit "prepare-cni-plugins.service"
Apr 12 18:28:12.527221 ignition[817]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-critools.service"
Apr 12 18:28:12.527221 ignition[817]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-critools.service"
Apr 12 18:28:12.527221 ignition[817]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service"
Apr 12 18:28:12.527221 ignition[817]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service"
Apr 12 18:28:12.527221 ignition[817]: INFO : files: op(1d): [started] setting preset to disabled for "coreos-metadata.service"
Apr 12 18:28:12.527221 ignition[817]: INFO : files: op(1d): op(1e): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 12 18:28:12.537863 ignition[817]: INFO : files: op(1d): op(1e): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 12 18:28:12.546488 kernel: kauditd_printk_skb: 23 callbacks suppressed
Apr 12 18:28:12.546508 kernel: audit: type=1130 audit(1712946492.540:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:12.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:12.540105 systemd[1]: Finished ignition-files.service.
Apr 12 18:28:12.547468 ignition[817]: INFO : files: op(1d): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 12 18:28:12.547468 ignition[817]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-cni-plugins.service"
Apr 12 18:28:12.547468 ignition[817]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Apr 12 18:28:12.547468 ignition[817]: INFO : files: createResultFile: createFiles: op(20): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 12 18:28:12.547468 ignition[817]: INFO : files: createResultFile: createFiles: op(20): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 12 18:28:12.547468 ignition[817]: INFO : files: files passed
Apr 12 18:28:12.547468 ignition[817]: INFO : Ignition finished successfully
Apr 12 18:28:12.565855 kernel: audit: type=1130 audit(1712946492.548:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:12.565875 kernel: audit: type=1131 audit(1712946492.554:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:12.565885 kernel: audit: type=1130 audit(1712946492.561:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:12.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:12.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:12.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:12.541855 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Apr 12 18:28:12.542575 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Apr 12 18:28:12.568726 initrd-setup-root-after-ignition[843]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Apr 12 18:28:12.543192 systemd[1]: Starting ignition-quench.service...
Apr 12 18:28:12.570653 initrd-setup-root-after-ignition[845]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 12 18:28:12.547521 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 12 18:28:12.547599 systemd[1]: Finished ignition-quench.service.
Apr 12 18:28:12.560067 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Apr 12 18:28:12.562010 systemd[1]: Reached target ignition-complete.target.
Apr 12 18:28:12.565838 systemd[1]: Starting initrd-parse-etc.service...
Apr 12 18:28:12.577181 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 12 18:28:12.577262 systemd[1]: Finished initrd-parse-etc.service.
Apr 12 18:28:12.582986 kernel: audit: type=1130 audit(1712946492.578:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:12.583004 kernel: audit: type=1131 audit(1712946492.578:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:12.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:12.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:12.578712 systemd[1]: Reached target initrd-fs.target.
Apr 12 18:28:12.583554 systemd[1]: Reached target initrd.target.
Apr 12 18:28:12.584672 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Apr 12 18:28:12.585337 systemd[1]: Starting dracut-pre-pivot.service...
Apr 12 18:28:12.595122 systemd[1]: Finished dracut-pre-pivot.service.
Apr 12 18:28:12.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:12.596491 systemd[1]: Starting initrd-cleanup.service...
Apr 12 18:28:12.599358 kernel: audit: type=1130 audit(1712946492.595:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:12.608936 systemd[1]: Stopped target nss-lookup.target.
Apr 12 18:28:12.609711 systemd[1]: Stopped target remote-cryptsetup.target.
Apr 12 18:28:12.610498 systemd[1]: Stopped target timers.target.
Apr 12 18:28:12.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:12.611153 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 12 18:28:12.618472 kernel: audit: type=1131 audit(1712946492.611:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.611249 systemd[1]: Stopped dracut-pre-pivot.service. Apr 12 18:28:12.612348 systemd[1]: Stopped target initrd.target. Apr 12 18:28:12.613397 systemd[1]: Stopped target basic.target. Apr 12 18:28:12.617029 systemd[1]: Stopped target ignition-complete.target. Apr 12 18:28:12.619189 systemd[1]: Stopped target ignition-diskful.target. Apr 12 18:28:12.620354 systemd[1]: Stopped target initrd-root-device.target. Apr 12 18:28:12.621506 systemd[1]: Stopped target remote-fs.target. Apr 12 18:28:12.622553 systemd[1]: Stopped target remote-fs-pre.target. Apr 12 18:28:12.623772 systemd[1]: Stopped target sysinit.target. Apr 12 18:28:12.626132 systemd[1]: Stopped target local-fs.target. Apr 12 18:28:12.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.627255 systemd[1]: Stopped target local-fs-pre.target. Apr 12 18:28:12.633929 kernel: audit: type=1131 audit(1712946492.630:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.628326 systemd[1]: Stopped target swap.target. Apr 12 18:28:12.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.629348 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Apr 12 18:28:12.640495 kernel: audit: type=1131 audit(1712946492.634:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.629459 systemd[1]: Stopped dracut-pre-mount.service. Apr 12 18:28:12.630568 systemd[1]: Stopped target cryptsetup.target. Apr 12 18:28:12.633412 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 12 18:28:12.633519 systemd[1]: Stopped dracut-initqueue.service. Apr 12 18:28:12.634603 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 12 18:28:12.634702 systemd[1]: Stopped ignition-fetch-offline.service. Apr 12 18:28:12.637626 systemd[1]: Stopped target paths.target. Apr 12 18:28:12.641069 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 12 18:28:12.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.643542 systemd[1]: Stopped systemd-ask-password-console.path. Apr 12 18:28:12.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.644463 systemd[1]: Stopped target slices.target. Apr 12 18:28:12.654767 iscsid[704]: iscsid shutting down. Apr 12 18:28:12.648099 systemd[1]: Stopped target sockets.target. Apr 12 18:28:12.649200 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 12 18:28:12.649301 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Apr 12 18:28:12.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.650431 systemd[1]: ignition-files.service: Deactivated successfully. Apr 12 18:28:12.661510 ignition[858]: INFO : Ignition 2.14.0 Apr 12 18:28:12.661510 ignition[858]: INFO : Stage: umount Apr 12 18:28:12.661510 ignition[858]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 12 18:28:12.661510 ignition[858]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:28:12.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.650538 systemd[1]: Stopped ignition-files.service. Apr 12 18:28:12.667499 ignition[858]: INFO : umount: umount passed Apr 12 18:28:12.667499 ignition[858]: INFO : Ignition finished successfully Apr 12 18:28:12.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.652684 systemd[1]: Stopping ignition-mount.service... Apr 12 18:28:12.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:28:12.654329 systemd[1]: Stopping iscsid.service... Apr 12 18:28:12.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.655819 systemd[1]: Stopping sysroot-boot.service... Apr 12 18:28:12.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.657913 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 12 18:28:12.658043 systemd[1]: Stopped systemd-udev-trigger.service. Apr 12 18:28:12.659547 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 12 18:28:12.659631 systemd[1]: Stopped dracut-pre-trigger.service. Apr 12 18:28:12.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.662098 systemd[1]: iscsid.service: Deactivated successfully. Apr 12 18:28:12.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.662187 systemd[1]: Stopped iscsid.service. Apr 12 18:28:12.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.663881 systemd[1]: iscsid.socket: Deactivated successfully. Apr 12 18:28:12.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:28:12.663943 systemd[1]: Closed iscsid.socket. Apr 12 18:28:12.664882 systemd[1]: Stopping iscsiuio.service... Apr 12 18:28:12.667697 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 12 18:28:12.668084 systemd[1]: iscsiuio.service: Deactivated successfully. Apr 12 18:28:12.668160 systemd[1]: Stopped iscsiuio.service. Apr 12 18:28:12.669038 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 12 18:28:12.669109 systemd[1]: Finished initrd-cleanup.service. Apr 12 18:28:12.670300 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 12 18:28:12.670367 systemd[1]: Stopped ignition-mount.service. Apr 12 18:28:12.671412 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 12 18:28:12.671491 systemd[1]: Stopped sysroot-boot.service. Apr 12 18:28:12.672994 systemd[1]: Stopped target network.target. Apr 12 18:28:12.674197 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 12 18:28:12.674226 systemd[1]: Closed iscsiuio.socket. Apr 12 18:28:12.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.675277 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 12 18:28:12.675315 systemd[1]: Stopped ignition-disks.service. Apr 12 18:28:12.676561 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 12 18:28:12.676597 systemd[1]: Stopped ignition-kargs.service. Apr 12 18:28:12.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:28:12.677661 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 12 18:28:12.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.677704 systemd[1]: Stopped ignition-setup.service. Apr 12 18:28:12.678833 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 12 18:28:12.678869 systemd[1]: Stopped initrd-setup-root.service. Apr 12 18:28:12.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.680014 systemd[1]: Stopping systemd-networkd.service... Apr 12 18:28:12.681324 systemd[1]: Stopping systemd-resolved.service... Apr 12 18:28:12.690637 systemd-networkd[698]: eth0: DHCPv6 lease lost Apr 12 18:28:12.706000 audit: BPF prog-id=9 op=UNLOAD Apr 12 18:28:12.691728 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 12 18:28:12.707000 audit: BPF prog-id=6 op=UNLOAD Apr 12 18:28:12.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.691810 systemd[1]: Stopped systemd-networkd.service. Apr 12 18:28:12.693212 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 12 18:28:12.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.693239 systemd[1]: Closed systemd-networkd.socket. Apr 12 18:28:12.694495 systemd[1]: Stopping network-cleanup.service... Apr 12 18:28:12.695667 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Apr 12 18:28:12.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.695722 systemd[1]: Stopped parse-ip-for-networkd.service. Apr 12 18:28:12.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.696887 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 12 18:28:12.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.696927 systemd[1]: Stopped systemd-sysctl.service. Apr 12 18:28:12.698561 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 12 18:28:12.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.698602 systemd[1]: Stopped systemd-modules-load.service. Apr 12 18:28:12.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.699382 systemd[1]: Stopping systemd-udevd.service... Apr 12 18:28:12.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.703327 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 12 18:28:12.703758 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Apr 12 18:28:12.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:12.703836 systemd[1]: Stopped systemd-resolved.service. Apr 12 18:28:12.706995 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 12 18:28:12.707083 systemd[1]: Stopped network-cleanup.service. Apr 12 18:28:12.708671 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 12 18:28:12.708791 systemd[1]: Stopped systemd-udevd.service. Apr 12 18:28:12.710083 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 12 18:28:12.710116 systemd[1]: Closed systemd-udevd-control.socket. Apr 12 18:28:12.711137 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 12 18:28:12.711167 systemd[1]: Closed systemd-udevd-kernel.socket. Apr 12 18:28:12.712236 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 12 18:28:12.731000 audit: BPF prog-id=5 op=UNLOAD Apr 12 18:28:12.731000 audit: BPF prog-id=4 op=UNLOAD Apr 12 18:28:12.731000 audit: BPF prog-id=3 op=UNLOAD Apr 12 18:28:12.712276 systemd[1]: Stopped dracut-pre-udev.service. Apr 12 18:28:12.713356 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 12 18:28:12.732000 audit: BPF prog-id=8 op=UNLOAD Apr 12 18:28:12.732000 audit: BPF prog-id=7 op=UNLOAD Apr 12 18:28:12.713393 systemd[1]: Stopped dracut-cmdline.service. Apr 12 18:28:12.714631 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 12 18:28:12.714666 systemd[1]: Stopped dracut-cmdline-ask.service. Apr 12 18:28:12.716430 systemd[1]: Starting initrd-udevadm-cleanup-db.service... 
Apr 12 18:28:12.717099 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 12 18:28:12.717155 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Apr 12 18:28:12.719060 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 12 18:28:12.719097 systemd[1]: Stopped kmod-static-nodes.service. Apr 12 18:28:12.719831 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 12 18:28:12.719868 systemd[1]: Stopped systemd-vconsole-setup.service. Apr 12 18:28:12.721796 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Apr 12 18:28:12.722163 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 12 18:28:12.722241 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Apr 12 18:28:12.723408 systemd[1]: Reached target initrd-switch-root.target. Apr 12 18:28:12.725233 systemd[1]: Starting initrd-switch-root.service... Apr 12 18:28:12.730492 systemd[1]: Switching root. Apr 12 18:28:12.751550 systemd-journald[250]: Journal stopped Apr 12 18:28:14.893109 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). Apr 12 18:28:14.893191 kernel: SELinux: Class mctp_socket not defined in policy. Apr 12 18:28:14.893209 kernel: SELinux: Class anon_inode not defined in policy. 
Apr 12 18:28:14.893220 kernel: SELinux: the above unknown classes and permissions will be allowed Apr 12 18:28:14.893230 kernel: SELinux: policy capability network_peer_controls=1 Apr 12 18:28:14.893239 kernel: SELinux: policy capability open_perms=1 Apr 12 18:28:14.893249 kernel: SELinux: policy capability extended_socket_class=1 Apr 12 18:28:14.893259 kernel: SELinux: policy capability always_check_network=0 Apr 12 18:28:14.893268 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 12 18:28:14.893278 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 12 18:28:14.893287 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 12 18:28:14.893298 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 12 18:28:14.893308 systemd[1]: Successfully loaded SELinux policy in 32.340ms. Apr 12 18:28:14.893324 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.522ms. Apr 12 18:28:14.893335 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Apr 12 18:28:14.893348 systemd[1]: Detected virtualization kvm. Apr 12 18:28:14.893359 systemd[1]: Detected architecture arm64. Apr 12 18:28:14.893369 systemd[1]: Detected first boot. Apr 12 18:28:14.893381 systemd[1]: Initializing machine ID from VM UUID. Apr 12 18:28:14.893392 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Apr 12 18:28:14.893404 systemd[1]: Populated /etc with preset unit settings. Apr 12 18:28:14.893416 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Apr 12 18:28:14.893427 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:28:14.893460 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:28:14.893472 systemd[1]: Queued start job for default target multi-user.target. Apr 12 18:28:14.893483 systemd[1]: Unnecessary job was removed for dev-vda6.device. Apr 12 18:28:14.893494 systemd[1]: Created slice system-addon\x2dconfig.slice. Apr 12 18:28:14.893506 systemd[1]: Created slice system-addon\x2drun.slice. Apr 12 18:28:14.893516 systemd[1]: Created slice system-getty.slice. Apr 12 18:28:14.893526 systemd[1]: Created slice system-modprobe.slice. Apr 12 18:28:14.893536 systemd[1]: Created slice system-serial\x2dgetty.slice. Apr 12 18:28:14.893547 systemd[1]: Created slice system-system\x2dcloudinit.slice. Apr 12 18:28:14.893557 systemd[1]: Created slice system-systemd\x2dfsck.slice. Apr 12 18:28:14.893567 systemd[1]: Created slice user.slice. Apr 12 18:28:14.893577 systemd[1]: Started systemd-ask-password-console.path. Apr 12 18:28:14.893588 systemd[1]: Started systemd-ask-password-wall.path. Apr 12 18:28:14.893601 systemd[1]: Set up automount boot.automount. Apr 12 18:28:14.893615 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Apr 12 18:28:14.893626 systemd[1]: Reached target integritysetup.target. Apr 12 18:28:14.893637 systemd[1]: Reached target remote-cryptsetup.target. Apr 12 18:28:14.893648 systemd[1]: Reached target remote-fs.target. Apr 12 18:28:14.893658 systemd[1]: Reached target slices.target. Apr 12 18:28:14.893669 systemd[1]: Reached target swap.target. Apr 12 18:28:14.893685 systemd[1]: Reached target torcx.target. Apr 12 18:28:14.893697 systemd[1]: Reached target veritysetup.target. 
Apr 12 18:28:14.893708 systemd[1]: Listening on systemd-coredump.socket. Apr 12 18:28:14.893719 systemd[1]: Listening on systemd-initctl.socket. Apr 12 18:28:14.893729 systemd[1]: Listening on systemd-journald-audit.socket. Apr 12 18:28:14.893740 systemd[1]: Listening on systemd-journald-dev-log.socket. Apr 12 18:28:14.893751 systemd[1]: Listening on systemd-journald.socket. Apr 12 18:28:14.893761 systemd[1]: Listening on systemd-networkd.socket. Apr 12 18:28:14.893771 systemd[1]: Listening on systemd-udevd-control.socket. Apr 12 18:28:14.893782 systemd[1]: Listening on systemd-udevd-kernel.socket. Apr 12 18:28:14.893792 systemd[1]: Listening on systemd-userdbd.socket. Apr 12 18:28:14.893803 systemd[1]: Mounting dev-hugepages.mount... Apr 12 18:28:14.893813 systemd[1]: Mounting dev-mqueue.mount... Apr 12 18:28:14.893824 systemd[1]: Mounting media.mount... Apr 12 18:28:14.893836 systemd[1]: Mounting sys-kernel-debug.mount... Apr 12 18:28:14.893846 systemd[1]: Mounting sys-kernel-tracing.mount... Apr 12 18:28:14.893856 systemd[1]: Mounting tmp.mount... Apr 12 18:28:14.893867 systemd[1]: Starting flatcar-tmpfiles.service... Apr 12 18:28:14.893878 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Apr 12 18:28:14.893889 systemd[1]: Starting kmod-static-nodes.service... Apr 12 18:28:14.893900 systemd[1]: Starting modprobe@configfs.service... Apr 12 18:28:14.893912 systemd[1]: Starting modprobe@dm_mod.service... Apr 12 18:28:14.893922 systemd[1]: Starting modprobe@drm.service... Apr 12 18:28:14.893933 systemd[1]: Starting modprobe@efi_pstore.service... Apr 12 18:28:14.893947 systemd[1]: Starting modprobe@fuse.service... Apr 12 18:28:14.893958 systemd[1]: Starting modprobe@loop.service... Apr 12 18:28:14.893969 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Apr 12 18:28:14.893980 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Apr 12 18:28:14.893990 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Apr 12 18:28:14.894002 systemd[1]: Starting systemd-journald.service... Apr 12 18:28:14.894014 systemd[1]: Starting systemd-modules-load.service... Apr 12 18:28:14.894024 systemd[1]: Starting systemd-network-generator.service... Apr 12 18:28:14.894034 systemd[1]: Starting systemd-remount-fs.service... Apr 12 18:28:14.894044 systemd[1]: Starting systemd-udev-trigger.service... Apr 12 18:28:14.894054 systemd[1]: Mounted dev-hugepages.mount. Apr 12 18:28:14.894065 systemd[1]: Mounted dev-mqueue.mount. Apr 12 18:28:14.894075 systemd[1]: Mounted media.mount. Apr 12 18:28:14.894085 kernel: fuse: init (API version 7.34) Apr 12 18:28:14.894098 systemd[1]: Mounted sys-kernel-debug.mount. Apr 12 18:28:14.894109 systemd[1]: Mounted sys-kernel-tracing.mount. Apr 12 18:28:14.894119 systemd[1]: Mounted tmp.mount. Apr 12 18:28:14.894129 systemd[1]: Finished kmod-static-nodes.service. Apr 12 18:28:14.894140 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 12 18:28:14.894150 systemd[1]: Finished modprobe@configfs.service. Apr 12 18:28:14.894160 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 12 18:28:14.894183 systemd[1]: Finished modprobe@dm_mod.service. Apr 12 18:28:14.894194 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 12 18:28:14.894205 kernel: loop: module loaded Apr 12 18:28:14.894214 systemd[1]: Finished modprobe@drm.service. Apr 12 18:28:14.894225 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 12 18:28:14.894236 systemd[1]: Finished modprobe@efi_pstore.service. Apr 12 18:28:14.894247 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 12 18:28:14.894257 systemd[1]: Finished modprobe@fuse.service. 
Apr 12 18:28:14.894267 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 12 18:28:14.894278 systemd[1]: Finished modprobe@loop.service. Apr 12 18:28:14.894291 systemd-journald[985]: Journal started Apr 12 18:28:14.894333 systemd-journald[985]: Runtime Journal (/run/log/journal/e764fccc8f25477d92069016263e0eba) is 6.0M, max 48.7M, 42.6M free. Apr 12 18:28:14.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:14.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:14.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:14.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:14.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:14.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:14.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Apr 12 18:28:14.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:14.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:14.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:14.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:14.891000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Apr 12 18:28:14.891000 audit[985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc43199a0 a2=4000 a3=1 items=0 ppid=1 pid=985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:28:14.891000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Apr 12 18:28:14.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:28:14.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:14.898384 systemd[1]: Finished systemd-modules-load.service. Apr 12 18:28:14.898428 systemd[1]: Started systemd-journald.service. Apr 12 18:28:14.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:14.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:14.900332 systemd[1]: Finished flatcar-tmpfiles.service. Apr 12 18:28:14.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:14.901570 systemd[1]: Finished systemd-network-generator.service. Apr 12 18:28:14.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:14.902866 systemd[1]: Finished systemd-remount-fs.service. Apr 12 18:28:14.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:14.903922 systemd[1]: Reached target network-pre.target. Apr 12 18:28:14.905655 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Apr 12 18:28:14.907419 systemd[1]: Mounting sys-kernel-config.mount... Apr 12 18:28:14.908105 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 12 18:28:14.911268 systemd[1]: Starting systemd-hwdb-update.service... Apr 12 18:28:14.913362 systemd[1]: Starting systemd-journal-flush.service... Apr 12 18:28:14.914240 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 12 18:28:14.915335 systemd[1]: Starting systemd-random-seed.service... Apr 12 18:28:14.921793 systemd-journald[985]: Time spent on flushing to /var/log/journal/e764fccc8f25477d92069016263e0eba is 12.613ms for 970 entries. Apr 12 18:28:14.921793 systemd-journald[985]: System Journal (/var/log/journal/e764fccc8f25477d92069016263e0eba) is 8.0M, max 195.6M, 187.6M free. Apr 12 18:28:14.955710 systemd-journald[985]: Received client request to flush runtime journal. Apr 12 18:28:14.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:14.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:14.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:14.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:28:14.916124 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Apr 12 18:28:14.917153 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:28:14.919138 systemd[1]: Starting systemd-sysusers.service... Apr 12 18:28:14.922707 systemd[1]: Finished systemd-udev-trigger.service. Apr 12 18:28:14.956622 udevadm[1038]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 12 18:28:14.923718 systemd[1]: Mounted sys-fs-fuse-connections.mount. Apr 12 18:28:14.924567 systemd[1]: Mounted sys-kernel-config.mount. Apr 12 18:28:14.926378 systemd[1]: Starting systemd-udev-settle.service... Apr 12 18:28:14.929380 systemd[1]: Finished systemd-random-seed.service. Apr 12 18:28:14.930450 systemd[1]: Reached target first-boot-complete.target. Apr 12 18:28:14.936419 systemd[1]: Finished systemd-sysctl.service. Apr 12 18:28:14.951707 systemd[1]: Finished systemd-sysusers.service. Apr 12 18:28:14.953752 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Apr 12 18:28:14.956691 systemd[1]: Finished systemd-journal-flush.service. Apr 12 18:28:14.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:14.970301 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Apr 12 18:28:14.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:15.284152 systemd[1]: Finished systemd-hwdb-update.service. 
Apr 12 18:28:15.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:15.286147 systemd[1]: Starting systemd-udevd.service... Apr 12 18:28:15.305512 systemd-udevd[1049]: Using default interface naming scheme 'v252'. Apr 12 18:28:15.316413 systemd[1]: Started systemd-udevd.service. Apr 12 18:28:15.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:15.319143 systemd[1]: Starting systemd-networkd.service... Apr 12 18:28:15.324216 systemd[1]: Starting systemd-userdbd.service... Apr 12 18:28:15.334848 systemd[1]: Found device dev-ttyAMA0.device. Apr 12 18:28:15.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:15.355633 systemd[1]: Started systemd-userdbd.service. Apr 12 18:28:15.379551 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Apr 12 18:28:15.426037 systemd-networkd[1059]: lo: Link UP Apr 12 18:28:15.426321 systemd-networkd[1059]: lo: Gained carrier Apr 12 18:28:15.426760 systemd-networkd[1059]: Enumeration completed Apr 12 18:28:15.426941 systemd[1]: Started systemd-networkd.service. Apr 12 18:28:15.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:15.427876 systemd-networkd[1059]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 12 18:28:15.430836 systemd-networkd[1059]: eth0: Link UP Apr 12 18:28:15.430842 systemd-networkd[1059]: eth0: Gained carrier Apr 12 18:28:15.440867 systemd[1]: Finished systemd-udev-settle.service. Apr 12 18:28:15.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:15.442893 systemd[1]: Starting lvm2-activation-early.service... Apr 12 18:28:15.450455 systemd-networkd[1059]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 12 18:28:15.452388 lvm[1083]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 12 18:28:15.483308 systemd[1]: Finished lvm2-activation-early.service. Apr 12 18:28:15.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:15.484206 systemd[1]: Reached target cryptsetup.target. Apr 12 18:28:15.486084 systemd[1]: Starting lvm2-activation.service... Apr 12 18:28:15.489543 lvm[1086]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 12 18:28:15.521253 systemd[1]: Finished lvm2-activation.service. Apr 12 18:28:15.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:15.522100 systemd[1]: Reached target local-fs-pre.target. Apr 12 18:28:15.522850 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 12 18:28:15.522882 systemd[1]: Reached target local-fs.target. Apr 12 18:28:15.523540 systemd[1]: Reached target machines.target. 
Apr 12 18:28:15.525361 systemd[1]: Starting ldconfig.service... Apr 12 18:28:15.526336 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Apr 12 18:28:15.526387 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:28:15.527423 systemd[1]: Starting systemd-boot-update.service... Apr 12 18:28:15.529183 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Apr 12 18:28:15.531342 systemd[1]: Starting systemd-machine-id-commit.service... Apr 12 18:28:15.532302 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Apr 12 18:28:15.532343 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Apr 12 18:28:15.533336 systemd[1]: Starting systemd-tmpfiles-setup.service... Apr 12 18:28:15.536538 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1089 (bootctl) Apr 12 18:28:15.537527 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Apr 12 18:28:15.544882 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Apr 12 18:28:15.545978 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Apr 12 18:28:15.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:15.547163 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 12 18:28:15.548374 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 12 18:28:15.627822 systemd[1]: Finished systemd-machine-id-commit.service. 
Apr 12 18:28:15.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:15.641355 systemd-fsck[1098]: fsck.fat 4.2 (2021-01-31) Apr 12 18:28:15.641355 systemd-fsck[1098]: /dev/vda1: 236 files, 117047/258078 clusters Apr 12 18:28:15.643294 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Apr 12 18:28:15.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:15.719459 ldconfig[1088]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 12 18:28:15.722682 systemd[1]: Finished ldconfig.service. Apr 12 18:28:15.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:15.863492 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 12 18:28:15.865034 systemd[1]: Mounting boot.mount... Apr 12 18:28:15.872001 systemd[1]: Mounted boot.mount. Apr 12 18:28:15.878903 systemd[1]: Finished systemd-boot-update.service. Apr 12 18:28:15.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:15.933453 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Apr 12 18:28:15.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:15.935597 systemd[1]: Starting audit-rules.service... Apr 12 18:28:15.937323 systemd[1]: Starting clean-ca-certificates.service... Apr 12 18:28:15.939119 systemd[1]: Starting systemd-journal-catalog-update.service... Apr 12 18:28:15.941352 systemd[1]: Starting systemd-resolved.service... Apr 12 18:28:15.943606 systemd[1]: Starting systemd-timesyncd.service... Apr 12 18:28:15.945465 systemd[1]: Starting systemd-update-utmp.service... Apr 12 18:28:15.947346 systemd[1]: Finished clean-ca-certificates.service. Apr 12 18:28:15.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:15.948553 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 12 18:28:15.952000 audit[1117]: SYSTEM_BOOT pid=1117 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 12 18:28:15.954922 systemd[1]: Finished systemd-update-utmp.service. Apr 12 18:28:15.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:15.960929 systemd[1]: Finished systemd-journal-catalog-update.service. 
Apr 12 18:28:15.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:15.962972 systemd[1]: Starting systemd-update-done.service... Apr 12 18:28:15.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:15.974593 systemd[1]: Finished systemd-update-done.service. Apr 12 18:28:15.980000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 12 18:28:15.980000 audit[1133]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe516b040 a2=420 a3=0 items=0 ppid=1107 pid=1133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:28:15.980000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 12 18:28:15.981150 augenrules[1133]: No rules Apr 12 18:28:15.982082 systemd[1]: Finished audit-rules.service. Apr 12 18:28:16.006513 systemd[1]: Started systemd-timesyncd.service. Apr 12 18:28:16.007229 systemd-timesyncd[1113]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 12 18:28:16.007282 systemd-timesyncd[1113]: Initial clock synchronization to Fri 2024-04-12 18:28:15.775094 UTC. Apr 12 18:28:16.007637 systemd[1]: Reached target time-set.target. Apr 12 18:28:16.007713 systemd-resolved[1112]: Positive Trust Anchors: Apr 12 18:28:16.007724 systemd-resolved[1112]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 12 18:28:16.007752 systemd-resolved[1112]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Apr 12 18:28:16.015366 systemd-resolved[1112]: Defaulting to hostname 'linux'. Apr 12 18:28:16.016724 systemd[1]: Started systemd-resolved.service. Apr 12 18:28:16.017461 systemd[1]: Reached target network.target. Apr 12 18:28:16.018089 systemd[1]: Reached target nss-lookup.target. Apr 12 18:28:16.018768 systemd[1]: Reached target sysinit.target. Apr 12 18:28:16.019488 systemd[1]: Started motdgen.path. Apr 12 18:28:16.020092 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Apr 12 18:28:16.021145 systemd[1]: Started logrotate.timer. Apr 12 18:28:16.021860 systemd[1]: Started mdadm.timer. Apr 12 18:28:16.022408 systemd[1]: Started systemd-tmpfiles-clean.timer. Apr 12 18:28:16.023128 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 12 18:28:16.023164 systemd[1]: Reached target paths.target. Apr 12 18:28:16.023781 systemd[1]: Reached target timers.target. Apr 12 18:28:16.024689 systemd[1]: Listening on dbus.socket. Apr 12 18:28:16.026356 systemd[1]: Starting docker.socket... Apr 12 18:28:16.027934 systemd[1]: Listening on sshd.socket. Apr 12 18:28:16.028653 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Apr 12 18:28:16.028962 systemd[1]: Listening on docker.socket. Apr 12 18:28:16.029638 systemd[1]: Reached target sockets.target. Apr 12 18:28:16.030269 systemd[1]: Reached target basic.target. Apr 12 18:28:16.031036 systemd[1]: System is tainted: cgroupsv1 Apr 12 18:28:16.031084 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Apr 12 18:28:16.031104 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Apr 12 18:28:16.032084 systemd[1]: Starting containerd.service... Apr 12 18:28:16.033723 systemd[1]: Starting dbus.service... Apr 12 18:28:16.035322 systemd[1]: Starting enable-oem-cloudinit.service... Apr 12 18:28:16.037249 systemd[1]: Starting extend-filesystems.service... Apr 12 18:28:16.038065 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Apr 12 18:28:16.039495 systemd[1]: Starting motdgen.service... Apr 12 18:28:16.040706 jq[1145]: false Apr 12 18:28:16.041182 systemd[1]: Starting prepare-cni-plugins.service... Apr 12 18:28:16.043254 systemd[1]: Starting prepare-critools.service... Apr 12 18:28:16.045760 systemd[1]: Starting prepare-helm.service... Apr 12 18:28:16.047526 systemd[1]: Starting ssh-key-proc-cmdline.service... Apr 12 18:28:16.049423 systemd[1]: Starting sshd-keygen.service... Apr 12 18:28:16.053156 systemd[1]: Starting systemd-logind.service... Apr 12 18:28:16.053831 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:28:16.053888 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 12 18:28:16.054974 systemd[1]: Starting update-engine.service... Apr 12 18:28:16.056719 systemd[1]: Starting update-ssh-keys-after-ignition.service... 
Apr 12 18:28:16.061893 extend-filesystems[1146]: Found vda Apr 12 18:28:16.061893 extend-filesystems[1146]: Found vda1 Apr 12 18:28:16.061893 extend-filesystems[1146]: Found vda2 Apr 12 18:28:16.061893 extend-filesystems[1146]: Found vda3 Apr 12 18:28:16.061893 extend-filesystems[1146]: Found usr Apr 12 18:28:16.061893 extend-filesystems[1146]: Found vda4 Apr 12 18:28:16.061893 extend-filesystems[1146]: Found vda6 Apr 12 18:28:16.061893 extend-filesystems[1146]: Found vda7 Apr 12 18:28:16.061893 extend-filesystems[1146]: Found vda9 Apr 12 18:28:16.061893 extend-filesystems[1146]: Checking size of /dev/vda9 Apr 12 18:28:16.059130 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 12 18:28:16.059705 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Apr 12 18:28:16.061769 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 12 18:28:16.063014 systemd[1]: Finished ssh-key-proc-cmdline.service. Apr 12 18:28:16.096055 tar[1174]: linux-arm64/helm Apr 12 18:28:16.091750 systemd[1]: motdgen.service: Deactivated successfully. Apr 12 18:28:16.091983 systemd[1]: Finished motdgen.service. Apr 12 18:28:16.096411 jq[1168]: true Apr 12 18:28:16.096571 jq[1181]: true Apr 12 18:28:16.105800 tar[1171]: ./ Apr 12 18:28:16.105800 tar[1171]: ./loopback Apr 12 18:28:16.109554 extend-filesystems[1146]: Resized partition /dev/vda9 Apr 12 18:28:16.109271 dbus-daemon[1144]: [system] SELinux support is enabled Apr 12 18:28:16.110042 systemd[1]: Started dbus.service. Apr 12 18:28:16.114765 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 12 18:28:16.114789 systemd[1]: Reached target system-config.target. Apr 12 18:28:16.115519 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Apr 12 18:28:16.115541 systemd[1]: Reached target user-config.target. Apr 12 18:28:16.122621 tar[1172]: crictl Apr 12 18:28:16.137562 extend-filesystems[1208]: resize2fs 1.46.5 (30-Dec-2021) Apr 12 18:28:16.142755 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 12 18:28:16.165914 systemd-logind[1163]: Watching system buttons on /dev/input/event0 (Power Button) Apr 12 18:28:16.166103 systemd-logind[1163]: New seat seat0. Apr 12 18:28:16.167300 systemd[1]: Started systemd-logind.service. Apr 12 18:28:16.184051 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 12 18:28:16.196902 update_engine[1167]: I0412 18:28:16.196614 1167 main.cc:92] Flatcar Update Engine starting Apr 12 18:28:16.204497 extend-filesystems[1208]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 12 18:28:16.204497 extend-filesystems[1208]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 12 18:28:16.204497 extend-filesystems[1208]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 12 18:28:16.208600 extend-filesystems[1146]: Resized filesystem in /dev/vda9 Apr 12 18:28:16.211588 bash[1209]: Updated "/home/core/.ssh/authorized_keys" Apr 12 18:28:16.205531 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 12 18:28:16.211741 env[1175]: time="2024-04-12T18:28:16.205555120Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Apr 12 18:28:16.205777 systemd[1]: Finished extend-filesystems.service. Apr 12 18:28:16.207641 systemd[1]: Started update-engine.service. Apr 12 18:28:16.209601 systemd[1]: Finished update-ssh-keys-after-ignition.service. Apr 12 18:28:16.214985 tar[1171]: ./bandwidth Apr 12 18:28:16.213982 systemd[1]: Started locksmithd.service. 
Apr 12 18:28:16.224529 update_engine[1167]: I0412 18:28:16.224490 1167 update_check_scheduler.cc:74] Next update check in 7m49s Apr 12 18:28:16.257863 tar[1171]: ./ptp Apr 12 18:28:16.287405 tar[1171]: ./vlan Apr 12 18:28:16.317968 tar[1171]: ./host-device Apr 12 18:28:16.322601 env[1175]: time="2024-04-12T18:28:16.322554600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 12 18:28:16.322731 env[1175]: time="2024-04-12T18:28:16.322711560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:28:16.323860 env[1175]: time="2024-04-12T18:28:16.323826320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.154-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:28:16.323908 env[1175]: time="2024-04-12T18:28:16.323859560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:28:16.324113 env[1175]: time="2024-04-12T18:28:16.324092480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:28:16.324163 env[1175]: time="2024-04-12T18:28:16.324112120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 12 18:28:16.324163 env[1175]: time="2024-04-12T18:28:16.324126080Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Apr 12 18:28:16.324163 env[1175]: time="2024-04-12T18:28:16.324135560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Apr 12 18:28:16.324220 env[1175]: time="2024-04-12T18:28:16.324208560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:28:16.324508 env[1175]: time="2024-04-12T18:28:16.324485720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:28:16.324652 env[1175]: time="2024-04-12T18:28:16.324631560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:28:16.324703 env[1175]: time="2024-04-12T18:28:16.324650960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 12 18:28:16.324728 env[1175]: time="2024-04-12T18:28:16.324718840Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Apr 12 18:28:16.324750 env[1175]: time="2024-04-12T18:28:16.324731520Z" level=info msg="metadata content store policy set" policy=shared Apr 12 18:28:16.334465 env[1175]: time="2024-04-12T18:28:16.334427120Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 12 18:28:16.334465 env[1175]: time="2024-04-12T18:28:16.334466440Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 12 18:28:16.334549 env[1175]: time="2024-04-12T18:28:16.334479800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 12 18:28:16.334549 env[1175]: time="2024-04-12T18:28:16.334511280Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1
Apr 12 18:28:16.334549 env[1175]: time="2024-04-12T18:28:16.334527760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 12 18:28:16.334549 env[1175]: time="2024-04-12T18:28:16.334541960Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 12 18:28:16.334656 env[1175]: time="2024-04-12T18:28:16.334553960Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 12 18:28:16.334912 env[1175]: time="2024-04-12T18:28:16.334889680Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 12 18:28:16.334952 env[1175]: time="2024-04-12T18:28:16.334916480Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Apr 12 18:28:16.334952 env[1175]: time="2024-04-12T18:28:16.334930600Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 12 18:28:16.334952 env[1175]: time="2024-04-12T18:28:16.334944760Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 12 18:28:16.335011 env[1175]: time="2024-04-12T18:28:16.334957560Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 12 18:28:16.335098 env[1175]: time="2024-04-12T18:28:16.335077520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 12 18:28:16.335185 env[1175]: time="2024-04-12T18:28:16.335168120Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 12 18:28:16.335491 env[1175]: time="2024-04-12T18:28:16.335470880Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 12 18:28:16.335524 env[1175]: time="2024-04-12T18:28:16.335501680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 12 18:28:16.335524 env[1175]: time="2024-04-12T18:28:16.335518200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 12 18:28:16.335639 env[1175]: time="2024-04-12T18:28:16.335624840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 12 18:28:16.335675 env[1175]: time="2024-04-12T18:28:16.335641680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 12 18:28:16.335675 env[1175]: time="2024-04-12T18:28:16.335654760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 12 18:28:16.335732 env[1175]: time="2024-04-12T18:28:16.335666080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 12 18:28:16.335732 env[1175]: time="2024-04-12T18:28:16.335688640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 12 18:28:16.335732 env[1175]: time="2024-04-12T18:28:16.335700680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 12 18:28:16.335732 env[1175]: time="2024-04-12T18:28:16.335710760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 12 18:28:16.335732 env[1175]: time="2024-04-12T18:28:16.335721200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 12 18:28:16.335864 env[1175]: time="2024-04-12T18:28:16.335733880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 12 18:28:16.335864 env[1175]: time="2024-04-12T18:28:16.335850480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 12 18:28:16.335904 env[1175]: time="2024-04-12T18:28:16.335866360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 12 18:28:16.335904 env[1175]: time="2024-04-12T18:28:16.335878360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 12 18:28:16.335904 env[1175]: time="2024-04-12T18:28:16.335889640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 12 18:28:16.335960 env[1175]: time="2024-04-12T18:28:16.335902640Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Apr 12 18:28:16.335960 env[1175]: time="2024-04-12T18:28:16.335914560Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 12 18:28:16.335960 env[1175]: time="2024-04-12T18:28:16.335931000Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Apr 12 18:28:16.336016 env[1175]: time="2024-04-12T18:28:16.335963480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 12 18:28:16.336219 env[1175]: time="2024-04-12T18:28:16.336165360Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 12 18:28:16.336824 env[1175]: time="2024-04-12T18:28:16.336223920Z" level=info msg="Connect containerd service"
Apr 12 18:28:16.336824 env[1175]: time="2024-04-12T18:28:16.336257480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 12 18:28:16.336880 env[1175]: time="2024-04-12T18:28:16.336819640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 12 18:28:16.337123 env[1175]: time="2024-04-12T18:28:16.337104200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 12 18:28:16.337160 env[1175]: time="2024-04-12T18:28:16.337143920Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 12 18:28:16.337198 env[1175]: time="2024-04-12T18:28:16.337184880Z" level=info msg="containerd successfully booted in 0.139409s"
Apr 12 18:28:16.337296 systemd[1]: Started containerd.service.
Apr 12 18:28:16.337498 env[1175]: time="2024-04-12T18:28:16.337471520Z" level=info msg="Start subscribing containerd event"
Apr 12 18:28:16.337533 env[1175]: time="2024-04-12T18:28:16.337512800Z" level=info msg="Start recovering state"
Apr 12 18:28:16.337585 env[1175]: time="2024-04-12T18:28:16.337572440Z" level=info msg="Start event monitor"
Apr 12 18:28:16.337612 env[1175]: time="2024-04-12T18:28:16.337593520Z" level=info msg="Start snapshots syncer"
Apr 12 18:28:16.337612 env[1175]: time="2024-04-12T18:28:16.337604680Z" level=info msg="Start cni network conf syncer for default"
Apr 12 18:28:16.337661 env[1175]: time="2024-04-12T18:28:16.337611560Z" level=info msg="Start streaming server"
Apr 12 18:28:16.355977 tar[1171]: ./tuning
Apr 12 18:28:16.411249 tar[1171]: ./vrf
Apr 12 18:28:16.454599 tar[1171]: ./sbr
Apr 12 18:28:16.495045 tar[1171]: ./tap
Apr 12 18:28:16.540312 systemd[1]: Finished prepare-critools.service.
Apr 12 18:28:16.547723 tar[1171]: ./dhcp
Apr 12 18:28:16.586041 tar[1174]: linux-arm64/LICENSE
Apr 12 18:28:16.586136 tar[1174]: linux-arm64/README.md
Apr 12 18:28:16.590480 systemd[1]: Finished prepare-helm.service.
Apr 12 18:28:16.592748 locksmithd[1215]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 12 18:28:16.631928 tar[1171]: ./static
Apr 12 18:28:16.653201 tar[1171]: ./firewall
Apr 12 18:28:16.674625 systemd-networkd[1059]: eth0: Gained IPv6LL
Apr 12 18:28:16.686344 tar[1171]: ./macvlan
Apr 12 18:28:16.715542 tar[1171]: ./dummy
Apr 12 18:28:16.744063 tar[1171]: ./bridge
Apr 12 18:28:16.775104 tar[1171]: ./ipvlan
Apr 12 18:28:16.803610 tar[1171]: ./portmap
Apr 12 18:28:16.830725 tar[1171]: ./host-local
Apr 12 18:28:16.866012 systemd[1]: Finished prepare-cni-plugins.service.
Apr 12 18:28:17.421855 sshd_keygen[1178]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 12 18:28:17.438761 systemd[1]: Finished sshd-keygen.service.
Apr 12 18:28:17.440870 systemd[1]: Starting issuegen.service...
Apr 12 18:28:17.445242 systemd[1]: issuegen.service: Deactivated successfully.
Apr 12 18:28:17.445498 systemd[1]: Finished issuegen.service.
Apr 12 18:28:17.447417 systemd[1]: Starting systemd-user-sessions.service...
Apr 12 18:28:17.452533 systemd[1]: Finished systemd-user-sessions.service.
Apr 12 18:28:17.454588 systemd[1]: Started getty@tty1.service.
Apr 12 18:28:17.456320 systemd[1]: Started serial-getty@ttyAMA0.service.
Apr 12 18:28:17.457218 systemd[1]: Reached target getty.target.
Apr 12 18:28:17.457930 systemd[1]: Reached target multi-user.target.
Apr 12 18:28:17.459796 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Apr 12 18:28:17.465214 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Apr 12 18:28:17.465407 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Apr 12 18:28:17.466287 systemd[1]: Startup finished in 6.790s (kernel) + 4.655s (userspace) = 11.446s.
Apr 12 18:28:19.074165 systemd[1]: Created slice system-sshd.slice.
Apr 12 18:28:19.075320 systemd[1]: Started sshd@0-10.0.0.80:22-10.0.0.1:43740.service.
Apr 12 18:28:19.130847 sshd[1255]: Accepted publickey for core from 10.0.0.1 port 43740 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:28:19.132826 sshd[1255]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:28:19.140140 systemd[1]: Created slice user-500.slice.
Apr 12 18:28:19.141012 systemd[1]: Starting user-runtime-dir@500.service...
Apr 12 18:28:19.144122 systemd-logind[1163]: New session 1 of user core.
Apr 12 18:28:19.148426 systemd[1]: Finished user-runtime-dir@500.service.
Apr 12 18:28:19.149570 systemd[1]: Starting user@500.service...
Apr 12 18:28:19.152025 (systemd)[1260]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:28:19.206307 systemd[1260]: Queued start job for default target default.target.
Apr 12 18:28:19.206514 systemd[1260]: Reached target paths.target.
Apr 12 18:28:19.206529 systemd[1260]: Reached target sockets.target.
Apr 12 18:28:19.206539 systemd[1260]: Reached target timers.target.
Apr 12 18:28:19.206560 systemd[1260]: Reached target basic.target.
Apr 12 18:28:19.206599 systemd[1260]: Reached target default.target.
Apr 12 18:28:19.206621 systemd[1260]: Startup finished in 49ms.
Apr 12 18:28:19.206813 systemd[1]: Started user@500.service.
Apr 12 18:28:19.207679 systemd[1]: Started session-1.scope.
Apr 12 18:28:19.254970 systemd[1]: Started sshd@1-10.0.0.80:22-10.0.0.1:46914.service.
Apr 12 18:28:19.297029 sshd[1269]: Accepted publickey for core from 10.0.0.1 port 46914 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:28:19.298103 sshd[1269]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:28:19.301653 systemd-logind[1163]: New session 2 of user core.
Apr 12 18:28:19.301974 systemd[1]: Started session-2.scope.
Apr 12 18:28:19.357849 sshd[1269]: pam_unix(sshd:session): session closed for user core
Apr 12 18:28:19.359805 systemd[1]: Started sshd@2-10.0.0.80:22-10.0.0.1:46920.service.
Apr 12 18:28:19.360222 systemd[1]: sshd@1-10.0.0.80:22-10.0.0.1:46914.service: Deactivated successfully.
Apr 12 18:28:19.361070 systemd-logind[1163]: Session 2 logged out. Waiting for processes to exit.
Apr 12 18:28:19.361092 systemd[1]: session-2.scope: Deactivated successfully.
Apr 12 18:28:19.361938 systemd-logind[1163]: Removed session 2.
Apr 12 18:28:19.403034 sshd[1274]: Accepted publickey for core from 10.0.0.1 port 46920 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:28:19.404013 sshd[1274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:28:19.407510 systemd[1]: Started session-3.scope.
Apr 12 18:28:19.407774 systemd-logind[1163]: New session 3 of user core.
Apr 12 18:28:19.455445 sshd[1274]: pam_unix(sshd:session): session closed for user core
Apr 12 18:28:19.457506 systemd[1]: Started sshd@3-10.0.0.80:22-10.0.0.1:46926.service.
Apr 12 18:28:19.457951 systemd[1]: sshd@2-10.0.0.80:22-10.0.0.1:46920.service: Deactivated successfully.
Apr 12 18:28:19.458763 systemd[1]: session-3.scope: Deactivated successfully.
Apr 12 18:28:19.458780 systemd-logind[1163]: Session 3 logged out. Waiting for processes to exit.
Apr 12 18:28:19.459782 systemd-logind[1163]: Removed session 3.
Apr 12 18:28:19.499278 sshd[1281]: Accepted publickey for core from 10.0.0.1 port 46926 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:28:19.500615 sshd[1281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:28:19.503487 systemd-logind[1163]: New session 4 of user core.
Apr 12 18:28:19.504207 systemd[1]: Started session-4.scope.
Apr 12 18:28:19.554947 sshd[1281]: pam_unix(sshd:session): session closed for user core
Apr 12 18:28:19.556954 systemd[1]: Started sshd@4-10.0.0.80:22-10.0.0.1:46928.service.
Apr 12 18:28:19.557375 systemd[1]: sshd@3-10.0.0.80:22-10.0.0.1:46926.service: Deactivated successfully.
Apr 12 18:28:19.558474 systemd-logind[1163]: Session 4 logged out. Waiting for processes to exit.
Apr 12 18:28:19.558722 systemd[1]: session-4.scope: Deactivated successfully.
Apr 12 18:28:19.559649 systemd-logind[1163]: Removed session 4.
Apr 12 18:28:19.599807 sshd[1288]: Accepted publickey for core from 10.0.0.1 port 46928 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:28:19.600845 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:28:19.603891 systemd-logind[1163]: New session 5 of user core.
Apr 12 18:28:19.605761 systemd[1]: Started session-5.scope.
Apr 12 18:28:19.664096 sudo[1294]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 12 18:28:19.664287 sudo[1294]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Apr 12 18:28:20.403700 systemd[1]: Starting systemd-networkd-wait-online.service...
Apr 12 18:28:20.409289 systemd[1]: Finished systemd-networkd-wait-online.service.
Apr 12 18:28:20.409563 systemd[1]: Reached target network-online.target.
Apr 12 18:28:20.410750 systemd[1]: Starting docker.service...
Apr 12 18:28:20.490387 env[1313]: time="2024-04-12T18:28:20.490332906Z" level=info msg="Starting up"
Apr 12 18:28:20.491768 env[1313]: time="2024-04-12T18:28:20.491746884Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 12 18:28:20.491768 env[1313]: time="2024-04-12T18:28:20.491765358Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 12 18:28:20.491862 env[1313]: time="2024-04-12T18:28:20.491782381Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Apr 12 18:28:20.491862 env[1313]: time="2024-04-12T18:28:20.491792069Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 12 18:28:20.493879 env[1313]: time="2024-04-12T18:28:20.493854672Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 12 18:28:20.493879 env[1313]: time="2024-04-12T18:28:20.493877775Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 12 18:28:20.493974 env[1313]: time="2024-04-12T18:28:20.493893464Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Apr 12 18:28:20.493974 env[1313]: time="2024-04-12T18:28:20.493903269Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 12 18:28:20.497753 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport729609901-merged.mount: Deactivated successfully.
Apr 12 18:28:20.712087 env[1313]: time="2024-04-12T18:28:20.712052845Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Apr 12 18:28:20.712266 env[1313]: time="2024-04-12T18:28:20.712251547Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Apr 12 18:28:20.712472 env[1313]: time="2024-04-12T18:28:20.712456956Z" level=info msg="Loading containers: start."
Apr 12 18:28:20.816812 kernel: Initializing XFRM netlink socket
Apr 12 18:28:20.838568 env[1313]: time="2024-04-12T18:28:20.838532681Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 12 18:28:20.888184 systemd-networkd[1059]: docker0: Link UP
Apr 12 18:28:20.895182 env[1313]: time="2024-04-12T18:28:20.895153025Z" level=info msg="Loading containers: done."
Apr 12 18:28:20.917587 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2484143559-merged.mount: Deactivated successfully.
Apr 12 18:28:20.920970 env[1313]: time="2024-04-12T18:28:20.920927666Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 12 18:28:20.921095 env[1313]: time="2024-04-12T18:28:20.921077183Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Apr 12 18:28:20.921185 env[1313]: time="2024-04-12T18:28:20.921169866Z" level=info msg="Daemon has completed initialization"
Apr 12 18:28:20.932953 systemd[1]: Started docker.service.
Apr 12 18:28:20.941989 env[1313]: time="2024-04-12T18:28:20.941948360Z" level=info msg="API listen on /run/docker.sock"
Apr 12 18:28:20.957256 systemd[1]: Reloading.
Apr 12 18:28:20.994840 /usr/lib/systemd/system-generators/torcx-generator[1456]: time="2024-04-12T18:28:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]"
Apr 12 18:28:20.995718 /usr/lib/systemd/system-generators/torcx-generator[1456]: time="2024-04-12T18:28:20Z" level=info msg="torcx already run"
Apr 12 18:28:21.051248 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Apr 12 18:28:21.051268 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Apr 12 18:28:21.067657 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 12 18:28:21.119701 systemd[1]: Started kubelet.service.
Apr 12 18:28:21.255002 kubelet[1499]: E0412 18:28:21.254875 1499 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Apr 12 18:28:21.257274 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 12 18:28:21.257454 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 12 18:28:21.618353 env[1175]: time="2024-04-12T18:28:21.618248323Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.12\""
Apr 12 18:28:22.216369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3374161933.mount: Deactivated successfully.
Apr 12 18:28:23.670571 env[1175]: time="2024-04-12T18:28:23.670496437Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:23.672458 env[1175]: time="2024-04-12T18:28:23.672400561Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d4d4d261fc80c6c87ea30cb7d2b1a53b684be80fb7af5e16a2c97371e669f19f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:23.673363 env[1175]: time="2024-04-12T18:28:23.673331127Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:23.675535 env[1175]: time="2024-04-12T18:28:23.675502486Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cf0c29f585316888225cf254949988bdbedc7ba6238bc9a24bf6f0c508c42b6c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:23.676237 env[1175]: time="2024-04-12T18:28:23.676208652Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.12\" returns image reference \"sha256:d4d4d261fc80c6c87ea30cb7d2b1a53b684be80fb7af5e16a2c97371e669f19f\""
Apr 12 18:28:23.685411 env[1175]: time="2024-04-12T18:28:23.685377000Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.12\""
Apr 12 18:28:25.254699 env[1175]: time="2024-04-12T18:28:25.254625669Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:25.259405 env[1175]: time="2024-04-12T18:28:25.259350026Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a4a7509f59f7f027d7c434948b3b8e5463b835d28675c76c6d1ff21d2c4e8f18,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:25.261246 env[1175]: time="2024-04-12T18:28:25.261202140Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:25.263243 env[1175]: time="2024-04-12T18:28:25.263215193Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6caa3a4278e87169371d031861e49db21742bcbd8df650d7fe519a1a7f6764af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:25.264045 env[1175]: time="2024-04-12T18:28:25.264013674Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.12\" returns image reference \"sha256:a4a7509f59f7f027d7c434948b3b8e5463b835d28675c76c6d1ff21d2c4e8f18\""
Apr 12 18:28:25.273241 env[1175]: time="2024-04-12T18:28:25.273209613Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.12\""
Apr 12 18:28:26.360381 env[1175]: time="2024-04-12T18:28:26.360329903Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:26.361496 env[1175]: time="2024-04-12T18:28:26.361470350Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5de6108d9220f19bcc35bf81a2879e5ff2f6506c08af260c116b803579db675b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:26.363009 env[1175]: time="2024-04-12T18:28:26.362983041Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:26.365043 env[1175]: time="2024-04-12T18:28:26.365007192Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:b8bb7b17a4f915419575ceb885e128d0bb5ea8e67cb88dbde257988b770a4dce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:26.365884 env[1175]: time="2024-04-12T18:28:26.365856441Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.12\" returns image reference \"sha256:5de6108d9220f19bcc35bf81a2879e5ff2f6506c08af260c116b803579db675b\""
Apr 12 18:28:26.375176 env[1175]: time="2024-04-12T18:28:26.375147418Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.12\""
Apr 12 18:28:27.510913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3179037425.mount: Deactivated successfully.
Apr 12 18:28:27.885401 env[1175]: time="2024-04-12T18:28:27.885288465Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:27.886661 env[1175]: time="2024-04-12T18:28:27.886639793Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7daec180765068529c26cc4c7c989513bebbe614cbbc58beebe1db17ae177e06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:27.887909 env[1175]: time="2024-04-12T18:28:27.887876523Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:27.889162 env[1175]: time="2024-04-12T18:28:27.889127066Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:b0539f35b586abc54ca7660f9bb8a539d010b9e07d20e9e3d529cf0ca35d4ddf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:27.889614 env[1175]: time="2024-04-12T18:28:27.889591297Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.12\" returns image reference \"sha256:7daec180765068529c26cc4c7c989513bebbe614cbbc58beebe1db17ae177e06\""
Apr 12 18:28:27.898156 env[1175]: time="2024-04-12T18:28:27.898119159Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Apr 12 18:28:28.345297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2926999198.mount: Deactivated successfully.
Apr 12 18:28:28.348465 env[1175]: time="2024-04-12T18:28:28.348420819Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:28.349662 env[1175]: time="2024-04-12T18:28:28.349636924Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:28.351171 env[1175]: time="2024-04-12T18:28:28.351145463Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:28.352349 env[1175]: time="2024-04-12T18:28:28.352309319Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:28.353410 env[1175]: time="2024-04-12T18:28:28.353366930Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Apr 12 18:28:28.362039 env[1175]: time="2024-04-12T18:28:28.361986945Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\""
Apr 12 18:28:29.021885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2644303400.mount: Deactivated successfully.
Apr 12 18:28:30.996779 env[1175]: time="2024-04-12T18:28:30.996735094Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:30.998233 env[1175]: time="2024-04-12T18:28:30.998206453Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:31.000444 env[1175]: time="2024-04-12T18:28:31.000411164Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:31.001822 env[1175]: time="2024-04-12T18:28:31.001796203Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:31.002545 env[1175]: time="2024-04-12T18:28:31.002516008Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737\""
Apr 12 18:28:31.011670 env[1175]: time="2024-04-12T18:28:31.011642201Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Apr 12 18:28:31.449929 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 12 18:28:31.450087 systemd[1]: Stopped kubelet.service.
Apr 12 18:28:31.451728 systemd[1]: Started kubelet.service.
Apr 12 18:28:31.490894 kubelet[1561]: E0412 18:28:31.490846 1561 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Apr 12 18:28:31.493764 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 12 18:28:31.493897 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 12 18:28:31.606794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2692492732.mount: Deactivated successfully.
Apr 12 18:28:32.127294 env[1175]: time="2024-04-12T18:28:32.127252954Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:32.128587 env[1175]: time="2024-04-12T18:28:32.128545554Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:32.129800 env[1175]: time="2024-04-12T18:28:32.129773131Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:32.131041 env[1175]: time="2024-04-12T18:28:32.131013218Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:28:32.133909 env[1175]: time="2024-04-12T18:28:32.133872861Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\""
Apr 12 18:28:36.396081 systemd[1]: Stopped kubelet.service.
Apr 12 18:28:36.408369 systemd[1]: Reloading.
Apr 12 18:28:36.455867 /usr/lib/systemd/system-generators/torcx-generator[1668]: time="2024-04-12T18:28:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]"
Apr 12 18:28:36.455895 /usr/lib/systemd/system-generators/torcx-generator[1668]: time="2024-04-12T18:28:36Z" level=info msg="torcx already run"
Apr 12 18:28:36.513717 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Apr 12 18:28:36.513738 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Apr 12 18:28:36.530323 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 12 18:28:36.589585 systemd[1]: Started kubelet.service.
Apr 12 18:28:36.635329 kubelet[1713]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 12 18:28:36.635329 kubelet[1713]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 12 18:28:36.635329 kubelet[1713]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 12 18:28:36.635731 kubelet[1713]: I0412 18:28:36.635390 1713 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 12 18:28:37.985127 kubelet[1713]: I0412 18:28:37.985087 1713 server.go:415] "Kubelet version" kubeletVersion="v1.27.2"
Apr 12 18:28:37.985127 kubelet[1713]: I0412 18:28:37.985116 1713 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 12 18:28:37.985512 kubelet[1713]: I0412 18:28:37.985319 1713 server.go:837] "Client rotation is on, will bootstrap in background"
Apr 12 18:28:37.990201 kubelet[1713]: I0412 18:28:37.990177 1713 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 12 18:28:37.990267 kubelet[1713]: E0412 18:28:37.990245 1713 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.80:6443: connect: connection refused
Apr 12 18:28:37.991539 kubelet[1713]: W0412 18:28:37.991528 1713 machine.go:65] Cannot read vendor id correctly, set empty.
Apr 12 18:28:37.992191 kubelet[1713]: I0412 18:28:37.992170 1713 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 12 18:28:37.992476 kubelet[1713]: I0412 18:28:37.992465 1713 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 12 18:28:37.992539 kubelet[1713]: I0412 18:28:37.992529 1713 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Apr 12 18:28:37.992616 kubelet[1713]: I0412 18:28:37.992547 1713 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Apr 12 18:28:37.992616 kubelet[1713]: I0412 18:28:37.992557 1713 container_manager_linux.go:302] "Creating device plugin manager"
Apr 12 18:28:37.992668 kubelet[1713]: I0412 18:28:37.992634 1713 state_mem.go:36] "Initialized new in-memory state store"
Apr 12 18:28:37.995612 kubelet[1713]: I0412 18:28:37.995596 1713 kubelet.go:405] "Attempting to sync node with API server"
Apr 12 18:28:37.995661 kubelet[1713]: I0412 18:28:37.995617 1713 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 12 18:28:37.995661 kubelet[1713]: I0412 18:28:37.995640 1713 kubelet.go:309] "Adding apiserver pod source"
Apr 12 18:28:37.995661 kubelet[1713]: I0412 18:28:37.995652 1713 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 12 18:28:37.996255 kubelet[1713]: W0412 18:28:37.996222 1713 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
Apr 12 18:28:37.996255 kubelet[1713]: I0412 18:28:37.996250 1713 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Apr 12 18:28:37.996318 kubelet[1713]: E0412 18:28:37.996271 1713 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
Apr 12 18:28:37.996525 kubelet[1713]: W0412 18:28:37.996497 1713 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
Apr 12 18:28:37.996558 kubelet[1713]: E0412 18:28:37.996533 1713 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
Apr 12 18:28:37.996631
kubelet[1713]: W0412 18:28:37.996618 1713 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 12 18:28:37.999670 kubelet[1713]: I0412 18:28:37.999652 1713 server.go:1168] "Started kubelet" Apr 12 18:28:38.000633 kubelet[1713]: E0412 18:28:38.000521 1713 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Apr 12 18:28:38.000720 kubelet[1713]: E0412 18:28:38.000639 1713 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 18:28:38.001148 kubelet[1713]: I0412 18:28:38.001124 1713 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 18:28:38.001957 kubelet[1713]: I0412 18:28:38.001938 1713 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Apr 12 18:28:38.003232 kubelet[1713]: I0412 18:28:38.003141 1713 server.go:461] "Adding debug handlers to kubelet server" Apr 12 18:28:38.003537 kubelet[1713]: E0412 18:28:38.003429 1713 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17c59bc8549c0f92", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", 
Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.April, 12, 18, 28, 37, 999619986, time.Local), LastTimestamp:time.Date(2024, time.April, 12, 18, 28, 37, 999619986, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.80:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.80:6443: connect: connection refused'(may retry after sleeping) Apr 12 18:28:38.003984 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Apr 12 18:28:38.004099 kubelet[1713]: I0412 18:28:38.004076 1713 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 18:28:38.004181 kubelet[1713]: I0412 18:28:38.004165 1713 volume_manager.go:284] "Starting Kubelet Volume Manager" Apr 12 18:28:38.004348 kubelet[1713]: I0412 18:28:38.004335 1713 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Apr 12 18:28:38.004425 kubelet[1713]: E0412 18:28:38.004220 1713 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:28:38.004857 kubelet[1713]: W0412 18:28:38.004795 1713 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Apr 12 18:28:38.004978 kubelet[1713]: E0412 18:28:38.004965 1713 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Apr 12 18:28:38.005058 kubelet[1713]: E0412 18:28:38.005031 1713 controller.go:146] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="200ms" Apr 12 18:28:38.017744 kubelet[1713]: I0412 18:28:38.017508 1713 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Apr 12 18:28:38.018319 kubelet[1713]: I0412 18:28:38.018299 1713 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Apr 12 18:28:38.018364 kubelet[1713]: I0412 18:28:38.018323 1713 status_manager.go:207] "Starting to sync pod status with apiserver" Apr 12 18:28:38.018364 kubelet[1713]: I0412 18:28:38.018339 1713 kubelet.go:2257] "Starting kubelet main sync loop" Apr 12 18:28:38.018414 kubelet[1713]: E0412 18:28:38.018384 1713 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 18:28:38.023084 kubelet[1713]: W0412 18:28:38.023049 1713 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Apr 12 18:28:38.023084 kubelet[1713]: E0412 18:28:38.023088 1713 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Apr 12 18:28:38.033912 kubelet[1713]: I0412 18:28:38.033893 1713 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 18:28:38.033912 kubelet[1713]: I0412 18:28:38.033911 1713 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 18:28:38.034006 kubelet[1713]: I0412 18:28:38.033927 1713 state_mem.go:36] "Initialized new 
in-memory state store" Apr 12 18:28:38.035596 kubelet[1713]: I0412 18:28:38.035571 1713 policy_none.go:49] "None policy: Start" Apr 12 18:28:38.036144 kubelet[1713]: I0412 18:28:38.036131 1713 memory_manager.go:169] "Starting memorymanager" policy="None" Apr 12 18:28:38.036206 kubelet[1713]: I0412 18:28:38.036152 1713 state_mem.go:35] "Initializing new in-memory state store" Apr 12 18:28:38.040789 kubelet[1713]: I0412 18:28:38.040752 1713 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 18:28:38.042261 kubelet[1713]: E0412 18:28:38.042241 1713 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 12 18:28:38.042399 kubelet[1713]: I0412 18:28:38.042381 1713 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 18:28:38.105336 kubelet[1713]: I0412 18:28:38.105314 1713 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Apr 12 18:28:38.105771 kubelet[1713]: E0412 18:28:38.105745 1713 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Apr 12 18:28:38.118921 kubelet[1713]: I0412 18:28:38.118903 1713 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:28:38.119877 kubelet[1713]: I0412 18:28:38.119849 1713 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:28:38.120586 kubelet[1713]: I0412 18:28:38.120566 1713 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:28:38.205784 kubelet[1713]: E0412 18:28:38.205748 1713 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="400ms" Apr 12 18:28:38.305149 kubelet[1713]: I0412 
18:28:38.305059 1713 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:28:38.305149 kubelet[1713]: I0412 18:28:38.305111 1713 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:28:38.305246 kubelet[1713]: I0412 18:28:38.305177 1713 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db6bd33fd440faff498dc84758ac9399-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"db6bd33fd440faff498dc84758ac9399\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:28:38.305246 kubelet[1713]: I0412 18:28:38.305207 1713 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db6bd33fd440faff498dc84758ac9399-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"db6bd33fd440faff498dc84758ac9399\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:28:38.305246 kubelet[1713]: I0412 18:28:38.305235 1713 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db6bd33fd440faff498dc84758ac9399-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"db6bd33fd440faff498dc84758ac9399\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:28:38.305326 kubelet[1713]: I0412 18:28:38.305271 
1713 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:28:38.305326 kubelet[1713]: I0412 18:28:38.305293 1713 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:28:38.305379 kubelet[1713]: I0412 18:28:38.305323 1713 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:28:38.305379 kubelet[1713]: I0412 18:28:38.305355 1713 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f7d78630cba827a770c684e2dbe6ce6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2f7d78630cba827a770c684e2dbe6ce6\") " pod="kube-system/kube-scheduler-localhost" Apr 12 18:28:38.307027 kubelet[1713]: I0412 18:28:38.307009 1713 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Apr 12 18:28:38.307290 kubelet[1713]: E0412 18:28:38.307257 1713 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Apr 12 18:28:38.423064 kubelet[1713]: E0412 
18:28:38.423026 1713 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:28:38.423647 env[1175]: time="2024-04-12T18:28:38.423604566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b23ea803843027eb81926493bf073366,Namespace:kube-system,Attempt:0,}" Apr 12 18:28:38.425083 kubelet[1713]: E0412 18:28:38.425067 1713 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:28:38.425676 env[1175]: time="2024-04-12T18:28:38.425434706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2f7d78630cba827a770c684e2dbe6ce6,Namespace:kube-system,Attempt:0,}" Apr 12 18:28:38.428685 kubelet[1713]: E0412 18:28:38.428653 1713 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:28:38.428992 env[1175]: time="2024-04-12T18:28:38.428958066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:db6bd33fd440faff498dc84758ac9399,Namespace:kube-system,Attempt:0,}" Apr 12 18:28:38.606256 kubelet[1713]: E0412 18:28:38.606162 1713 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="800ms" Apr 12 18:28:38.708488 kubelet[1713]: I0412 18:28:38.708462 1713 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Apr 12 18:28:38.708781 kubelet[1713]: E0412 18:28:38.708762 1713 kubelet_node_status.go:92] "Unable to register node with API server" err="Post 
\"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Apr 12 18:28:38.873822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1592909016.mount: Deactivated successfully. Apr 12 18:28:38.878846 env[1175]: time="2024-04-12T18:28:38.878808376Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:28:38.880255 env[1175]: time="2024-04-12T18:28:38.880226999Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:28:38.881844 env[1175]: time="2024-04-12T18:28:38.881813887Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:28:38.882638 kubelet[1713]: W0412 18:28:38.882591 1713 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Apr 12 18:28:38.882638 kubelet[1713]: E0412 18:28:38.882647 1713 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Apr 12 18:28:38.883085 env[1175]: time="2024-04-12T18:28:38.883058417Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:28:38.884247 env[1175]: time="2024-04-12T18:28:38.884221250Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:28:38.885701 env[1175]: time="2024-04-12T18:28:38.885672936Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:28:38.887312 env[1175]: time="2024-04-12T18:28:38.887281026Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:28:38.888722 env[1175]: time="2024-04-12T18:28:38.888695737Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:28:38.891741 env[1175]: time="2024-04-12T18:28:38.891692224Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:28:38.893011 env[1175]: time="2024-04-12T18:28:38.892978880Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:28:38.893785 env[1175]: time="2024-04-12T18:28:38.893751041Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:28:38.895071 env[1175]: time="2024-04-12T18:28:38.895048358Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:28:38.915018 kubelet[1713]: W0412 18:28:38.911624 1713 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Apr 12 18:28:38.915018 kubelet[1713]: E0412 18:28:38.911677 1713 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Apr 12 18:28:38.922708 env[1175]: time="2024-04-12T18:28:38.922599715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:28:38.922708 env[1175]: time="2024-04-12T18:28:38.922633815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:28:38.922708 env[1175]: time="2024-04-12T18:28:38.922643997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:28:38.923357 env[1175]: time="2024-04-12T18:28:38.923008276Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ae76690214d6030de0caa774e103ad76386f8c90f03cbfe7a8b05b8cbb67d1b pid=1760 runtime=io.containerd.runc.v2 Apr 12 18:28:38.924197 env[1175]: time="2024-04-12T18:28:38.924106384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:28:38.924197 env[1175]: time="2024-04-12T18:28:38.924138847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:28:38.924197 env[1175]: time="2024-04-12T18:28:38.924148430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:28:38.924484 env[1175]: time="2024-04-12T18:28:38.924427978Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a9207a02e1b2f603c48290a84f2eba5063ad1005802de5f73a38161b5f2d593e pid=1775 runtime=io.containerd.runc.v2 Apr 12 18:28:38.926153 env[1175]: time="2024-04-12T18:28:38.926082427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:28:38.926153 env[1175]: time="2024-04-12T18:28:38.926116287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:28:38.926319 env[1175]: time="2024-04-12T18:28:38.926130183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:28:38.926700 env[1175]: time="2024-04-12T18:28:38.926649868Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b13560d47725a8b6ce42c2d55c220d448f8cdb5fe9cbab7c187fedd7a97e870 pid=1768 runtime=io.containerd.runc.v2 Apr 12 18:28:38.998212 env[1175]: time="2024-04-12T18:28:38.995582805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:db6bd33fd440faff498dc84758ac9399,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9207a02e1b2f603c48290a84f2eba5063ad1005802de5f73a38161b5f2d593e\"" Apr 12 18:28:38.998695 kubelet[1713]: E0412 18:28:38.996413 1713 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:28:38.999932 env[1175]: time="2024-04-12T18:28:38.999781058Z" level=info msg="CreateContainer within sandbox \"a9207a02e1b2f603c48290a84f2eba5063ad1005802de5f73a38161b5f2d593e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 12 18:28:39.000022 env[1175]: time="2024-04-12T18:28:39.000001270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2f7d78630cba827a770c684e2dbe6ce6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ae76690214d6030de0caa774e103ad76386f8c90f03cbfe7a8b05b8cbb67d1b\"" Apr 12 18:28:39.000513 kubelet[1713]: E0412 18:28:39.000489 1713 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:28:39.002876 env[1175]: time="2024-04-12T18:28:39.002765336Z" level=info msg="CreateContainer within sandbox \"5ae76690214d6030de0caa774e103ad76386f8c90f03cbfe7a8b05b8cbb67d1b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 12 18:28:39.010276 env[1175]: 
time="2024-04-12T18:28:39.010242066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b23ea803843027eb81926493bf073366,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b13560d47725a8b6ce42c2d55c220d448f8cdb5fe9cbab7c187fedd7a97e870\"" Apr 12 18:28:39.011083 kubelet[1713]: E0412 18:28:39.011055 1713 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:28:39.013059 env[1175]: time="2024-04-12T18:28:39.013022865Z" level=info msg="CreateContainer within sandbox \"5b13560d47725a8b6ce42c2d55c220d448f8cdb5fe9cbab7c187fedd7a97e870\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 12 18:28:39.013968 env[1175]: time="2024-04-12T18:28:39.013938775Z" level=info msg="CreateContainer within sandbox \"a9207a02e1b2f603c48290a84f2eba5063ad1005802de5f73a38161b5f2d593e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e49a83723fcb67f2eaded5ea9524a965f415071a80413389bfadb2e399a4f4a0\"" Apr 12 18:28:39.014573 env[1175]: time="2024-04-12T18:28:39.014548117Z" level=info msg="StartContainer for \"e49a83723fcb67f2eaded5ea9524a965f415071a80413389bfadb2e399a4f4a0\"" Apr 12 18:28:39.019566 env[1175]: time="2024-04-12T18:28:39.019517388Z" level=info msg="CreateContainer within sandbox \"5ae76690214d6030de0caa774e103ad76386f8c90f03cbfe7a8b05b8cbb67d1b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e2d56de7ad847b08542975a2f3a01cfabaf87f20070173b9582b5592edce756e\"" Apr 12 18:28:39.020087 env[1175]: time="2024-04-12T18:28:39.020034911Z" level=info msg="StartContainer for \"e2d56de7ad847b08542975a2f3a01cfabaf87f20070173b9582b5592edce756e\"" Apr 12 18:28:39.028277 env[1175]: time="2024-04-12T18:28:39.028239481Z" level=info msg="CreateContainer within sandbox 
\"5b13560d47725a8b6ce42c2d55c220d448f8cdb5fe9cbab7c187fedd7a97e870\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4c5ed134ce6235041744723878581c01731d965da722d9e4d34b6c7922efb023\"" Apr 12 18:28:39.028602 env[1175]: time="2024-04-12T18:28:39.028576562Z" level=info msg="StartContainer for \"4c5ed134ce6235041744723878581c01731d965da722d9e4d34b6c7922efb023\"" Apr 12 18:28:39.135714 env[1175]: time="2024-04-12T18:28:39.135623412Z" level=info msg="StartContainer for \"e49a83723fcb67f2eaded5ea9524a965f415071a80413389bfadb2e399a4f4a0\" returns successfully" Apr 12 18:28:39.153089 env[1175]: time="2024-04-12T18:28:39.152990437Z" level=info msg="StartContainer for \"4c5ed134ce6235041744723878581c01731d965da722d9e4d34b6c7922efb023\" returns successfully" Apr 12 18:28:39.153473 env[1175]: time="2024-04-12T18:28:39.153421014Z" level=info msg="StartContainer for \"e2d56de7ad847b08542975a2f3a01cfabaf87f20070173b9582b5592edce756e\" returns successfully" Apr 12 18:28:39.204296 kubelet[1713]: W0412 18:28:39.202251 1713 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Apr 12 18:28:39.204296 kubelet[1713]: E0412 18:28:39.202310 1713 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Apr 12 18:28:39.237031 kubelet[1713]: W0412 18:28:39.236954 1713 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Apr 12 18:28:39.237031 
kubelet[1713]: E0412 18:28:39.237021 1713 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
Apr 12 18:28:39.510460 kubelet[1713]: I0412 18:28:39.510422 1713 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Apr 12 18:28:40.036656 kubelet[1713]: E0412 18:28:40.036624 1713 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:40.037701 kubelet[1713]: E0412 18:28:40.037682 1713 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:40.039737 kubelet[1713]: E0412 18:28:40.039690 1713 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:41.041880 kubelet[1713]: E0412 18:28:41.041852 1713 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:41.042236 kubelet[1713]: E0412 18:28:41.041907 1713 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:41.042854 kubelet[1713]: E0412 18:28:41.042815 1713 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:41.337647 kubelet[1713]: E0412 18:28:41.337553 1713 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 12 18:28:41.391805 kubelet[1713]: I0412 18:28:41.391768 1713 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Apr 12 18:28:41.401024 kubelet[1713]: E0412 18:28:41.400997 1713 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 12 18:28:41.501364 kubelet[1713]: E0412 18:28:41.501323 1713 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 12 18:28:41.601880 kubelet[1713]: E0412 18:28:41.601792 1713 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 12 18:28:41.702425 kubelet[1713]: E0412 18:28:41.702389 1713 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 12 18:28:41.802833 kubelet[1713]: E0412 18:28:41.802799 1713 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 12 18:28:41.903345 kubelet[1713]: E0412 18:28:41.903261 1713 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 12 18:28:42.004095 kubelet[1713]: E0412 18:28:42.004059 1713 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 12 18:28:42.042989 kubelet[1713]: E0412 18:28:42.042948 1713 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:42.104801 kubelet[1713]: E0412 18:28:42.104769 1713 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 12 18:28:42.205167 kubelet[1713]: E0412 18:28:42.205133 1713 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 12 18:28:42.305546 kubelet[1713]: E0412 18:28:42.305507 1713 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 12 18:28:42.348801 kubelet[1713]: E0412 18:28:42.348771 1713 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:42.999326 kubelet[1713]: I0412 18:28:42.999285 1713 apiserver.go:52] "Watching apiserver"
Apr 12 18:28:43.004831 kubelet[1713]: I0412 18:28:43.004803 1713 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
Apr 12 18:28:43.032456 kubelet[1713]: I0412 18:28:43.032404 1713 reconciler.go:41] "Reconciler: start to sync state"
Apr 12 18:28:43.043895 kubelet[1713]: E0412 18:28:43.043869 1713 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:43.671845 systemd[1]: Reloading.
Apr 12 18:28:43.701541 /usr/lib/systemd/system-generators/torcx-generator[2006]: time="2024-04-12T18:28:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]"
Apr 12 18:28:43.701567 /usr/lib/systemd/system-generators/torcx-generator[2006]: time="2024-04-12T18:28:43Z" level=info msg="torcx already run"
Apr 12 18:28:43.780845 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Apr 12 18:28:43.780865 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Apr 12 18:28:43.797920 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 12 18:28:43.867417 kubelet[1713]: I0412 18:28:43.867382 1713 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 12 18:28:43.867408 systemd[1]: Stopping kubelet.service...
Apr 12 18:28:43.886956 systemd[1]: kubelet.service: Deactivated successfully.
Apr 12 18:28:43.887289 systemd[1]: Stopped kubelet.service.
Apr 12 18:28:43.889112 systemd[1]: Started kubelet.service.
Apr 12 18:28:43.938141 kubelet[2050]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 12 18:28:43.938141 kubelet[2050]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 12 18:28:43.938141 kubelet[2050]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 12 18:28:43.938709 kubelet[2050]: I0412 18:28:43.938115 2050 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 12 18:28:43.943730 kubelet[2050]: I0412 18:28:43.943702 2050 server.go:415] "Kubelet version" kubeletVersion="v1.27.2"
Apr 12 18:28:43.943730 kubelet[2050]: I0412 18:28:43.943725 2050 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 12 18:28:43.943938 kubelet[2050]: I0412 18:28:43.943910 2050 server.go:837] "Client rotation is on, will bootstrap in background"
Apr 12 18:28:43.945346 kubelet[2050]: I0412 18:28:43.945298 2050 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Apr 12 18:28:43.946526 kubelet[2050]: I0412 18:28:43.946508 2050 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 12 18:28:43.947965 kubelet[2050]: W0412 18:28:43.947946 2050 machine.go:65] Cannot read vendor id correctly, set empty.
Apr 12 18:28:43.948715 kubelet[2050]: I0412 18:28:43.948690 2050 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 12 18:28:43.949042 kubelet[2050]: I0412 18:28:43.949026 2050 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 12 18:28:43.949101 kubelet[2050]: I0412 18:28:43.949090 2050 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Apr 12 18:28:43.949169 kubelet[2050]: I0412 18:28:43.949109 2050 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Apr 12 18:28:43.949169 kubelet[2050]: I0412 18:28:43.949119 2050 container_manager_linux.go:302] "Creating device plugin manager"
Apr 12 18:28:43.949169 kubelet[2050]: I0412 18:28:43.949150 2050 state_mem.go:36] "Initialized new in-memory state store"
Apr 12 18:28:43.951164 kubelet[2050]: I0412 18:28:43.951146 2050 kubelet.go:405] "Attempting to sync node with API server"
Apr 12 18:28:43.951164 kubelet[2050]: I0412 18:28:43.951166 2050 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 12 18:28:43.951260 kubelet[2050]: I0412 18:28:43.951188 2050 kubelet.go:309] "Adding apiserver pod source"
Apr 12 18:28:43.951260 kubelet[2050]: I0412 18:28:43.951209 2050 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 12 18:28:43.965338 kubelet[2050]: I0412 18:28:43.952034 2050 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Apr 12 18:28:43.965338 kubelet[2050]: I0412 18:28:43.952392 2050 server.go:1168] "Started kubelet"
Apr 12 18:28:43.965338 kubelet[2050]: I0412 18:28:43.954084 2050 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 12 18:28:43.965338 kubelet[2050]: I0412 18:28:43.955133 2050 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Apr 12 18:28:43.965338 kubelet[2050]: I0412 18:28:43.955479 2050 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Apr 12 18:28:43.965338 kubelet[2050]: E0412 18:28:43.955769 2050 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 12 18:28:43.965338 kubelet[2050]: I0412 18:28:43.955811 2050 volume_manager.go:284] "Starting Kubelet Volume Manager"
Apr 12 18:28:43.965338 kubelet[2050]: I0412 18:28:43.955930 2050 desired_state_of_world_populator.go:145] "Desired state populator starts to run"
Apr 12 18:28:43.965338 kubelet[2050]: I0412 18:28:43.956034 2050 server.go:461] "Adding debug handlers to kubelet server"
Apr 12 18:28:43.966505 kubelet[2050]: E0412 18:28:43.966484 2050 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Apr 12 18:28:43.968593 kubelet[2050]: E0412 18:28:43.966523 2050 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 12 18:28:43.977379 kubelet[2050]: I0412 18:28:43.977364 2050 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Apr 12 18:28:43.978470 kubelet[2050]: I0412 18:28:43.978454 2050 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Apr 12 18:28:43.978562 kubelet[2050]: I0412 18:28:43.978551 2050 status_manager.go:207] "Starting to sync pod status with apiserver"
Apr 12 18:28:43.978619 kubelet[2050]: I0412 18:28:43.978610 2050 kubelet.go:2257] "Starting kubelet main sync loop"
Apr 12 18:28:43.978712 kubelet[2050]: E0412 18:28:43.978701 2050 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 12 18:28:44.033945 kubelet[2050]: I0412 18:28:44.033920 2050 cpu_manager.go:214] "Starting CPU manager" policy="none"
Apr 12 18:28:44.033945 kubelet[2050]: I0412 18:28:44.033941 2050 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Apr 12 18:28:44.034104 kubelet[2050]: I0412 18:28:44.033959 2050 state_mem.go:36] "Initialized new in-memory state store"
Apr 12 18:28:44.034129 kubelet[2050]: I0412 18:28:44.034111 2050 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 12 18:28:44.034129 kubelet[2050]: I0412 18:28:44.034125 2050 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Apr 12 18:28:44.034173 kubelet[2050]: I0412 18:28:44.034131 2050 policy_none.go:49] "None policy: Start"
Apr 12 18:28:44.034637 kubelet[2050]: I0412 18:28:44.034622 2050 memory_manager.go:169] "Starting memorymanager" policy="None"
Apr 12 18:28:44.034679 kubelet[2050]: I0412 18:28:44.034648 2050 state_mem.go:35] "Initializing new in-memory state store"
Apr 12 18:28:44.034814 kubelet[2050]: I0412 18:28:44.034799 2050 state_mem.go:75] "Updated machine memory state"
Apr 12 18:28:44.035876 kubelet[2050]: I0412 18:28:44.035852 2050 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 12 18:28:44.036088 kubelet[2050]: I0412 18:28:44.036068 2050 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 12 18:28:44.057282 sudo[2080]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 12 18:28:44.057810 sudo[2080]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Apr 12 18:28:44.059461 kubelet[2050]: I0412 18:28:44.059354 2050 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Apr 12 18:28:44.067251 kubelet[2050]: I0412 18:28:44.067221 2050 kubelet_node_status.go:108] "Node was previously registered" node="localhost"
Apr 12 18:28:44.067374 kubelet[2050]: I0412 18:28:44.067361 2050 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Apr 12 18:28:44.079632 kubelet[2050]: I0412 18:28:44.079602 2050 topology_manager.go:212] "Topology Admit Handler"
Apr 12 18:28:44.079725 kubelet[2050]: I0412 18:28:44.079673 2050 topology_manager.go:212] "Topology Admit Handler"
Apr 12 18:28:44.079725 kubelet[2050]: I0412 18:28:44.079710 2050 topology_manager.go:212] "Topology Admit Handler"
Apr 12 18:28:44.087058 kubelet[2050]: E0412 18:28:44.087025 2050 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 12 18:28:44.257890 kubelet[2050]: I0412 18:28:44.257834 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db6bd33fd440faff498dc84758ac9399-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"db6bd33fd440faff498dc84758ac9399\") " pod="kube-system/kube-apiserver-localhost"
Apr 12 18:28:44.257890 kubelet[2050]: I0412 18:28:44.257892 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db6bd33fd440faff498dc84758ac9399-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"db6bd33fd440faff498dc84758ac9399\") " pod="kube-system/kube-apiserver-localhost"
Apr 12 18:28:44.258037 kubelet[2050]: I0412 18:28:44.257916 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost"
Apr 12 18:28:44.258037 kubelet[2050]: I0412 18:28:44.257938 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost"
Apr 12 18:28:44.258037 kubelet[2050]: I0412 18:28:44.257962 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost"
Apr 12 18:28:44.258037 kubelet[2050]: I0412 18:28:44.257983 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost"
Apr 12 18:28:44.258037 kubelet[2050]: I0412 18:28:44.258001 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db6bd33fd440faff498dc84758ac9399-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"db6bd33fd440faff498dc84758ac9399\") " pod="kube-system/kube-apiserver-localhost"
Apr 12 18:28:44.258161 kubelet[2050]: I0412 18:28:44.258022 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost"
Apr 12 18:28:44.258161 kubelet[2050]: I0412 18:28:44.258040 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f7d78630cba827a770c684e2dbe6ce6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2f7d78630cba827a770c684e2dbe6ce6\") " pod="kube-system/kube-scheduler-localhost"
Apr 12 18:28:44.384187 kubelet[2050]: E0412 18:28:44.384150 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:44.387457 kubelet[2050]: E0412 18:28:44.387416 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:44.387539 kubelet[2050]: E0412 18:28:44.387489 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:44.503032 sudo[2080]: pam_unix(sudo:session): session closed for user root
Apr 12 18:28:44.952374 kubelet[2050]: I0412 18:28:44.952340 2050 apiserver.go:52] "Watching apiserver"
Apr 12 18:28:44.956257 kubelet[2050]: I0412 18:28:44.956233 2050 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
Apr 12 18:28:44.963357 kubelet[2050]: I0412 18:28:44.963336 2050 reconciler.go:41] "Reconciler: start to sync state"
Apr 12 18:28:44.998067 kubelet[2050]: E0412 18:28:44.998045 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:44.999394 kubelet[2050]: E0412 18:28:44.999012 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:44.999516 kubelet[2050]: E0412 18:28:44.999497 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:45.018741 kubelet[2050]: I0412 18:28:45.018040 2050 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.018008155 podCreationTimestamp="2024-04-12 18:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:28:45.017813929 +0000 UTC m=+1.123923082" watchObservedRunningTime="2024-04-12 18:28:45.018008155 +0000 UTC m=+1.124117308"
Apr 12 18:28:45.031273 kubelet[2050]: I0412 18:28:45.031208 2050 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.031159836 podCreationTimestamp="2024-04-12 18:28:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:28:45.024385952 +0000 UTC m=+1.130495105" watchObservedRunningTime="2024-04-12 18:28:45.031159836 +0000 UTC m=+1.137268989"
Apr 12 18:28:45.031372 kubelet[2050]: I0412 18:28:45.031286 2050 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.031271919 podCreationTimestamp="2024-04-12 18:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:28:45.031149843 +0000 UTC m=+1.137258996" watchObservedRunningTime="2024-04-12 18:28:45.031271919 +0000 UTC m=+1.137381072"
Apr 12 18:28:46.009538 kubelet[2050]: E0412 18:28:46.009511 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:46.017300 sudo[1294]: pam_unix(sudo:session): session closed for user root
Apr 12 18:28:46.018958 sshd[1288]: pam_unix(sshd:session): session closed for user core
Apr 12 18:28:46.021612 systemd[1]: sshd@4-10.0.0.80:22-10.0.0.1:46928.service: Deactivated successfully.
Apr 12 18:28:46.022795 systemd-logind[1163]: Session 5 logged out. Waiting for processes to exit.
Apr 12 18:28:46.022840 systemd[1]: session-5.scope: Deactivated successfully.
Apr 12 18:28:46.023854 systemd-logind[1163]: Removed session 5.
Apr 12 18:28:49.953219 kubelet[2050]: E0412 18:28:49.953189 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:50.015541 kubelet[2050]: E0412 18:28:50.014429 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:50.146714 kubelet[2050]: E0412 18:28:50.146689 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:50.634091 kubelet[2050]: E0412 18:28:50.634056 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:51.015616 kubelet[2050]: E0412 18:28:51.015589 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:51.016534 kubelet[2050]: E0412 18:28:51.015681 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:52.016839 kubelet[2050]: E0412 18:28:52.016802 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:57.624111 kubelet[2050]: I0412 18:28:57.624081 2050 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 12 18:28:57.624772 env[1175]: time="2024-04-12T18:28:57.624682678Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 12 18:28:57.624989 kubelet[2050]: I0412 18:28:57.624878 2050 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 12 18:28:57.794147 kubelet[2050]: I0412 18:28:57.794109 2050 topology_manager.go:212] "Topology Admit Handler"
Apr 12 18:28:57.806520 kubelet[2050]: I0412 18:28:57.806486 2050 topology_manager.go:212] "Topology Admit Handler"
Apr 12 18:28:57.850347 kubelet[2050]: I0412 18:28:57.850317 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-etc-cni-netd\") pod \"cilium-85628\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") " pod="kube-system/cilium-85628"
Apr 12 18:28:57.850483 kubelet[2050]: I0412 18:28:57.850357 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4856726b-2910-4c15-805b-cd99088c5eb3-clustermesh-secrets\") pod \"cilium-85628\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") " pod="kube-system/cilium-85628"
Apr 12 18:28:57.850483 kubelet[2050]: I0412 18:28:57.850377 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4856726b-2910-4c15-805b-cd99088c5eb3-cilium-config-path\") pod \"cilium-85628\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") " pod="kube-system/cilium-85628"
Apr 12 18:28:57.850483 kubelet[2050]: I0412 18:28:57.850399 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-host-proc-sys-net\") pod \"cilium-85628\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") " pod="kube-system/cilium-85628"
Apr 12 18:28:57.850483 kubelet[2050]: I0412 18:28:57.850421 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szq2x\" (UniqueName: \"kubernetes.io/projected/646cdf3f-df18-4065-88c4-4af55677a160-kube-api-access-szq2x\") pod \"kube-proxy-fz866\" (UID: \"646cdf3f-df18-4065-88c4-4af55677a160\") " pod="kube-system/kube-proxy-fz866"
Apr 12 18:28:57.850483 kubelet[2050]: I0412 18:28:57.850454 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4856726b-2910-4c15-805b-cd99088c5eb3-hubble-tls\") pod \"cilium-85628\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") " pod="kube-system/cilium-85628"
Apr 12 18:28:57.850615 kubelet[2050]: I0412 18:28:57.850474 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-cni-path\") pod \"cilium-85628\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") " pod="kube-system/cilium-85628"
Apr 12 18:28:57.850615 kubelet[2050]: I0412 18:28:57.850492 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/646cdf3f-df18-4065-88c4-4af55677a160-kube-proxy\") pod \"kube-proxy-fz866\" (UID: \"646cdf3f-df18-4065-88c4-4af55677a160\") " pod="kube-system/kube-proxy-fz866"
Apr 12 18:28:57.850615 kubelet[2050]: I0412 18:28:57.850511 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/646cdf3f-df18-4065-88c4-4af55677a160-lib-modules\") pod \"kube-proxy-fz866\" (UID: \"646cdf3f-df18-4065-88c4-4af55677a160\") " pod="kube-system/kube-proxy-fz866"
Apr 12 18:28:57.850615 kubelet[2050]: I0412 18:28:57.850528 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-lib-modules\") pod \"cilium-85628\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") " pod="kube-system/cilium-85628"
Apr 12 18:28:57.850615 kubelet[2050]: I0412 18:28:57.850548 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-xtables-lock\") pod \"cilium-85628\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") " pod="kube-system/cilium-85628"
Apr 12 18:28:57.850615 kubelet[2050]: I0412 18:28:57.850577 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/646cdf3f-df18-4065-88c4-4af55677a160-xtables-lock\") pod \"kube-proxy-fz866\" (UID: \"646cdf3f-df18-4065-88c4-4af55677a160\") " pod="kube-system/kube-proxy-fz866"
Apr 12 18:28:57.850744 kubelet[2050]: I0412 18:28:57.850600 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-host-proc-sys-kernel\") pod \"cilium-85628\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") " pod="kube-system/cilium-85628"
Apr 12 18:28:57.850744 kubelet[2050]: I0412 18:28:57.850619 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-hostproc\") pod \"cilium-85628\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") " pod="kube-system/cilium-85628"
Apr 12 18:28:57.850744 kubelet[2050]: I0412 18:28:57.850638 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-cilium-run\") pod \"cilium-85628\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") " pod="kube-system/cilium-85628"
Apr 12 18:28:57.850744 kubelet[2050]: I0412 18:28:57.850656 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-bpf-maps\") pod \"cilium-85628\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") " pod="kube-system/cilium-85628"
Apr 12 18:28:57.850744 kubelet[2050]: I0412 18:28:57.850674 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-cilium-cgroup\") pod \"cilium-85628\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") " pod="kube-system/cilium-85628"
Apr 12 18:28:57.850744 kubelet[2050]: I0412 18:28:57.850692 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2czmv\" (UniqueName: \"kubernetes.io/projected/4856726b-2910-4c15-805b-cd99088c5eb3-kube-api-access-2czmv\") pod \"cilium-85628\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") " pod="kube-system/cilium-85628"
Apr 12 18:28:57.962984 kubelet[2050]: E0412 18:28:57.962947 2050 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Apr 12 18:28:57.962984 kubelet[2050]: E0412 18:28:57.962982 2050 projected.go:198] Error preparing data for projected volume kube-api-access-szq2x for pod kube-system/kube-proxy-fz866: configmap "kube-root-ca.crt" not found
Apr 12 18:28:57.963143 kubelet[2050]: E0412 18:28:57.963029 2050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/646cdf3f-df18-4065-88c4-4af55677a160-kube-api-access-szq2x podName:646cdf3f-df18-4065-88c4-4af55677a160 nodeName:}" failed. No retries permitted until 2024-04-12 18:28:58.463011458 +0000 UTC m=+14.569120611 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-szq2x" (UniqueName: "kubernetes.io/projected/646cdf3f-df18-4065-88c4-4af55677a160-kube-api-access-szq2x") pod "kube-proxy-fz866" (UID: "646cdf3f-df18-4065-88c4-4af55677a160") : configmap "kube-root-ca.crt" not found
Apr 12 18:28:57.963239 kubelet[2050]: E0412 18:28:57.963221 2050 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Apr 12 18:28:57.963277 kubelet[2050]: E0412 18:28:57.963245 2050 projected.go:198] Error preparing data for projected volume kube-api-access-2czmv for pod kube-system/cilium-85628: configmap "kube-root-ca.crt" not found
Apr 12 18:28:57.963277 kubelet[2050]: E0412 18:28:57.963273 2050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4856726b-2910-4c15-805b-cd99088c5eb3-kube-api-access-2czmv podName:4856726b-2910-4c15-805b-cd99088c5eb3 nodeName:}" failed. No retries permitted until 2024-04-12 18:28:58.46326374 +0000 UTC m=+14.569372893 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2czmv" (UniqueName: "kubernetes.io/projected/4856726b-2910-4c15-805b-cd99088c5eb3-kube-api-access-2czmv") pod "cilium-85628" (UID: "4856726b-2910-4c15-805b-cd99088c5eb3") : configmap "kube-root-ca.crt" not found
Apr 12 18:28:58.697989 kubelet[2050]: E0412 18:28:58.697948 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:58.699466 env[1175]: time="2024-04-12T18:28:58.699415866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fz866,Uid:646cdf3f-df18-4065-88c4-4af55677a160,Namespace:kube-system,Attempt:0,}"
Apr 12 18:28:58.709824 kubelet[2050]: I0412 18:28:58.708913 2050 topology_manager.go:212] "Topology Admit Handler"
Apr 12 18:28:58.712154 kubelet[2050]: E0412 18:28:58.711357 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:28:58.712224 env[1175]: time="2024-04-12T18:28:58.711852170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-85628,Uid:4856726b-2910-4c15-805b-cd99088c5eb3,Namespace:kube-system,Attempt:0,}"
Apr 12 18:28:58.723058 env[1175]: time="2024-04-12T18:28:58.719939438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:28:58.723058 env[1175]: time="2024-04-12T18:28:58.720301761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:28:58.723058 env[1175]: time="2024-04-12T18:28:58.720319921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:28:58.723058 env[1175]: time="2024-04-12T18:28:58.722739341Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5428ea21d156bb8f8c00d28ea2b6ca09af78f97f88ec7855c7863613355a41c2 pid=2144 runtime=io.containerd.runc.v2
Apr 12 18:28:58.733585 env[1175]: time="2024-04-12T18:28:58.733467311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:28:58.733585 env[1175]: time="2024-04-12T18:28:58.733570192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:28:58.733723 env[1175]: time="2024-04-12T18:28:58.733581672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:28:58.734150 env[1175]: time="2024-04-12T18:28:58.734096156Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d74cd9c53c44d9028a58e1b2f47799d374b2225bc2944921d66229a58f09be2d pid=2169 runtime=io.containerd.runc.v2
Apr 12 18:28:58.760706 kubelet[2050]: I0412 18:28:58.760652 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d897070-a39c-4820-b2d1-e9cd1976d75c-cilium-config-path\") pod \"cilium-operator-574c4bb98d-9dpbm\" (UID: \"6d897070-a39c-4820-b2d1-e9cd1976d75c\") " pod="kube-system/cilium-operator-574c4bb98d-9dpbm"
Apr 12 18:28:58.760706 kubelet[2050]: I0412 18:28:58.760703 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz8tx\" (UniqueName: \"kubernetes.io/projected/6d897070-a39c-4820-b2d1-e9cd1976d75c-kube-api-access-vz8tx\") pod \"cilium-operator-574c4bb98d-9dpbm\" (UID:
\"6d897070-a39c-4820-b2d1-e9cd1976d75c\") " pod="kube-system/cilium-operator-574c4bb98d-9dpbm" Apr 12 18:28:58.782310 env[1175]: time="2024-04-12T18:28:58.782268878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-85628,Uid:4856726b-2910-4c15-805b-cd99088c5eb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"d74cd9c53c44d9028a58e1b2f47799d374b2225bc2944921d66229a58f09be2d\"" Apr 12 18:28:58.783118 kubelet[2050]: E0412 18:28:58.783092 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:28:58.783232 env[1175]: time="2024-04-12T18:28:58.783136886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fz866,Uid:646cdf3f-df18-4065-88c4-4af55677a160,Namespace:kube-system,Attempt:0,} returns sandbox id \"5428ea21d156bb8f8c00d28ea2b6ca09af78f97f88ec7855c7863613355a41c2\"" Apr 12 18:28:58.785118 kubelet[2050]: E0412 18:28:58.784984 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:28:58.786290 env[1175]: time="2024-04-12T18:28:58.786255992Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 12 18:28:58.787394 env[1175]: time="2024-04-12T18:28:58.787141519Z" level=info msg="CreateContainer within sandbox \"5428ea21d156bb8f8c00d28ea2b6ca09af78f97f88ec7855c7863613355a41c2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 12 18:28:58.798755 env[1175]: time="2024-04-12T18:28:58.798712576Z" level=info msg="CreateContainer within sandbox \"5428ea21d156bb8f8c00d28ea2b6ca09af78f97f88ec7855c7863613355a41c2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"03ebeb0cc78f71d4ae5ebccec96268e0f2f114e90eeb27daa6c227d7026698c8\"" Apr 12 18:28:58.799384 
env[1175]: time="2024-04-12T18:28:58.799354261Z" level=info msg="StartContainer for \"03ebeb0cc78f71d4ae5ebccec96268e0f2f114e90eeb27daa6c227d7026698c8\"" Apr 12 18:28:58.860968 env[1175]: time="2024-04-12T18:28:58.860541052Z" level=info msg="StartContainer for \"03ebeb0cc78f71d4ae5ebccec96268e0f2f114e90eeb27daa6c227d7026698c8\" returns successfully" Apr 12 18:28:59.011741 kubelet[2050]: E0412 18:28:59.011697 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:28:59.012150 env[1175]: time="2024-04-12T18:28:59.012092274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-9dpbm,Uid:6d897070-a39c-4820-b2d1-e9cd1976d75c,Namespace:kube-system,Attempt:0,}" Apr 12 18:28:59.025109 env[1175]: time="2024-04-12T18:28:59.025038136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:28:59.025220 env[1175]: time="2024-04-12T18:28:59.025088057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:28:59.025220 env[1175]: time="2024-04-12T18:28:59.025099497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:28:59.025286 env[1175]: time="2024-04-12T18:28:59.025237578Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3e0bd6e3f8d1bfa2ee1ebbda8d8877117b6f47d94cd143f9a3924895f5450468 pid=2291 runtime=io.containerd.runc.v2 Apr 12 18:28:59.030076 kubelet[2050]: E0412 18:28:59.030034 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:28:59.041974 kubelet[2050]: I0412 18:28:59.041923 2050 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fz866" podStartSLOduration=2.04188751 podCreationTimestamp="2024-04-12 18:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:28:59.041044623 +0000 UTC m=+15.147153776" watchObservedRunningTime="2024-04-12 18:28:59.04188751 +0000 UTC m=+15.147996663" Apr 12 18:28:59.084598 env[1175]: time="2024-04-12T18:28:59.084549048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-9dpbm,Uid:6d897070-a39c-4820-b2d1-e9cd1976d75c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e0bd6e3f8d1bfa2ee1ebbda8d8877117b6f47d94cd143f9a3924895f5450468\"" Apr 12 18:28:59.085774 kubelet[2050]: E0412 18:28:59.085270 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:29:01.030812 update_engine[1167]: I0412 18:29:01.030766 1167 update_attempter.cc:509] Updating boot flags... Apr 12 18:29:02.191813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3052182947.mount: Deactivated successfully. 
Apr 12 18:29:04.486316 env[1175]: time="2024-04-12T18:29:04.486262248Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:29:04.487566 env[1175]: time="2024-04-12T18:29:04.487533176Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:29:04.488884 env[1175]: time="2024-04-12T18:29:04.488856224Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:29:04.489684 env[1175]: time="2024-04-12T18:29:04.489656629Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 12 18:29:04.490314 env[1175]: time="2024-04-12T18:29:04.490277953Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 12 18:29:04.493302 env[1175]: time="2024-04-12T18:29:04.493262251Z" level=info msg="CreateContainer within sandbox \"d74cd9c53c44d9028a58e1b2f47799d374b2225bc2944921d66229a58f09be2d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:29:04.502221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount751859158.mount: Deactivated successfully. 
Apr 12 18:29:04.504813 env[1175]: time="2024-04-12T18:29:04.504769362Z" level=info msg="CreateContainer within sandbox \"d74cd9c53c44d9028a58e1b2f47799d374b2225bc2944921d66229a58f09be2d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"09d99803593f9fc8b394f3d543ae694e01ef5654cc0359541a15374f343f0de6\"" Apr 12 18:29:04.505163 env[1175]: time="2024-04-12T18:29:04.505136764Z" level=info msg="StartContainer for \"09d99803593f9fc8b394f3d543ae694e01ef5654cc0359541a15374f343f0de6\"" Apr 12 18:29:04.560592 env[1175]: time="2024-04-12T18:29:04.560553186Z" level=info msg="StartContainer for \"09d99803593f9fc8b394f3d543ae694e01ef5654cc0359541a15374f343f0de6\" returns successfully" Apr 12 18:29:04.688068 env[1175]: time="2024-04-12T18:29:04.688025411Z" level=info msg="shim disconnected" id=09d99803593f9fc8b394f3d543ae694e01ef5654cc0359541a15374f343f0de6 Apr 12 18:29:04.688068 env[1175]: time="2024-04-12T18:29:04.688067051Z" level=warning msg="cleaning up after shim disconnected" id=09d99803593f9fc8b394f3d543ae694e01ef5654cc0359541a15374f343f0de6 namespace=k8s.io Apr 12 18:29:04.688281 env[1175]: time="2024-04-12T18:29:04.688078091Z" level=info msg="cleaning up dead shim" Apr 12 18:29:04.697576 env[1175]: time="2024-04-12T18:29:04.697534710Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:29:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2481 runtime=io.containerd.runc.v2\n" Apr 12 18:29:05.046277 kubelet[2050]: E0412 18:29:05.046247 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:29:05.053706 env[1175]: time="2024-04-12T18:29:05.053661890Z" level=info msg="CreateContainer within sandbox \"d74cd9c53c44d9028a58e1b2f47799d374b2225bc2944921d66229a58f09be2d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 12 18:29:05.095126 env[1175]: 
time="2024-04-12T18:29:05.095069493Z" level=info msg="CreateContainer within sandbox \"d74cd9c53c44d9028a58e1b2f47799d374b2225bc2944921d66229a58f09be2d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8824d7f30913105de686205069306da7bfefd15e5645e4aa0eb70c98fe57ff27\"" Apr 12 18:29:05.095789 env[1175]: time="2024-04-12T18:29:05.095763097Z" level=info msg="StartContainer for \"8824d7f30913105de686205069306da7bfefd15e5645e4aa0eb70c98fe57ff27\"" Apr 12 18:29:05.148312 env[1175]: time="2024-04-12T18:29:05.148272446Z" level=info msg="StartContainer for \"8824d7f30913105de686205069306da7bfefd15e5645e4aa0eb70c98fe57ff27\" returns successfully" Apr 12 18:29:05.158076 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 12 18:29:05.158372 systemd[1]: Stopped systemd-sysctl.service. Apr 12 18:29:05.158857 systemd[1]: Stopping systemd-sysctl.service... Apr 12 18:29:05.160370 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:29:05.169001 systemd[1]: Finished systemd-sysctl.service. Apr 12 18:29:05.179655 env[1175]: time="2024-04-12T18:29:05.179611990Z" level=info msg="shim disconnected" id=8824d7f30913105de686205069306da7bfefd15e5645e4aa0eb70c98fe57ff27 Apr 12 18:29:05.179655 env[1175]: time="2024-04-12T18:29:05.179654950Z" level=warning msg="cleaning up after shim disconnected" id=8824d7f30913105de686205069306da7bfefd15e5645e4aa0eb70c98fe57ff27 namespace=k8s.io Apr 12 18:29:05.179821 env[1175]: time="2024-04-12T18:29:05.179664430Z" level=info msg="cleaning up dead shim" Apr 12 18:29:05.185449 env[1175]: time="2024-04-12T18:29:05.185411064Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:29:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2546 runtime=io.containerd.runc.v2\n" Apr 12 18:29:05.500269 systemd[1]: run-containerd-runc-k8s.io-09d99803593f9fc8b394f3d543ae694e01ef5654cc0359541a15374f343f0de6-runc.9Jg6Js.mount: Deactivated successfully. 
Apr 12 18:29:05.500419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09d99803593f9fc8b394f3d543ae694e01ef5654cc0359541a15374f343f0de6-rootfs.mount: Deactivated successfully. Apr 12 18:29:05.911825 env[1175]: time="2024-04-12T18:29:05.911526887Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:29:05.913012 env[1175]: time="2024-04-12T18:29:05.912986975Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:29:05.914631 env[1175]: time="2024-04-12T18:29:05.914601785Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:29:05.915008 env[1175]: time="2024-04-12T18:29:05.914979947Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 12 18:29:05.918289 env[1175]: time="2024-04-12T18:29:05.918242006Z" level=info msg="CreateContainer within sandbox \"3e0bd6e3f8d1bfa2ee1ebbda8d8877117b6f47d94cd143f9a3924895f5450468\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 12 18:29:05.925995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1466381090.mount: Deactivated successfully. 
Apr 12 18:29:05.929242 env[1175]: time="2024-04-12T18:29:05.929203310Z" level=info msg="CreateContainer within sandbox \"3e0bd6e3f8d1bfa2ee1ebbda8d8877117b6f47d94cd143f9a3924895f5450468\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"35acd081d7dfdb8026d6d37c37331be35cc7ec5bc716986f46cdb5a288a323a4\"" Apr 12 18:29:05.929673 env[1175]: time="2024-04-12T18:29:05.929640833Z" level=info msg="StartContainer for \"35acd081d7dfdb8026d6d37c37331be35cc7ec5bc716986f46cdb5a288a323a4\"" Apr 12 18:29:05.992106 env[1175]: time="2024-04-12T18:29:05.992056279Z" level=info msg="StartContainer for \"35acd081d7dfdb8026d6d37c37331be35cc7ec5bc716986f46cdb5a288a323a4\" returns successfully" Apr 12 18:29:06.050206 kubelet[2050]: E0412 18:29:06.049737 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:29:06.060658 env[1175]: time="2024-04-12T18:29:06.060608866Z" level=info msg="CreateContainer within sandbox \"d74cd9c53c44d9028a58e1b2f47799d374b2225bc2944921d66229a58f09be2d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 12 18:29:06.061923 kubelet[2050]: E0412 18:29:06.061869 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:29:06.072329 kubelet[2050]: I0412 18:29:06.072293 2050 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-9dpbm" podStartSLOduration=1.2431194030000001 podCreationTimestamp="2024-04-12 18:28:58 +0000 UTC" firstStartedPulling="2024-04-12 18:28:59.0860089 +0000 UTC m=+15.192118013" lastFinishedPulling="2024-04-12 18:29:05.915147548 +0000 UTC m=+22.021256661" observedRunningTime="2024-04-12 18:29:06.07205673 +0000 UTC m=+22.178165883" watchObservedRunningTime="2024-04-12 18:29:06.072258051 
+0000 UTC m=+22.178367204" Apr 12 18:29:06.080664 env[1175]: time="2024-04-12T18:29:06.080472297Z" level=info msg="CreateContainer within sandbox \"d74cd9c53c44d9028a58e1b2f47799d374b2225bc2944921d66229a58f09be2d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c33b1c75a1456e4fce748f143e5beee37813bc76a934c61623b4cc57c0c15049\"" Apr 12 18:29:06.081816 env[1175]: time="2024-04-12T18:29:06.081522423Z" level=info msg="StartContainer for \"c33b1c75a1456e4fce748f143e5beee37813bc76a934c61623b4cc57c0c15049\"" Apr 12 18:29:06.165479 env[1175]: time="2024-04-12T18:29:06.165366732Z" level=info msg="StartContainer for \"c33b1c75a1456e4fce748f143e5beee37813bc76a934c61623b4cc57c0c15049\" returns successfully" Apr 12 18:29:06.217855 env[1175]: time="2024-04-12T18:29:06.217805986Z" level=info msg="shim disconnected" id=c33b1c75a1456e4fce748f143e5beee37813bc76a934c61623b4cc57c0c15049 Apr 12 18:29:06.217855 env[1175]: time="2024-04-12T18:29:06.217853946Z" level=warning msg="cleaning up after shim disconnected" id=c33b1c75a1456e4fce748f143e5beee37813bc76a934c61623b4cc57c0c15049 namespace=k8s.io Apr 12 18:29:06.217855 env[1175]: time="2024-04-12T18:29:06.217863306Z" level=info msg="cleaning up dead shim" Apr 12 18:29:06.227058 env[1175]: time="2024-04-12T18:29:06.226999238Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:29:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2640 runtime=io.containerd.runc.v2\n" Apr 12 18:29:07.087372 kubelet[2050]: E0412 18:29:07.084368 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:29:07.087372 kubelet[2050]: E0412 18:29:07.084496 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:29:07.094128 env[1175]: 
time="2024-04-12T18:29:07.094080509Z" level=info msg="CreateContainer within sandbox \"d74cd9c53c44d9028a58e1b2f47799d374b2225bc2944921d66229a58f09be2d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 12 18:29:07.108529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount215641034.mount: Deactivated successfully. Apr 12 18:29:07.112914 env[1175]: time="2024-04-12T18:29:07.112853009Z" level=info msg="CreateContainer within sandbox \"d74cd9c53c44d9028a58e1b2f47799d374b2225bc2944921d66229a58f09be2d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"534a1bff3fdfa11c049376ac713867746b662e80b61ff4534206789926b02434\"" Apr 12 18:29:07.113455 env[1175]: time="2024-04-12T18:29:07.113419972Z" level=info msg="StartContainer for \"534a1bff3fdfa11c049376ac713867746b662e80b61ff4534206789926b02434\"" Apr 12 18:29:07.174953 env[1175]: time="2024-04-12T18:29:07.174910941Z" level=info msg="StartContainer for \"534a1bff3fdfa11c049376ac713867746b662e80b61ff4534206789926b02434\" returns successfully" Apr 12 18:29:07.190126 env[1175]: time="2024-04-12T18:29:07.190074022Z" level=info msg="shim disconnected" id=534a1bff3fdfa11c049376ac713867746b662e80b61ff4534206789926b02434 Apr 12 18:29:07.190413 env[1175]: time="2024-04-12T18:29:07.190394064Z" level=warning msg="cleaning up after shim disconnected" id=534a1bff3fdfa11c049376ac713867746b662e80b61ff4534206789926b02434 namespace=k8s.io Apr 12 18:29:07.190502 env[1175]: time="2024-04-12T18:29:07.190487584Z" level=info msg="cleaning up dead shim" Apr 12 18:29:07.197530 env[1175]: time="2024-04-12T18:29:07.197495422Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:29:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2696 runtime=io.containerd.runc.v2\n" Apr 12 18:29:08.088867 kubelet[2050]: E0412 18:29:08.088711 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:29:08.092168 env[1175]: time="2024-04-12T18:29:08.092127662Z" level=info msg="CreateContainer within sandbox \"d74cd9c53c44d9028a58e1b2f47799d374b2225bc2944921d66229a58f09be2d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 12 18:29:08.110291 env[1175]: time="2024-04-12T18:29:08.110232275Z" level=info msg="CreateContainer within sandbox \"d74cd9c53c44d9028a58e1b2f47799d374b2225bc2944921d66229a58f09be2d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c433f83be4b2f18fab892e636eb62776be25abfa0fd07b7e908072bbfa8a5d2f\"" Apr 12 18:29:08.111127 env[1175]: time="2024-04-12T18:29:08.111074559Z" level=info msg="StartContainer for \"c433f83be4b2f18fab892e636eb62776be25abfa0fd07b7e908072bbfa8a5d2f\"" Apr 12 18:29:08.171777 env[1175]: time="2024-04-12T18:29:08.171734589Z" level=info msg="StartContainer for \"c433f83be4b2f18fab892e636eb62776be25abfa0fd07b7e908072bbfa8a5d2f\" returns successfully" Apr 12 18:29:08.358063 kubelet[2050]: I0412 18:29:08.357971 2050 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Apr 12 18:29:08.379561 kubelet[2050]: I0412 18:29:08.379529 2050 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:29:08.380478 kubelet[2050]: I0412 18:29:08.380452 2050 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:29:08.401462 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Apr 12 18:29:08.432895 kubelet[2050]: I0412 18:29:08.432866 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48wgs\" (UniqueName: \"kubernetes.io/projected/c6226dbc-c634-49b0-b572-40b828bcf957-kube-api-access-48wgs\") pod \"coredns-5d78c9869d-ksnmn\" (UID: \"c6226dbc-c634-49b0-b572-40b828bcf957\") " pod="kube-system/coredns-5d78c9869d-ksnmn" Apr 12 18:29:08.433047 kubelet[2050]: I0412 18:29:08.433034 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzx2h\" (UniqueName: \"kubernetes.io/projected/294d52ad-4320-4b41-a694-51db45c66a78-kube-api-access-gzx2h\") pod \"coredns-5d78c9869d-6zz6m\" (UID: \"294d52ad-4320-4b41-a694-51db45c66a78\") " pod="kube-system/coredns-5d78c9869d-6zz6m" Apr 12 18:29:08.433177 kubelet[2050]: I0412 18:29:08.433165 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6226dbc-c634-49b0-b572-40b828bcf957-config-volume\") pod \"coredns-5d78c9869d-ksnmn\" (UID: \"c6226dbc-c634-49b0-b572-40b828bcf957\") " pod="kube-system/coredns-5d78c9869d-ksnmn" Apr 12 18:29:08.433282 kubelet[2050]: I0412 18:29:08.433270 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/294d52ad-4320-4b41-a694-51db45c66a78-config-volume\") pod \"coredns-5d78c9869d-6zz6m\" (UID: \"294d52ad-4320-4b41-a694-51db45c66a78\") " pod="kube-system/coredns-5d78c9869d-6zz6m" Apr 12 18:29:08.632471 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Apr 12 18:29:08.683068 kubelet[2050]: E0412 18:29:08.683013 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:29:08.683775 env[1175]: time="2024-04-12T18:29:08.683738723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-ksnmn,Uid:c6226dbc-c634-49b0-b572-40b828bcf957,Namespace:kube-system,Attempt:0,}" Apr 12 18:29:08.686132 kubelet[2050]: E0412 18:29:08.686109 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:29:08.686638 env[1175]: time="2024-04-12T18:29:08.686607378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-6zz6m,Uid:294d52ad-4320-4b41-a694-51db45c66a78,Namespace:kube-system,Attempt:0,}" Apr 12 18:29:09.093348 kubelet[2050]: E0412 18:29:09.093304 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:29:09.110492 kubelet[2050]: I0412 18:29:09.110013 2050 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-85628" podStartSLOduration=6.40553035 podCreationTimestamp="2024-04-12 18:28:57 +0000 UTC" firstStartedPulling="2024-04-12 18:28:58.785622786 +0000 UTC m=+14.891731939" lastFinishedPulling="2024-04-12 18:29:04.490064791 +0000 UTC m=+20.596173984" observedRunningTime="2024-04-12 18:29:09.109558913 +0000 UTC m=+25.215668026" watchObservedRunningTime="2024-04-12 18:29:09.109972395 +0000 UTC m=+25.216081548" Apr 12 18:29:10.094679 kubelet[2050]: E0412 18:29:10.094644 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:29:10.242484 
kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Apr 12 18:29:10.242574 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Apr 12 18:29:10.240141 systemd-networkd[1059]: cilium_host: Link UP Apr 12 18:29:10.240699 systemd-networkd[1059]: cilium_net: Link UP Apr 12 18:29:10.242431 systemd-networkd[1059]: cilium_net: Gained carrier Apr 12 18:29:10.242612 systemd-networkd[1059]: cilium_host: Gained carrier Apr 12 18:29:10.323750 systemd-networkd[1059]: cilium_vxlan: Link UP Apr 12 18:29:10.323756 systemd-networkd[1059]: cilium_vxlan: Gained carrier Apr 12 18:29:10.616482 kernel: NET: Registered PF_ALG protocol family Apr 12 18:29:10.882740 systemd-networkd[1059]: cilium_net: Gained IPv6LL Apr 12 18:29:11.096232 kubelet[2050]: E0412 18:29:11.096196 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:29:11.139156 systemd-networkd[1059]: cilium_host: Gained IPv6LL Apr 12 18:29:11.169682 systemd-networkd[1059]: lxc_health: Link UP Apr 12 18:29:11.183620 systemd-networkd[1059]: lxc_health: Gained carrier Apr 12 18:29:11.184468 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Apr 12 18:29:11.287557 systemd-networkd[1059]: lxc3d3cb5d8cd3b: Link UP Apr 12 18:29:11.295473 kernel: eth0: renamed from tmpee2c1 Apr 12 18:29:11.307757 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Apr 12 18:29:11.307850 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3d3cb5d8cd3b: link becomes ready Apr 12 18:29:11.307889 systemd-networkd[1059]: lxc3d3cb5d8cd3b: Gained carrier Apr 12 18:29:11.308097 systemd-networkd[1059]: lxc45d756c307aa: Link UP Apr 12 18:29:11.317469 kernel: eth0: renamed from tmp325ca Apr 12 18:29:11.325490 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Apr 12 18:29:11.325573 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc45d756c307aa: link becomes ready Apr 12 
18:29:11.325403 systemd-networkd[1059]: lxc45d756c307aa: Gained carrier
Apr 12 18:29:11.714569 systemd-networkd[1059]: cilium_vxlan: Gained IPv6LL
Apr 12 18:29:12.716077 kubelet[2050]: E0412 18:29:12.716031 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:29:12.995587 systemd-networkd[1059]: lxc3d3cb5d8cd3b: Gained IPv6LL
Apr 12 18:29:13.058551 systemd-networkd[1059]: lxc_health: Gained IPv6LL
Apr 12 18:29:13.186548 systemd-networkd[1059]: lxc45d756c307aa: Gained IPv6LL
Apr 12 18:29:14.908592 env[1175]: time="2024-04-12T18:29:14.908515255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:29:14.908592 env[1175]: time="2024-04-12T18:29:14.908552536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:29:14.909026 env[1175]: time="2024-04-12T18:29:14.908562936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:29:14.909325 env[1175]: time="2024-04-12T18:29:14.909289178Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/325ca9565c2d6d241a889cea9128b9fe229cc99dfceeed4393e96494d89a8c48 pid=3256 runtime=io.containerd.runc.v2
Apr 12 18:29:14.912957 env[1175]: time="2024-04-12T18:29:14.912522911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:29:14.912957 env[1175]: time="2024-04-12T18:29:14.912586831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:29:14.912957 env[1175]: time="2024-04-12T18:29:14.912613072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:29:14.912957 env[1175]: time="2024-04-12T18:29:14.912757192Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee2c18e99cbd6cd06af0fafeba924687c849cc87b4d1407f2295fa2fc8560d32 pid=3276 runtime=io.containerd.runc.v2
Apr 12 18:29:14.969247 systemd-resolved[1112]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 12 18:29:14.969632 systemd-resolved[1112]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 12 18:29:14.987588 env[1175]: time="2024-04-12T18:29:14.987543368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-ksnmn,Uid:c6226dbc-c634-49b0-b572-40b828bcf957,Namespace:kube-system,Attempt:0,} returns sandbox id \"325ca9565c2d6d241a889cea9128b9fe229cc99dfceeed4393e96494d89a8c48\""
Apr 12 18:29:14.988747 kubelet[2050]: E0412 18:29:14.988201 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:29:14.990811 env[1175]: time="2024-04-12T18:29:14.990775621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-6zz6m,Uid:294d52ad-4320-4b41-a694-51db45c66a78,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee2c18e99cbd6cd06af0fafeba924687c849cc87b4d1407f2295fa2fc8560d32\""
Apr 12 18:29:14.991464 kubelet[2050]: E0412 18:29:14.991312 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:29:14.996138 env[1175]: time="2024-04-12T18:29:14.996092922Z" level=info msg="CreateContainer within sandbox \"325ca9565c2d6d241a889cea9128b9fe229cc99dfceeed4393e96494d89a8c48\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 12 18:29:14.996754 env[1175]: time="2024-04-12T18:29:14.996720284Z" level=info msg="CreateContainer within sandbox \"ee2c18e99cbd6cd06af0fafeba924687c849cc87b4d1407f2295fa2fc8560d32\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 12 18:29:15.009269 env[1175]: time="2024-04-12T18:29:15.009213293Z" level=info msg="CreateContainer within sandbox \"325ca9565c2d6d241a889cea9128b9fe229cc99dfceeed4393e96494d89a8c48\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"95727a78606dbce1b7d49168848f66e85b8a9eb1db481f87bedfe34299ce4687\""
Apr 12 18:29:15.009847 env[1175]: time="2024-04-12T18:29:15.009803695Z" level=info msg="StartContainer for \"95727a78606dbce1b7d49168848f66e85b8a9eb1db481f87bedfe34299ce4687\""
Apr 12 18:29:15.012257 env[1175]: time="2024-04-12T18:29:15.012210824Z" level=info msg="CreateContainer within sandbox \"ee2c18e99cbd6cd06af0fafeba924687c849cc87b4d1407f2295fa2fc8560d32\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1c52fd9c8a895788b9c54ef9eba02156f0bb14479620ad82f6006b0e64691d47\""
Apr 12 18:29:15.013750 env[1175]: time="2024-04-12T18:29:15.013719310Z" level=info msg="StartContainer for \"1c52fd9c8a895788b9c54ef9eba02156f0bb14479620ad82f6006b0e64691d47\""
Apr 12 18:29:15.080201 env[1175]: time="2024-04-12T18:29:15.074943903Z" level=info msg="StartContainer for \"95727a78606dbce1b7d49168848f66e85b8a9eb1db481f87bedfe34299ce4687\" returns successfully"
Apr 12 18:29:15.080201 env[1175]: time="2024-04-12T18:29:15.076037307Z" level=info msg="StartContainer for \"1c52fd9c8a895788b9c54ef9eba02156f0bb14479620ad82f6006b0e64691d47\" returns successfully"
Apr 12 18:29:15.106865 kubelet[2050]: E0412 18:29:15.105458 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:29:15.109136 kubelet[2050]: E0412 18:29:15.109010 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:29:15.137980 kubelet[2050]: I0412 18:29:15.137945 2050 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-ksnmn" podStartSLOduration=17.137908782 podCreationTimestamp="2024-04-12 18:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:29:15.13466869 +0000 UTC m=+31.240777803" watchObservedRunningTime="2024-04-12 18:29:15.137908782 +0000 UTC m=+31.244017935"
Apr 12 18:29:15.138130 kubelet[2050]: I0412 18:29:15.138026 2050 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-6zz6m" podStartSLOduration=17.138012423 podCreationTimestamp="2024-04-12 18:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:29:15.120799877 +0000 UTC m=+31.226909070" watchObservedRunningTime="2024-04-12 18:29:15.138012423 +0000 UTC m=+31.244121616"
Apr 12 18:29:15.529065 systemd[1]: Started sshd@5-10.0.0.80:22-10.0.0.1:51600.service.
Apr 12 18:29:15.573999 sshd[3411]: Accepted publickey for core from 10.0.0.1 port 51600 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:29:15.575555 sshd[3411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:29:15.578938 systemd-logind[1163]: New session 6 of user core.
Apr 12 18:29:15.579787 systemd[1]: Started session-6.scope.
Apr 12 18:29:15.725317 sshd[3411]: pam_unix(sshd:session): session closed for user core
Apr 12 18:29:15.727916 systemd[1]: sshd@5-10.0.0.80:22-10.0.0.1:51600.service: Deactivated successfully.
Apr 12 18:29:15.728969 systemd-logind[1163]: Session 6 logged out. Waiting for processes to exit.
Apr 12 18:29:15.729038 systemd[1]: session-6.scope: Deactivated successfully.
Apr 12 18:29:15.729851 systemd-logind[1163]: Removed session 6.
Apr 12 18:29:16.111237 kubelet[2050]: E0412 18:29:16.111197 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:29:16.111811 kubelet[2050]: E0412 18:29:16.111796 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:29:16.481242 kubelet[2050]: I0412 18:29:16.481206 2050 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Apr 12 18:29:16.482016 kubelet[2050]: E0412 18:29:16.481990 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:29:17.113488 kubelet[2050]: E0412 18:29:17.113430 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:29:17.113898 kubelet[2050]: E0412 18:29:17.113874 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:29:17.114393 kubelet[2050]: E0412 18:29:17.114373 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:29:20.728761 systemd[1]: Started sshd@6-10.0.0.80:22-10.0.0.1:48422.service.
Apr 12 18:29:20.772217 sshd[3431]: Accepted publickey for core from 10.0.0.1 port 48422 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:29:20.773291 sshd[3431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:29:20.776470 systemd-logind[1163]: New session 7 of user core.
Apr 12 18:29:20.777292 systemd[1]: Started session-7.scope.
Apr 12 18:29:20.889062 sshd[3431]: pam_unix(sshd:session): session closed for user core
Apr 12 18:29:20.891490 systemd[1]: sshd@6-10.0.0.80:22-10.0.0.1:48422.service: Deactivated successfully.
Apr 12 18:29:20.892494 systemd-logind[1163]: Session 7 logged out. Waiting for processes to exit.
Apr 12 18:29:20.892577 systemd[1]: session-7.scope: Deactivated successfully.
Apr 12 18:29:20.893304 systemd-logind[1163]: Removed session 7.
Apr 12 18:29:25.892456 systemd[1]: Started sshd@7-10.0.0.80:22-10.0.0.1:48424.service.
Apr 12 18:29:25.937356 sshd[3446]: Accepted publickey for core from 10.0.0.1 port 48424 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:29:25.938469 sshd[3446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:29:25.942392 systemd[1]: Started session-8.scope.
Apr 12 18:29:25.942635 systemd-logind[1163]: New session 8 of user core.
Apr 12 18:29:26.049089 sshd[3446]: pam_unix(sshd:session): session closed for user core
Apr 12 18:29:26.051326 systemd[1]: sshd@7-10.0.0.80:22-10.0.0.1:48424.service: Deactivated successfully.
Apr 12 18:29:26.052273 systemd-logind[1163]: Session 8 logged out. Waiting for processes to exit.
Apr 12 18:29:26.052331 systemd[1]: session-8.scope: Deactivated successfully.
Apr 12 18:29:26.053145 systemd-logind[1163]: Removed session 8.
Apr 12 18:29:31.052355 systemd[1]: Started sshd@8-10.0.0.80:22-10.0.0.1:50370.service.
Apr 12 18:29:31.098762 sshd[3465]: Accepted publickey for core from 10.0.0.1 port 50370 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:29:31.099867 sshd[3465]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:29:31.104498 systemd[1]: Started session-9.scope.
Apr 12 18:29:31.104852 systemd-logind[1163]: New session 9 of user core.
Apr 12 18:29:31.224554 sshd[3465]: pam_unix(sshd:session): session closed for user core
Apr 12 18:29:31.226875 systemd[1]: Started sshd@9-10.0.0.80:22-10.0.0.1:50378.service.
Apr 12 18:29:31.231830 systemd[1]: sshd@8-10.0.0.80:22-10.0.0.1:50370.service: Deactivated successfully.
Apr 12 18:29:31.235497 systemd-logind[1163]: Session 9 logged out. Waiting for processes to exit.
Apr 12 18:29:31.235617 systemd[1]: session-9.scope: Deactivated successfully.
Apr 12 18:29:31.237184 systemd-logind[1163]: Removed session 9.
Apr 12 18:29:31.275039 sshd[3478]: Accepted publickey for core from 10.0.0.1 port 50378 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:29:31.276521 sshd[3478]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:29:31.281235 systemd[1]: Started session-10.scope.
Apr 12 18:29:31.281615 systemd-logind[1163]: New session 10 of user core.
Apr 12 18:29:32.005285 sshd[3478]: pam_unix(sshd:session): session closed for user core
Apr 12 18:29:32.011921 systemd[1]: Started sshd@10-10.0.0.80:22-10.0.0.1:50394.service.
Apr 12 18:29:32.012389 systemd[1]: sshd@9-10.0.0.80:22-10.0.0.1:50378.service: Deactivated successfully.
Apr 12 18:29:32.015389 systemd-logind[1163]: Session 10 logged out. Waiting for processes to exit.
Apr 12 18:29:32.015412 systemd[1]: session-10.scope: Deactivated successfully.
Apr 12 18:29:32.020745 systemd-logind[1163]: Removed session 10.
Apr 12 18:29:32.063516 sshd[3493]: Accepted publickey for core from 10.0.0.1 port 50394 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:29:32.065061 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:29:32.069131 systemd-logind[1163]: New session 11 of user core.
Apr 12 18:29:32.069385 systemd[1]: Started session-11.scope.
Apr 12 18:29:32.187271 sshd[3493]: pam_unix(sshd:session): session closed for user core
Apr 12 18:29:32.189713 systemd-logind[1163]: Session 11 logged out. Waiting for processes to exit.
Apr 12 18:29:32.189943 systemd[1]: sshd@10-10.0.0.80:22-10.0.0.1:50394.service: Deactivated successfully.
Apr 12 18:29:32.190769 systemd[1]: session-11.scope: Deactivated successfully.
Apr 12 18:29:32.191222 systemd-logind[1163]: Removed session 11.
Apr 12 18:29:37.190722 systemd[1]: Started sshd@11-10.0.0.80:22-10.0.0.1:50410.service.
Apr 12 18:29:37.234619 sshd[3508]: Accepted publickey for core from 10.0.0.1 port 50410 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:29:37.235878 sshd[3508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:29:37.239459 systemd-logind[1163]: New session 12 of user core.
Apr 12 18:29:37.239882 systemd[1]: Started session-12.scope.
Apr 12 18:29:37.349601 sshd[3508]: pam_unix(sshd:session): session closed for user core
Apr 12 18:29:37.351247 systemd[1]: Started sshd@12-10.0.0.80:22-10.0.0.1:50414.service.
Apr 12 18:29:37.352341 systemd[1]: sshd@11-10.0.0.80:22-10.0.0.1:50410.service: Deactivated successfully.
Apr 12 18:29:37.353454 systemd-logind[1163]: Session 12 logged out. Waiting for processes to exit.
Apr 12 18:29:37.353541 systemd[1]: session-12.scope: Deactivated successfully.
Apr 12 18:29:37.354153 systemd-logind[1163]: Removed session 12.
Apr 12 18:29:37.396964 sshd[3521]: Accepted publickey for core from 10.0.0.1 port 50414 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:29:37.398039 sshd[3521]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:29:37.401029 systemd-logind[1163]: New session 13 of user core.
Apr 12 18:29:37.401843 systemd[1]: Started session-13.scope.
Apr 12 18:29:37.592808 sshd[3521]: pam_unix(sshd:session): session closed for user core
Apr 12 18:29:37.595084 systemd[1]: Started sshd@13-10.0.0.80:22-10.0.0.1:50416.service.
Apr 12 18:29:37.596566 systemd[1]: sshd@12-10.0.0.80:22-10.0.0.1:50414.service: Deactivated successfully.
Apr 12 18:29:37.597558 systemd-logind[1163]: Session 13 logged out. Waiting for processes to exit.
Apr 12 18:29:37.597615 systemd[1]: session-13.scope: Deactivated successfully.
Apr 12 18:29:37.598362 systemd-logind[1163]: Removed session 13.
Apr 12 18:29:37.639259 sshd[3533]: Accepted publickey for core from 10.0.0.1 port 50416 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:29:37.640926 sshd[3533]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:29:37.645051 systemd-logind[1163]: New session 14 of user core.
Apr 12 18:29:37.645930 systemd[1]: Started session-14.scope.
Apr 12 18:29:38.465269 sshd[3533]: pam_unix(sshd:session): session closed for user core
Apr 12 18:29:38.468717 systemd[1]: Started sshd@14-10.0.0.80:22-10.0.0.1:50432.service.
Apr 12 18:29:38.471782 systemd[1]: sshd@13-10.0.0.80:22-10.0.0.1:50416.service: Deactivated successfully.
Apr 12 18:29:38.473108 systemd[1]: session-14.scope: Deactivated successfully.
Apr 12 18:29:38.473511 systemd-logind[1163]: Session 14 logged out. Waiting for processes to exit.
Apr 12 18:29:38.474689 systemd-logind[1163]: Removed session 14.
Apr 12 18:29:38.527951 sshd[3553]: Accepted publickey for core from 10.0.0.1 port 50432 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:29:38.529108 sshd[3553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:29:38.533414 systemd[1]: Started session-15.scope.
Apr 12 18:29:38.533635 systemd-logind[1163]: New session 15 of user core.
Apr 12 18:29:38.855653 systemd[1]: Started sshd@15-10.0.0.80:22-10.0.0.1:50442.service.
Apr 12 18:29:38.858864 sshd[3553]: pam_unix(sshd:session): session closed for user core
Apr 12 18:29:38.861798 systemd-logind[1163]: Session 15 logged out. Waiting for processes to exit.
Apr 12 18:29:38.863038 systemd[1]: sshd@14-10.0.0.80:22-10.0.0.1:50432.service: Deactivated successfully.
Apr 12 18:29:38.863870 systemd[1]: session-15.scope: Deactivated successfully.
Apr 12 18:29:38.865013 systemd-logind[1163]: Removed session 15.
Apr 12 18:29:38.906677 sshd[3565]: Accepted publickey for core from 10.0.0.1 port 50442 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:29:38.907928 sshd[3565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:29:38.912501 systemd-logind[1163]: New session 16 of user core.
Apr 12 18:29:38.912594 systemd[1]: Started session-16.scope.
Apr 12 18:29:39.033583 sshd[3565]: pam_unix(sshd:session): session closed for user core
Apr 12 18:29:39.036083 systemd[1]: sshd@15-10.0.0.80:22-10.0.0.1:50442.service: Deactivated successfully.
Apr 12 18:29:39.037206 systemd-logind[1163]: Session 16 logged out. Waiting for processes to exit.
Apr 12 18:29:39.037275 systemd[1]: session-16.scope: Deactivated successfully.
Apr 12 18:29:39.038058 systemd-logind[1163]: Removed session 16.
Apr 12 18:29:44.037137 systemd[1]: Started sshd@16-10.0.0.80:22-10.0.0.1:57834.service.
Apr 12 18:29:44.080940 sshd[3583]: Accepted publickey for core from 10.0.0.1 port 57834 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:29:44.082569 sshd[3583]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:29:44.089186 systemd-logind[1163]: New session 17 of user core.
Apr 12 18:29:44.089458 systemd[1]: Started session-17.scope.
Apr 12 18:29:44.197988 sshd[3583]: pam_unix(sshd:session): session closed for user core
Apr 12 18:29:44.200455 systemd-logind[1163]: Session 17 logged out. Waiting for processes to exit.
Apr 12 18:29:44.200669 systemd[1]: sshd@16-10.0.0.80:22-10.0.0.1:57834.service: Deactivated successfully.
Apr 12 18:29:44.201527 systemd[1]: session-17.scope: Deactivated successfully.
Apr 12 18:29:44.201922 systemd-logind[1163]: Removed session 17.
Apr 12 18:29:49.201033 systemd[1]: Started sshd@17-10.0.0.80:22-10.0.0.1:45396.service.
Apr 12 18:29:49.245265 sshd[3600]: Accepted publickey for core from 10.0.0.1 port 45396 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:29:49.246801 sshd[3600]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:29:49.250471 systemd-logind[1163]: New session 18 of user core.
Apr 12 18:29:49.250924 systemd[1]: Started session-18.scope.
Apr 12 18:29:49.358463 sshd[3600]: pam_unix(sshd:session): session closed for user core
Apr 12 18:29:49.360918 systemd-logind[1163]: Session 18 logged out. Waiting for processes to exit.
Apr 12 18:29:49.361135 systemd[1]: sshd@17-10.0.0.80:22-10.0.0.1:45396.service: Deactivated successfully.
Apr 12 18:29:49.361975 systemd[1]: session-18.scope: Deactivated successfully.
Apr 12 18:29:49.362405 systemd-logind[1163]: Removed session 18.
Apr 12 18:29:54.361825 systemd[1]: Started sshd@18-10.0.0.80:22-10.0.0.1:45402.service.
Apr 12 18:29:54.408696 sshd[3614]: Accepted publickey for core from 10.0.0.1 port 45402 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:29:54.409796 sshd[3614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:29:54.414661 systemd[1]: Started session-19.scope.
Apr 12 18:29:54.414991 systemd-logind[1163]: New session 19 of user core.
Apr 12 18:29:54.530132 sshd[3614]: pam_unix(sshd:session): session closed for user core
Apr 12 18:29:54.532490 systemd[1]: sshd@18-10.0.0.80:22-10.0.0.1:45402.service: Deactivated successfully.
Apr 12 18:29:54.533300 systemd[1]: session-19.scope: Deactivated successfully.
Apr 12 18:29:54.535645 systemd-logind[1163]: Session 19 logged out. Waiting for processes to exit.
Apr 12 18:29:54.536500 systemd-logind[1163]: Removed session 19.
Apr 12 18:29:59.534074 systemd[1]: Started sshd@19-10.0.0.80:22-10.0.0.1:60046.service.
Apr 12 18:29:59.580879 sshd[3630]: Accepted publickey for core from 10.0.0.1 port 60046 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:29:59.582106 sshd[3630]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:29:59.586967 systemd[1]: Started session-20.scope.
Apr 12 18:29:59.587293 systemd-logind[1163]: New session 20 of user core.
Apr 12 18:29:59.721122 sshd[3630]: pam_unix(sshd:session): session closed for user core
Apr 12 18:29:59.723596 systemd[1]: Started sshd@20-10.0.0.80:22-10.0.0.1:60060.service.
Apr 12 18:29:59.725751 systemd[1]: sshd@19-10.0.0.80:22-10.0.0.1:60046.service: Deactivated successfully.
Apr 12 18:29:59.726888 systemd[1]: session-20.scope: Deactivated successfully.
Apr 12 18:29:59.727266 systemd-logind[1163]: Session 20 logged out. Waiting for processes to exit.
Apr 12 18:29:59.727973 systemd-logind[1163]: Removed session 20.
Apr 12 18:29:59.766724 sshd[3642]: Accepted publickey for core from 10.0.0.1 port 60060 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:29:59.768328 sshd[3642]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:29:59.772503 systemd-logind[1163]: New session 21 of user core.
Apr 12 18:29:59.772976 systemd[1]: Started session-21.scope.
Apr 12 18:30:01.183567 env[1175]: time="2024-04-12T18:30:01.183151259Z" level=info msg="StopContainer for \"35acd081d7dfdb8026d6d37c37331be35cc7ec5bc716986f46cdb5a288a323a4\" with timeout 30 (s)"
Apr 12 18:30:01.183567 env[1175]: time="2024-04-12T18:30:01.183472903Z" level=info msg="Stop container \"35acd081d7dfdb8026d6d37c37331be35cc7ec5bc716986f46cdb5a288a323a4\" with signal terminated"
Apr 12 18:30:01.198569 systemd[1]: run-containerd-runc-k8s.io-c433f83be4b2f18fab892e636eb62776be25abfa0fd07b7e908072bbfa8a5d2f-runc.jkTdf2.mount: Deactivated successfully.
Apr 12 18:30:01.220679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35acd081d7dfdb8026d6d37c37331be35cc7ec5bc716986f46cdb5a288a323a4-rootfs.mount: Deactivated successfully.
Apr 12 18:30:01.222263 env[1175]: time="2024-04-12T18:30:01.222180591Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 12 18:30:01.226339 env[1175]: time="2024-04-12T18:30:01.226307171Z" level=info msg="StopContainer for \"c433f83be4b2f18fab892e636eb62776be25abfa0fd07b7e908072bbfa8a5d2f\" with timeout 1 (s)"
Apr 12 18:30:01.226823 env[1175]: time="2024-04-12T18:30:01.226784498Z" level=info msg="Stop container \"c433f83be4b2f18fab892e636eb62776be25abfa0fd07b7e908072bbfa8a5d2f\" with signal terminated"
Apr 12 18:30:01.232112 env[1175]: time="2024-04-12T18:30:01.231042320Z" level=info msg="shim disconnected" id=35acd081d7dfdb8026d6d37c37331be35cc7ec5bc716986f46cdb5a288a323a4
Apr 12 18:30:01.232112 env[1175]: time="2024-04-12T18:30:01.231080081Z" level=warning msg="cleaning up after shim disconnected" id=35acd081d7dfdb8026d6d37c37331be35cc7ec5bc716986f46cdb5a288a323a4 namespace=k8s.io
Apr 12 18:30:01.232112 env[1175]: time="2024-04-12T18:30:01.231088761Z" level=info msg="cleaning up dead shim"
Apr 12 18:30:01.231857 systemd-networkd[1059]: lxc_health: Link DOWN
Apr 12 18:30:01.231861 systemd-networkd[1059]: lxc_health: Lost carrier
Apr 12 18:30:01.237977 env[1175]: time="2024-04-12T18:30:01.237943902Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:30:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3702 runtime=io.containerd.runc.v2\n"
Apr 12 18:30:01.240129 env[1175]: time="2024-04-12T18:30:01.240090493Z" level=info msg="StopContainer for \"35acd081d7dfdb8026d6d37c37331be35cc7ec5bc716986f46cdb5a288a323a4\" returns successfully"
Apr 12 18:30:01.240721 env[1175]: time="2024-04-12T18:30:01.240690702Z" level=info msg="StopPodSandbox for \"3e0bd6e3f8d1bfa2ee1ebbda8d8877117b6f47d94cd143f9a3924895f5450468\""
Apr 12 18:30:01.240861 env[1175]: time="2024-04-12T18:30:01.240840784Z" level=info msg="Container to stop \"35acd081d7dfdb8026d6d37c37331be35cc7ec5bc716986f46cdb5a288a323a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:30:01.243860 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e0bd6e3f8d1bfa2ee1ebbda8d8877117b6f47d94cd143f9a3924895f5450468-shm.mount: Deactivated successfully.
Apr 12 18:30:01.269620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e0bd6e3f8d1bfa2ee1ebbda8d8877117b6f47d94cd143f9a3924895f5450468-rootfs.mount: Deactivated successfully.
Apr 12 18:30:01.274978 env[1175]: time="2024-04-12T18:30:01.274936084Z" level=info msg="shim disconnected" id=3e0bd6e3f8d1bfa2ee1ebbda8d8877117b6f47d94cd143f9a3924895f5450468
Apr 12 18:30:01.275670 env[1175]: time="2024-04-12T18:30:01.275645774Z" level=warning msg="cleaning up after shim disconnected" id=3e0bd6e3f8d1bfa2ee1ebbda8d8877117b6f47d94cd143f9a3924895f5450468 namespace=k8s.io
Apr 12 18:30:01.275760 env[1175]: time="2024-04-12T18:30:01.275745615Z" level=info msg="cleaning up dead shim"
Apr 12 18:30:01.281142 env[1175]: time="2024-04-12T18:30:01.281105374Z" level=info msg="shim disconnected" id=c433f83be4b2f18fab892e636eb62776be25abfa0fd07b7e908072bbfa8a5d2f
Apr 12 18:30:01.281142 env[1175]: time="2024-04-12T18:30:01.281144055Z" level=warning msg="cleaning up after shim disconnected" id=c433f83be4b2f18fab892e636eb62776be25abfa0fd07b7e908072bbfa8a5d2f namespace=k8s.io
Apr 12 18:30:01.281330 env[1175]: time="2024-04-12T18:30:01.281153255Z" level=info msg="cleaning up dead shim"
Apr 12 18:30:01.283619 env[1175]: time="2024-04-12T18:30:01.283590050Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:30:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3751 runtime=io.containerd.runc.v2\n"
Apr 12 18:30:01.284012 env[1175]: time="2024-04-12T18:30:01.283983296Z" level=info msg="TearDown network for sandbox \"3e0bd6e3f8d1bfa2ee1ebbda8d8877117b6f47d94cd143f9a3924895f5450468\" successfully"
Apr 12 18:30:01.284099 env[1175]: time="2024-04-12T18:30:01.284081418Z" level=info msg="StopPodSandbox for \"3e0bd6e3f8d1bfa2ee1ebbda8d8877117b6f47d94cd143f9a3924895f5450468\" returns successfully"
Apr 12 18:30:01.293649 env[1175]: time="2024-04-12T18:30:01.293606077Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:30:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3762 runtime=io.containerd.runc.v2\n"
Apr 12 18:30:01.296383 env[1175]: time="2024-04-12T18:30:01.296332197Z" level=info msg="StopContainer for \"c433f83be4b2f18fab892e636eb62776be25abfa0fd07b7e908072bbfa8a5d2f\" returns successfully"
Apr 12 18:30:01.296814 env[1175]: time="2024-04-12T18:30:01.296774924Z" level=info msg="StopPodSandbox for \"d74cd9c53c44d9028a58e1b2f47799d374b2225bc2944921d66229a58f09be2d\""
Apr 12 18:30:01.296880 env[1175]: time="2024-04-12T18:30:01.296839524Z" level=info msg="Container to stop \"8824d7f30913105de686205069306da7bfefd15e5645e4aa0eb70c98fe57ff27\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:30:01.296880 env[1175]: time="2024-04-12T18:30:01.296854085Z" level=info msg="Container to stop \"c33b1c75a1456e4fce748f143e5beee37813bc76a934c61623b4cc57c0c15049\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:30:01.296880 env[1175]: time="2024-04-12T18:30:01.296866085Z" level=info msg="Container to stop \"c433f83be4b2f18fab892e636eb62776be25abfa0fd07b7e908072bbfa8a5d2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:30:01.296880 env[1175]: time="2024-04-12T18:30:01.296877405Z" level=info msg="Container to stop \"09d99803593f9fc8b394f3d543ae694e01ef5654cc0359541a15374f343f0de6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:30:01.297013 env[1175]: time="2024-04-12T18:30:01.296888485Z" level=info msg="Container to stop \"534a1bff3fdfa11c049376ac713867746b662e80b61ff4534206789926b02434\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:30:01.320001 env[1175]: time="2024-04-12T18:30:01.319957703Z" level=info msg="shim disconnected" id=d74cd9c53c44d9028a58e1b2f47799d374b2225bc2944921d66229a58f09be2d
Apr 12 18:30:01.320001 env[1175]: time="2024-04-12T18:30:01.319999184Z" level=warning msg="cleaning up after shim disconnected" id=d74cd9c53c44d9028a58e1b2f47799d374b2225bc2944921d66229a58f09be2d namespace=k8s.io
Apr 12 18:30:01.320210 env[1175]: time="2024-04-12T18:30:01.320009064Z" level=info msg="cleaning up dead shim"
Apr 12 18:30:01.333854 env[1175]: time="2024-04-12T18:30:01.333805306Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:30:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3797 runtime=io.containerd.runc.v2\n"
Apr 12 18:30:01.334125 env[1175]: time="2024-04-12T18:30:01.334081830Z" level=info msg="TearDown network for sandbox \"d74cd9c53c44d9028a58e1b2f47799d374b2225bc2944921d66229a58f09be2d\" successfully"
Apr 12 18:30:01.334125 env[1175]: time="2024-04-12T18:30:01.334113231Z" level=info msg="StopPodSandbox for \"d74cd9c53c44d9028a58e1b2f47799d374b2225bc2944921d66229a58f09be2d\" returns successfully"
Apr 12 18:30:01.409891 kubelet[2050]: I0412 18:30:01.409842 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d897070-a39c-4820-b2d1-e9cd1976d75c-cilium-config-path\") pod \"6d897070-a39c-4820-b2d1-e9cd1976d75c\" (UID: \"6d897070-a39c-4820-b2d1-e9cd1976d75c\") "
Apr 12 18:30:01.410328 kubelet[2050]: I0412 18:30:01.409907 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-cni-path\") pod \"4856726b-2910-4c15-805b-cd99088c5eb3\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") "
Apr 12 18:30:01.410328 kubelet[2050]: I0412 18:30:01.409954 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4856726b-2910-4c15-805b-cd99088c5eb3-hubble-tls\") pod \"4856726b-2910-4c15-805b-cd99088c5eb3\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") "
Apr 12 18:30:01.410328 kubelet[2050]: I0412 18:30:01.409972 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-host-proc-sys-kernel\") pod \"4856726b-2910-4c15-805b-cd99088c5eb3\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") "
Apr 12 18:30:01.410328 kubelet[2050]: I0412 18:30:01.409993 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz8tx\" (UniqueName: \"kubernetes.io/projected/6d897070-a39c-4820-b2d1-e9cd1976d75c-kube-api-access-vz8tx\") pod \"6d897070-a39c-4820-b2d1-e9cd1976d75c\" (UID: \"6d897070-a39c-4820-b2d1-e9cd1976d75c\") "
Apr 12 18:30:01.410328 kubelet[2050]: I0412 18:30:01.410013 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-cilium-cgroup\") pod \"4856726b-2910-4c15-805b-cd99088c5eb3\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") "
Apr 12 18:30:01.410328 kubelet[2050]: I0412 18:30:01.410034 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4856726b-2910-4c15-805b-cd99088c5eb3-clustermesh-secrets\") pod \"4856726b-2910-4c15-805b-cd99088c5eb3\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") "
Apr 12 18:30:01.410514 kubelet[2050]: I0412 18:30:01.410052 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-host-proc-sys-net\") pod \"4856726b-2910-4c15-805b-cd99088c5eb3\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") "
Apr 12 18:30:01.410514 kubelet[2050]: I0412 18:30:01.410069 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-xtables-lock\") pod \"4856726b-2910-4c15-805b-cd99088c5eb3\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") "
Apr 12 18:30:01.410514 kubelet[2050]: I0412 18:30:01.410086 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-bpf-maps\") pod \"4856726b-2910-4c15-805b-cd99088c5eb3\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") "
Apr 12 18:30:01.410514 kubelet[2050]: I0412 18:30:01.410347 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4856726b-2910-4c15-805b-cd99088c5eb3" (UID: "4856726b-2910-4c15-805b-cd99088c5eb3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:30:01.410514 kubelet[2050]: I0412 18:30:01.410404 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-cni-path" (OuterVolumeSpecName: "cni-path") pod "4856726b-2910-4c15-805b-cd99088c5eb3" (UID: "4856726b-2910-4c15-805b-cd99088c5eb3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:30:01.410629 kubelet[2050]: I0412 18:30:01.410470 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4856726b-2910-4c15-805b-cd99088c5eb3" (UID: "4856726b-2910-4c15-805b-cd99088c5eb3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:30:01.410654 kubelet[2050]: I0412 18:30:01.410631 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4856726b-2910-4c15-805b-cd99088c5eb3" (UID: "4856726b-2910-4c15-805b-cd99088c5eb3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:30:01.410681 kubelet[2050]: I0412 18:30:01.410664 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4856726b-2910-4c15-805b-cd99088c5eb3" (UID: "4856726b-2910-4c15-805b-cd99088c5eb3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:30:01.410770 kubelet[2050]: I0412 18:30:01.410682 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4856726b-2910-4c15-805b-cd99088c5eb3" (UID: "4856726b-2910-4c15-805b-cd99088c5eb3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:30:01.410911 kubelet[2050]: W0412 18:30:01.410875 2050 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/6d897070-a39c-4820-b2d1-e9cd1976d75c/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Apr 12 18:30:01.412929 kubelet[2050]: I0412 18:30:01.412897 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d897070-a39c-4820-b2d1-e9cd1976d75c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6d897070-a39c-4820-b2d1-e9cd1976d75c" (UID: "6d897070-a39c-4820-b2d1-e9cd1976d75c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 12 18:30:01.413304 kubelet[2050]: I0412 18:30:01.413269 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4856726b-2910-4c15-805b-cd99088c5eb3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4856726b-2910-4c15-805b-cd99088c5eb3" (UID: "4856726b-2910-4c15-805b-cd99088c5eb3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 12 18:30:01.413475 kubelet[2050]: I0412 18:30:01.413431 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d897070-a39c-4820-b2d1-e9cd1976d75c-kube-api-access-vz8tx" (OuterVolumeSpecName: "kube-api-access-vz8tx") pod "6d897070-a39c-4820-b2d1-e9cd1976d75c" (UID: "6d897070-a39c-4820-b2d1-e9cd1976d75c"). InnerVolumeSpecName "kube-api-access-vz8tx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 12 18:30:01.413549 kubelet[2050]: I0412 18:30:01.413534 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4856726b-2910-4c15-805b-cd99088c5eb3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4856726b-2910-4c15-805b-cd99088c5eb3" (UID: "4856726b-2910-4c15-805b-cd99088c5eb3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 12 18:30:01.510803 kubelet[2050]: I0412 18:30:01.510783 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2czmv\" (UniqueName: \"kubernetes.io/projected/4856726b-2910-4c15-805b-cd99088c5eb3-kube-api-access-2czmv\") pod \"4856726b-2910-4c15-805b-cd99088c5eb3\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") "
Apr 12 18:30:01.510939 kubelet[2050]: I0412 18:30:01.510927 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-lib-modules\") pod \"4856726b-2910-4c15-805b-cd99088c5eb3\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") "
Apr 12 18:30:01.511017 kubelet[2050]: I0412 18:30:01.511007 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-hostproc\") pod \"4856726b-2910-4c15-805b-cd99088c5eb3\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") "
Apr 12 18:30:01.511079 kubelet[2050]: I0412 18:30:01.511054 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-hostproc" (OuterVolumeSpecName: "hostproc") pod "4856726b-2910-4c15-805b-cd99088c5eb3" (UID: "4856726b-2910-4c15-805b-cd99088c5eb3"). InnerVolumeSpecName "hostproc".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:30:01.511079 kubelet[2050]: I0412 18:30:01.511025 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4856726b-2910-4c15-805b-cd99088c5eb3" (UID: "4856726b-2910-4c15-805b-cd99088c5eb3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:30:01.511153 kubelet[2050]: I0412 18:30:01.511138 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4856726b-2910-4c15-805b-cd99088c5eb3" (UID: "4856726b-2910-4c15-805b-cd99088c5eb3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:30:01.511216 kubelet[2050]: I0412 18:30:01.511204 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-etc-cni-netd\") pod \"4856726b-2910-4c15-805b-cd99088c5eb3\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") " Apr 12 18:30:01.511293 kubelet[2050]: I0412 18:30:01.511283 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4856726b-2910-4c15-805b-cd99088c5eb3-cilium-config-path\") pod \"4856726b-2910-4c15-805b-cd99088c5eb3\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") " Apr 12 18:30:01.511362 kubelet[2050]: I0412 18:30:01.511353 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-cilium-run\") pod \"4856726b-2910-4c15-805b-cd99088c5eb3\" (UID: \"4856726b-2910-4c15-805b-cd99088c5eb3\") " Apr 12 18:30:01.511462 
kubelet[2050]: I0412 18:30:01.511420 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4856726b-2910-4c15-805b-cd99088c5eb3" (UID: "4856726b-2910-4c15-805b-cd99088c5eb3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:30:01.511511 kubelet[2050]: W0412 18:30:01.511491 2050 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/4856726b-2910-4c15-805b-cd99088c5eb3/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Apr 12 18:30:01.511554 kubelet[2050]: I0412 18:30:01.511434 2050 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:01.511619 kubelet[2050]: I0412 18:30:01.511609 2050 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:01.511683 kubelet[2050]: I0412 18:30:01.511674 2050 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:01.511750 kubelet[2050]: I0412 18:30:01.511740 2050 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:01.511802 kubelet[2050]: I0412 18:30:01.511794 2050 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-hostproc\") on node \"localhost\" DevicePath \"\"" 
Apr 12 18:30:01.511860 kubelet[2050]: I0412 18:30:01.511851 2050 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:01.511916 kubelet[2050]: I0412 18:30:01.511907 2050 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:01.512028 kubelet[2050]: I0412 18:30:01.512018 2050 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4856726b-2910-4c15-805b-cd99088c5eb3-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:01.512084 kubelet[2050]: I0412 18:30:01.512075 2050 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:01.512146 kubelet[2050]: I0412 18:30:01.512137 2050 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vz8tx\" (UniqueName: \"kubernetes.io/projected/6d897070-a39c-4820-b2d1-e9cd1976d75c-kube-api-access-vz8tx\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:01.512221 kubelet[2050]: I0412 18:30:01.512209 2050 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d897070-a39c-4820-b2d1-e9cd1976d75c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:01.512284 kubelet[2050]: I0412 18:30:01.512274 2050 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:01.512346 kubelet[2050]: I0412 18:30:01.512337 2050 reconciler_common.go:300] 
"Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4856726b-2910-4c15-805b-cd99088c5eb3-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:01.513092 kubelet[2050]: I0412 18:30:01.513057 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4856726b-2910-4c15-805b-cd99088c5eb3-kube-api-access-2czmv" (OuterVolumeSpecName: "kube-api-access-2czmv") pod "4856726b-2910-4c15-805b-cd99088c5eb3" (UID: "4856726b-2910-4c15-805b-cd99088c5eb3"). InnerVolumeSpecName "kube-api-access-2czmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:30:01.513336 kubelet[2050]: I0412 18:30:01.513299 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4856726b-2910-4c15-805b-cd99088c5eb3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4856726b-2910-4c15-805b-cd99088c5eb3" (UID: "4856726b-2910-4c15-805b-cd99088c5eb3"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:30:01.613432 kubelet[2050]: I0412 18:30:01.613403 2050 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4856726b-2910-4c15-805b-cd99088c5eb3-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:01.613432 kubelet[2050]: I0412 18:30:01.613429 2050 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4856726b-2910-4c15-805b-cd99088c5eb3-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:01.613547 kubelet[2050]: I0412 18:30:01.613448 2050 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2czmv\" (UniqueName: \"kubernetes.io/projected/4856726b-2910-4c15-805b-cd99088c5eb3-kube-api-access-2czmv\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:02.190355 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c433f83be4b2f18fab892e636eb62776be25abfa0fd07b7e908072bbfa8a5d2f-rootfs.mount: Deactivated successfully. Apr 12 18:30:02.190516 systemd[1]: var-lib-kubelet-pods-6d897070\x2da39c\x2d4820\x2db2d1\x2de9cd1976d75c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvz8tx.mount: Deactivated successfully. Apr 12 18:30:02.190633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d74cd9c53c44d9028a58e1b2f47799d374b2225bc2944921d66229a58f09be2d-rootfs.mount: Deactivated successfully. Apr 12 18:30:02.190727 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d74cd9c53c44d9028a58e1b2f47799d374b2225bc2944921d66229a58f09be2d-shm.mount: Deactivated successfully. Apr 12 18:30:02.190807 systemd[1]: var-lib-kubelet-pods-4856726b\x2d2910\x2d4c15\x2d805b\x2dcd99088c5eb3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2czmv.mount: Deactivated successfully. 
Apr 12 18:30:02.190891 systemd[1]: var-lib-kubelet-pods-4856726b\x2d2910\x2d4c15\x2d805b\x2dcd99088c5eb3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 12 18:30:02.190973 systemd[1]: var-lib-kubelet-pods-4856726b\x2d2910\x2d4c15\x2d805b\x2dcd99088c5eb3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 12 18:30:02.194519 kubelet[2050]: I0412 18:30:02.194494 2050 scope.go:115] "RemoveContainer" containerID="35acd081d7dfdb8026d6d37c37331be35cc7ec5bc716986f46cdb5a288a323a4" Apr 12 18:30:02.196010 env[1175]: time="2024-04-12T18:30:02.195972229Z" level=info msg="RemoveContainer for \"35acd081d7dfdb8026d6d37c37331be35cc7ec5bc716986f46cdb5a288a323a4\"" Apr 12 18:30:02.199570 env[1175]: time="2024-04-12T18:30:02.199533000Z" level=info msg="RemoveContainer for \"35acd081d7dfdb8026d6d37c37331be35cc7ec5bc716986f46cdb5a288a323a4\" returns successfully" Apr 12 18:30:02.199859 kubelet[2050]: I0412 18:30:02.199814 2050 scope.go:115] "RemoveContainer" containerID="35acd081d7dfdb8026d6d37c37331be35cc7ec5bc716986f46cdb5a288a323a4" Apr 12 18:30:02.200492 env[1175]: time="2024-04-12T18:30:02.200361052Z" level=error msg="ContainerStatus for \"35acd081d7dfdb8026d6d37c37331be35cc7ec5bc716986f46cdb5a288a323a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"35acd081d7dfdb8026d6d37c37331be35cc7ec5bc716986f46cdb5a288a323a4\": not found" Apr 12 18:30:02.201569 kubelet[2050]: E0412 18:30:02.201547 2050 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"35acd081d7dfdb8026d6d37c37331be35cc7ec5bc716986f46cdb5a288a323a4\": not found" containerID="35acd081d7dfdb8026d6d37c37331be35cc7ec5bc716986f46cdb5a288a323a4" Apr 12 18:30:02.201920 kubelet[2050]: I0412 18:30:02.201901 2050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd 
ID:35acd081d7dfdb8026d6d37c37331be35cc7ec5bc716986f46cdb5a288a323a4} err="failed to get container status \"35acd081d7dfdb8026d6d37c37331be35cc7ec5bc716986f46cdb5a288a323a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"35acd081d7dfdb8026d6d37c37331be35cc7ec5bc716986f46cdb5a288a323a4\": not found" Apr 12 18:30:02.205071 kubelet[2050]: I0412 18:30:02.205046 2050 scope.go:115] "RemoveContainer" containerID="c433f83be4b2f18fab892e636eb62776be25abfa0fd07b7e908072bbfa8a5d2f" Apr 12 18:30:02.208075 env[1175]: time="2024-04-12T18:30:02.208047042Z" level=info msg="RemoveContainer for \"c433f83be4b2f18fab892e636eb62776be25abfa0fd07b7e908072bbfa8a5d2f\"" Apr 12 18:30:02.210369 env[1175]: time="2024-04-12T18:30:02.210342435Z" level=info msg="RemoveContainer for \"c433f83be4b2f18fab892e636eb62776be25abfa0fd07b7e908072bbfa8a5d2f\" returns successfully" Apr 12 18:30:02.210586 kubelet[2050]: I0412 18:30:02.210571 2050 scope.go:115] "RemoveContainer" containerID="534a1bff3fdfa11c049376ac713867746b662e80b61ff4534206789926b02434" Apr 12 18:30:02.215164 env[1175]: time="2024-04-12T18:30:02.215133503Z" level=info msg="RemoveContainer for \"534a1bff3fdfa11c049376ac713867746b662e80b61ff4534206789926b02434\"" Apr 12 18:30:02.217352 env[1175]: time="2024-04-12T18:30:02.217312855Z" level=info msg="RemoveContainer for \"534a1bff3fdfa11c049376ac713867746b662e80b61ff4534206789926b02434\" returns successfully" Apr 12 18:30:02.217605 kubelet[2050]: I0412 18:30:02.217579 2050 scope.go:115] "RemoveContainer" containerID="c33b1c75a1456e4fce748f143e5beee37813bc76a934c61623b4cc57c0c15049" Apr 12 18:30:02.218479 env[1175]: time="2024-04-12T18:30:02.218451911Z" level=info msg="RemoveContainer for \"c33b1c75a1456e4fce748f143e5beee37813bc76a934c61623b4cc57c0c15049\"" Apr 12 18:30:02.220364 env[1175]: time="2024-04-12T18:30:02.220326698Z" level=info msg="RemoveContainer for \"c33b1c75a1456e4fce748f143e5beee37813bc76a934c61623b4cc57c0c15049\" returns successfully" Apr 
12 18:30:02.220544 kubelet[2050]: I0412 18:30:02.220526 2050 scope.go:115] "RemoveContainer" containerID="8824d7f30913105de686205069306da7bfefd15e5645e4aa0eb70c98fe57ff27" Apr 12 18:30:02.221452 env[1175]: time="2024-04-12T18:30:02.221419353Z" level=info msg="RemoveContainer for \"8824d7f30913105de686205069306da7bfefd15e5645e4aa0eb70c98fe57ff27\"" Apr 12 18:30:02.223586 env[1175]: time="2024-04-12T18:30:02.223552584Z" level=info msg="RemoveContainer for \"8824d7f30913105de686205069306da7bfefd15e5645e4aa0eb70c98fe57ff27\" returns successfully" Apr 12 18:30:02.223800 kubelet[2050]: I0412 18:30:02.223785 2050 scope.go:115] "RemoveContainer" containerID="09d99803593f9fc8b394f3d543ae694e01ef5654cc0359541a15374f343f0de6" Apr 12 18:30:02.224689 env[1175]: time="2024-04-12T18:30:02.224662640Z" level=info msg="RemoveContainer for \"09d99803593f9fc8b394f3d543ae694e01ef5654cc0359541a15374f343f0de6\"" Apr 12 18:30:02.227005 env[1175]: time="2024-04-12T18:30:02.226969273Z" level=info msg="RemoveContainer for \"09d99803593f9fc8b394f3d543ae694e01ef5654cc0359541a15374f343f0de6\" returns successfully" Apr 12 18:30:02.227165 kubelet[2050]: I0412 18:30:02.227148 2050 scope.go:115] "RemoveContainer" containerID="c433f83be4b2f18fab892e636eb62776be25abfa0fd07b7e908072bbfa8a5d2f" Apr 12 18:30:02.227491 env[1175]: time="2024-04-12T18:30:02.227402639Z" level=error msg="ContainerStatus for \"c433f83be4b2f18fab892e636eb62776be25abfa0fd07b7e908072bbfa8a5d2f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c433f83be4b2f18fab892e636eb62776be25abfa0fd07b7e908072bbfa8a5d2f\": not found" Apr 12 18:30:02.227629 kubelet[2050]: E0412 18:30:02.227615 2050 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c433f83be4b2f18fab892e636eb62776be25abfa0fd07b7e908072bbfa8a5d2f\": not found" 
containerID="c433f83be4b2f18fab892e636eb62776be25abfa0fd07b7e908072bbfa8a5d2f" Apr 12 18:30:02.227708 kubelet[2050]: I0412 18:30:02.227697 2050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c433f83be4b2f18fab892e636eb62776be25abfa0fd07b7e908072bbfa8a5d2f} err="failed to get container status \"c433f83be4b2f18fab892e636eb62776be25abfa0fd07b7e908072bbfa8a5d2f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c433f83be4b2f18fab892e636eb62776be25abfa0fd07b7e908072bbfa8a5d2f\": not found" Apr 12 18:30:02.227781 kubelet[2050]: I0412 18:30:02.227771 2050 scope.go:115] "RemoveContainer" containerID="534a1bff3fdfa11c049376ac713867746b662e80b61ff4534206789926b02434" Apr 12 18:30:02.228066 env[1175]: time="2024-04-12T18:30:02.227977327Z" level=error msg="ContainerStatus for \"534a1bff3fdfa11c049376ac713867746b662e80b61ff4534206789926b02434\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"534a1bff3fdfa11c049376ac713867746b662e80b61ff4534206789926b02434\": not found" Apr 12 18:30:02.228225 kubelet[2050]: E0412 18:30:02.228197 2050 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"534a1bff3fdfa11c049376ac713867746b662e80b61ff4534206789926b02434\": not found" containerID="534a1bff3fdfa11c049376ac713867746b662e80b61ff4534206789926b02434" Apr 12 18:30:02.228310 kubelet[2050]: I0412 18:30:02.228299 2050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:534a1bff3fdfa11c049376ac713867746b662e80b61ff4534206789926b02434} err="failed to get container status \"534a1bff3fdfa11c049376ac713867746b662e80b61ff4534206789926b02434\": rpc error: code = NotFound desc = an error occurred when try to find container \"534a1bff3fdfa11c049376ac713867746b662e80b61ff4534206789926b02434\": not found" Apr 12 18:30:02.228371 
kubelet[2050]: I0412 18:30:02.228361 2050 scope.go:115] "RemoveContainer" containerID="c33b1c75a1456e4fce748f143e5beee37813bc76a934c61623b4cc57c0c15049" Apr 12 18:30:02.228641 env[1175]: time="2024-04-12T18:30:02.228580816Z" level=error msg="ContainerStatus for \"c33b1c75a1456e4fce748f143e5beee37813bc76a934c61623b4cc57c0c15049\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c33b1c75a1456e4fce748f143e5beee37813bc76a934c61623b4cc57c0c15049\": not found" Apr 12 18:30:02.228797 kubelet[2050]: E0412 18:30:02.228782 2050 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c33b1c75a1456e4fce748f143e5beee37813bc76a934c61623b4cc57c0c15049\": not found" containerID="c33b1c75a1456e4fce748f143e5beee37813bc76a934c61623b4cc57c0c15049" Apr 12 18:30:02.228879 kubelet[2050]: I0412 18:30:02.228868 2050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c33b1c75a1456e4fce748f143e5beee37813bc76a934c61623b4cc57c0c15049} err="failed to get container status \"c33b1c75a1456e4fce748f143e5beee37813bc76a934c61623b4cc57c0c15049\": rpc error: code = NotFound desc = an error occurred when try to find container \"c33b1c75a1456e4fce748f143e5beee37813bc76a934c61623b4cc57c0c15049\": not found" Apr 12 18:30:02.228934 kubelet[2050]: I0412 18:30:02.228925 2050 scope.go:115] "RemoveContainer" containerID="8824d7f30913105de686205069306da7bfefd15e5645e4aa0eb70c98fe57ff27" Apr 12 18:30:02.229160 env[1175]: time="2024-04-12T18:30:02.229118023Z" level=error msg="ContainerStatus for \"8824d7f30913105de686205069306da7bfefd15e5645e4aa0eb70c98fe57ff27\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8824d7f30913105de686205069306da7bfefd15e5645e4aa0eb70c98fe57ff27\": not found" Apr 12 18:30:02.229315 kubelet[2050]: E0412 18:30:02.229301 2050 remote_runtime.go:415] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8824d7f30913105de686205069306da7bfefd15e5645e4aa0eb70c98fe57ff27\": not found" containerID="8824d7f30913105de686205069306da7bfefd15e5645e4aa0eb70c98fe57ff27" Apr 12 18:30:02.229399 kubelet[2050]: I0412 18:30:02.229388 2050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8824d7f30913105de686205069306da7bfefd15e5645e4aa0eb70c98fe57ff27} err="failed to get container status \"8824d7f30913105de686205069306da7bfefd15e5645e4aa0eb70c98fe57ff27\": rpc error: code = NotFound desc = an error occurred when try to find container \"8824d7f30913105de686205069306da7bfefd15e5645e4aa0eb70c98fe57ff27\": not found" Apr 12 18:30:02.229483 kubelet[2050]: I0412 18:30:02.229473 2050 scope.go:115] "RemoveContainer" containerID="09d99803593f9fc8b394f3d543ae694e01ef5654cc0359541a15374f343f0de6" Apr 12 18:30:02.229696 env[1175]: time="2024-04-12T18:30:02.229659871Z" level=error msg="ContainerStatus for \"09d99803593f9fc8b394f3d543ae694e01ef5654cc0359541a15374f343f0de6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"09d99803593f9fc8b394f3d543ae694e01ef5654cc0359541a15374f343f0de6\": not found" Apr 12 18:30:02.229836 kubelet[2050]: E0412 18:30:02.229823 2050 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"09d99803593f9fc8b394f3d543ae694e01ef5654cc0359541a15374f343f0de6\": not found" containerID="09d99803593f9fc8b394f3d543ae694e01ef5654cc0359541a15374f343f0de6" Apr 12 18:30:02.229911 kubelet[2050]: I0412 18:30:02.229900 2050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:09d99803593f9fc8b394f3d543ae694e01ef5654cc0359541a15374f343f0de6} err="failed to get container status 
\"09d99803593f9fc8b394f3d543ae694e01ef5654cc0359541a15374f343f0de6\": rpc error: code = NotFound desc = an error occurred when try to find container \"09d99803593f9fc8b394f3d543ae694e01ef5654cc0359541a15374f343f0de6\": not found" Apr 12 18:30:03.155413 sshd[3642]: pam_unix(sshd:session): session closed for user core Apr 12 18:30:03.157979 systemd[1]: Started sshd@21-10.0.0.80:22-10.0.0.1:60076.service. Apr 12 18:30:03.160312 systemd[1]: sshd@20-10.0.0.80:22-10.0.0.1:60060.service: Deactivated successfully. Apr 12 18:30:03.161414 systemd[1]: session-21.scope: Deactivated successfully. Apr 12 18:30:03.162264 systemd-logind[1163]: Session 21 logged out. Waiting for processes to exit. Apr 12 18:30:03.163117 systemd-logind[1163]: Removed session 21. Apr 12 18:30:03.201774 sshd[3813]: Accepted publickey for core from 10.0.0.1 port 60076 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c Apr 12 18:30:03.203158 sshd[3813]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:30:03.206497 systemd-logind[1163]: New session 22 of user core. Apr 12 18:30:03.207098 systemd[1]: Started session-22.scope. Apr 12 18:30:03.962310 sshd[3813]: pam_unix(sshd:session): session closed for user core Apr 12 18:30:03.964210 systemd[1]: Started sshd@22-10.0.0.80:22-10.0.0.1:60080.service. 
Apr 12 18:30:03.977897 kubelet[2050]: I0412 18:30:03.977828 2050 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:30:03.978229 kubelet[2050]: E0412 18:30:03.978027 2050 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4856726b-2910-4c15-805b-cd99088c5eb3" containerName="mount-bpf-fs" Apr 12 18:30:03.978229 kubelet[2050]: E0412 18:30:03.978052 2050 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4856726b-2910-4c15-805b-cd99088c5eb3" containerName="clean-cilium-state" Apr 12 18:30:03.978229 kubelet[2050]: E0412 18:30:03.978060 2050 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4856726b-2910-4c15-805b-cd99088c5eb3" containerName="mount-cgroup" Apr 12 18:30:03.978229 kubelet[2050]: E0412 18:30:03.978067 2050 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4856726b-2910-4c15-805b-cd99088c5eb3" containerName="apply-sysctl-overwrites" Apr 12 18:30:03.978619 systemd[1]: sshd@21-10.0.0.80:22-10.0.0.1:60076.service: Deactivated successfully. Apr 12 18:30:03.979646 systemd[1]: session-22.scope: Deactivated successfully. Apr 12 18:30:03.979731 systemd-logind[1163]: Session 22 logged out. Waiting for processes to exit. 
Apr 12 18:30:03.991466 kubelet[2050]: E0412 18:30:03.991023 2050 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d897070-a39c-4820-b2d1-e9cd1976d75c" containerName="cilium-operator" Apr 12 18:30:03.991466 kubelet[2050]: E0412 18:30:03.991067 2050 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4856726b-2910-4c15-805b-cd99088c5eb3" containerName="cilium-agent" Apr 12 18:30:03.991466 kubelet[2050]: I0412 18:30:03.991201 2050 memory_manager.go:346] "RemoveStaleState removing state" podUID="6d897070-a39c-4820-b2d1-e9cd1976d75c" containerName="cilium-operator" Apr 12 18:30:03.991466 kubelet[2050]: I0412 18:30:03.991217 2050 memory_manager.go:346] "RemoveStaleState removing state" podUID="4856726b-2910-4c15-805b-cd99088c5eb3" containerName="cilium-agent" Apr 12 18:30:03.998848 systemd-logind[1163]: Removed session 22. Apr 12 18:30:04.003466 kubelet[2050]: I0412 18:30:04.002143 2050 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=4856726b-2910-4c15-805b-cd99088c5eb3 path="/var/lib/kubelet/pods/4856726b-2910-4c15-805b-cd99088c5eb3/volumes" Apr 12 18:30:04.003466 kubelet[2050]: I0412 18:30:04.002845 2050 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=6d897070-a39c-4820-b2d1-e9cd1976d75c path="/var/lib/kubelet/pods/6d897070-a39c-4820-b2d1-e9cd1976d75c/volumes" Apr 12 18:30:04.027679 kubelet[2050]: I0412 18:30:04.027646 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-cilium-config-path\") pod \"cilium-44hpd\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " pod="kube-system/cilium-44hpd" Apr 12 18:30:04.027784 kubelet[2050]: I0412 18:30:04.027697 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-cilium-run\") pod \"cilium-44hpd\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " pod="kube-system/cilium-44hpd" Apr 12 18:30:04.027784 kubelet[2050]: I0412 18:30:04.027718 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-lib-modules\") pod \"cilium-44hpd\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " pod="kube-system/cilium-44hpd" Apr 12 18:30:04.027784 kubelet[2050]: I0412 18:30:04.027737 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzqqz\" (UniqueName: \"kubernetes.io/projected/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-kube-api-access-fzqqz\") pod \"cilium-44hpd\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " pod="kube-system/cilium-44hpd" Apr 12 18:30:04.027784 kubelet[2050]: I0412 18:30:04.027765 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-cni-path\") pod \"cilium-44hpd\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " pod="kube-system/cilium-44hpd" Apr 12 18:30:04.027784 kubelet[2050]: I0412 18:30:04.027784 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-cilium-ipsec-secrets\") pod \"cilium-44hpd\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " pod="kube-system/cilium-44hpd" Apr 12 18:30:04.027905 kubelet[2050]: I0412 18:30:04.027803 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-etc-cni-netd\") pod \"cilium-44hpd\" (UID: 
\"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " pod="kube-system/cilium-44hpd" Apr 12 18:30:04.027905 kubelet[2050]: I0412 18:30:04.027830 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-host-proc-sys-net\") pod \"cilium-44hpd\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " pod="kube-system/cilium-44hpd" Apr 12 18:30:04.027905 kubelet[2050]: I0412 18:30:04.027851 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-hostproc\") pod \"cilium-44hpd\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " pod="kube-system/cilium-44hpd" Apr 12 18:30:04.027905 kubelet[2050]: I0412 18:30:04.027871 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-xtables-lock\") pod \"cilium-44hpd\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " pod="kube-system/cilium-44hpd" Apr 12 18:30:04.027905 kubelet[2050]: I0412 18:30:04.027890 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-clustermesh-secrets\") pod \"cilium-44hpd\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " pod="kube-system/cilium-44hpd" Apr 12 18:30:04.028007 kubelet[2050]: I0412 18:30:04.027918 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-host-proc-sys-kernel\") pod \"cilium-44hpd\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " pod="kube-system/cilium-44hpd" Apr 12 18:30:04.028007 kubelet[2050]: 
I0412 18:30:04.027936 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-hubble-tls\") pod \"cilium-44hpd\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " pod="kube-system/cilium-44hpd" Apr 12 18:30:04.028007 kubelet[2050]: I0412 18:30:04.027953 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-bpf-maps\") pod \"cilium-44hpd\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " pod="kube-system/cilium-44hpd" Apr 12 18:30:04.028007 kubelet[2050]: I0412 18:30:04.027971 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-cilium-cgroup\") pod \"cilium-44hpd\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " pod="kube-system/cilium-44hpd" Apr 12 18:30:04.032332 sshd[3826]: Accepted publickey for core from 10.0.0.1 port 60080 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c Apr 12 18:30:04.033559 sshd[3826]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:30:04.037208 systemd-logind[1163]: New session 23 of user core. Apr 12 18:30:04.037642 systemd[1]: Started session-23.scope. Apr 12 18:30:04.056947 kubelet[2050]: E0412 18:30:04.056929 2050 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 12 18:30:04.171128 sshd[3826]: pam_unix(sshd:session): session closed for user core Apr 12 18:30:04.175392 systemd[1]: Started sshd@23-10.0.0.80:22-10.0.0.1:60084.service. Apr 12 18:30:04.180184 systemd[1]: sshd@22-10.0.0.80:22-10.0.0.1:60080.service: Deactivated successfully. 
Apr 12 18:30:04.181864 systemd-logind[1163]: Session 23 logged out. Waiting for processes to exit. Apr 12 18:30:04.181874 systemd[1]: session-23.scope: Deactivated successfully. Apr 12 18:30:04.187642 systemd-logind[1163]: Removed session 23. Apr 12 18:30:04.223487 sshd[3844]: Accepted publickey for core from 10.0.0.1 port 60084 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c Apr 12 18:30:04.223861 sshd[3844]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:30:04.227730 systemd[1]: Started session-24.scope. Apr 12 18:30:04.227747 systemd-logind[1163]: New session 24 of user core. Apr 12 18:30:04.298086 kubelet[2050]: E0412 18:30:04.298048 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:30:04.299464 env[1175]: time="2024-04-12T18:30:04.299344042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-44hpd,Uid:ab3189d0-281b-48f8-acf1-87a6ab7d0a93,Namespace:kube-system,Attempt:0,}" Apr 12 18:30:04.312829 env[1175]: time="2024-04-12T18:30:04.312750864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:30:04.312829 env[1175]: time="2024-04-12T18:30:04.312791465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:30:04.312991 env[1175]: time="2024-04-12T18:30:04.312808825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:30:04.313201 env[1175]: time="2024-04-12T18:30:04.313162350Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/72085fafa93c1dd0b8764dd5c18672b9d0d2b8a6fc6772bf1d8071e78092e3b0 pid=3864 runtime=io.containerd.runc.v2 Apr 12 18:30:04.364311 env[1175]: time="2024-04-12T18:30:04.364268566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-44hpd,Uid:ab3189d0-281b-48f8-acf1-87a6ab7d0a93,Namespace:kube-system,Attempt:0,} returns sandbox id \"72085fafa93c1dd0b8764dd5c18672b9d0d2b8a6fc6772bf1d8071e78092e3b0\"" Apr 12 18:30:04.364925 kubelet[2050]: E0412 18:30:04.364906 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:30:04.367381 env[1175]: time="2024-04-12T18:30:04.367320368Z" level=info msg="CreateContainer within sandbox \"72085fafa93c1dd0b8764dd5c18672b9d0d2b8a6fc6772bf1d8071e78092e3b0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:30:04.387849 env[1175]: time="2024-04-12T18:30:04.387798567Z" level=info msg="CreateContainer within sandbox \"72085fafa93c1dd0b8764dd5c18672b9d0d2b8a6fc6772bf1d8071e78092e3b0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8b0fe5ee14cc1636a20417de4a8a1f110c54ddc9178f983a1d3ea3e47836fd75\"" Apr 12 18:30:04.388656 env[1175]: time="2024-04-12T18:30:04.388632378Z" level=info msg="StartContainer for \"8b0fe5ee14cc1636a20417de4a8a1f110c54ddc9178f983a1d3ea3e47836fd75\"" Apr 12 18:30:04.448979 env[1175]: time="2024-04-12T18:30:04.448926360Z" level=info msg="StartContainer for \"8b0fe5ee14cc1636a20417de4a8a1f110c54ddc9178f983a1d3ea3e47836fd75\" returns successfully" Apr 12 18:30:04.478016 env[1175]: time="2024-04-12T18:30:04.477912555Z" level=info msg="shim disconnected" 
id=8b0fe5ee14cc1636a20417de4a8a1f110c54ddc9178f983a1d3ea3e47836fd75 Apr 12 18:30:04.478016 env[1175]: time="2024-04-12T18:30:04.477956555Z" level=warning msg="cleaning up after shim disconnected" id=8b0fe5ee14cc1636a20417de4a8a1f110c54ddc9178f983a1d3ea3e47836fd75 namespace=k8s.io Apr 12 18:30:04.478016 env[1175]: time="2024-04-12T18:30:04.477966275Z" level=info msg="cleaning up dead shim" Apr 12 18:30:04.484684 env[1175]: time="2024-04-12T18:30:04.484650847Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:30:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3946 runtime=io.containerd.runc.v2\n" Apr 12 18:30:05.215958 env[1175]: time="2024-04-12T18:30:05.213289387Z" level=info msg="StopPodSandbox for \"72085fafa93c1dd0b8764dd5c18672b9d0d2b8a6fc6772bf1d8071e78092e3b0\"" Apr 12 18:30:05.215958 env[1175]: time="2024-04-12T18:30:05.213344868Z" level=info msg="Container to stop \"8b0fe5ee14cc1636a20417de4a8a1f110c54ddc9178f983a1d3ea3e47836fd75\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:30:05.214939 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-72085fafa93c1dd0b8764dd5c18672b9d0d2b8a6fc6772bf1d8071e78092e3b0-shm.mount: Deactivated successfully. Apr 12 18:30:05.237802 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72085fafa93c1dd0b8764dd5c18672b9d0d2b8a6fc6772bf1d8071e78092e3b0-rootfs.mount: Deactivated successfully. 
Apr 12 18:30:05.242037 env[1175]: time="2024-04-12T18:30:05.241986409Z" level=info msg="shim disconnected" id=72085fafa93c1dd0b8764dd5c18672b9d0d2b8a6fc6772bf1d8071e78092e3b0 Apr 12 18:30:05.242037 env[1175]: time="2024-04-12T18:30:05.242035290Z" level=warning msg="cleaning up after shim disconnected" id=72085fafa93c1dd0b8764dd5c18672b9d0d2b8a6fc6772bf1d8071e78092e3b0 namespace=k8s.io Apr 12 18:30:05.242201 env[1175]: time="2024-04-12T18:30:05.242045290Z" level=info msg="cleaning up dead shim" Apr 12 18:30:05.248384 env[1175]: time="2024-04-12T18:30:05.248339574Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:30:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3978 runtime=io.containerd.runc.v2\n" Apr 12 18:30:05.248680 env[1175]: time="2024-04-12T18:30:05.248642298Z" level=info msg="TearDown network for sandbox \"72085fafa93c1dd0b8764dd5c18672b9d0d2b8a6fc6772bf1d8071e78092e3b0\" successfully" Apr 12 18:30:05.248680 env[1175]: time="2024-04-12T18:30:05.248674698Z" level=info msg="StopPodSandbox for \"72085fafa93c1dd0b8764dd5c18672b9d0d2b8a6fc6772bf1d8071e78092e3b0\" returns successfully" Apr 12 18:30:05.335939 kubelet[2050]: I0412 18:30:05.335881 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-cilium-cgroup\") pod \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " Apr 12 18:30:05.335939 kubelet[2050]: I0412 18:30:05.335939 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-clustermesh-secrets\") pod \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " Apr 12 18:30:05.336399 kubelet[2050]: I0412 18:30:05.335971 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-cni-path\") pod \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " Apr 12 18:30:05.336399 kubelet[2050]: I0412 18:30:05.336002 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-bpf-maps\") pod \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " Apr 12 18:30:05.336399 kubelet[2050]: I0412 18:30:05.336039 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-cilium-ipsec-secrets\") pod \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " Apr 12 18:30:05.336399 kubelet[2050]: I0412 18:30:05.336069 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-cilium-run\") pod \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " Apr 12 18:30:05.336399 kubelet[2050]: I0412 18:30:05.336107 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzqqz\" (UniqueName: \"kubernetes.io/projected/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-kube-api-access-fzqqz\") pod \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " Apr 12 18:30:05.336399 kubelet[2050]: I0412 18:30:05.336150 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-etc-cni-netd\") pod \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " Apr 12 18:30:05.336571 kubelet[2050]: I0412 18:30:05.336168 
2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-lib-modules\") pod \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " Apr 12 18:30:05.336571 kubelet[2050]: I0412 18:30:05.336188 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-host-proc-sys-net\") pod \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " Apr 12 18:30:05.336571 kubelet[2050]: I0412 18:30:05.336204 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-xtables-lock\") pod \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " Apr 12 18:30:05.336571 kubelet[2050]: I0412 18:30:05.336223 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-hubble-tls\") pod \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " Apr 12 18:30:05.336571 kubelet[2050]: I0412 18:30:05.336250 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-cilium-config-path\") pod \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " Apr 12 18:30:05.336571 kubelet[2050]: I0412 18:30:05.336267 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-hostproc\") pod \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\" (UID: 
\"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " Apr 12 18:30:05.336709 kubelet[2050]: I0412 18:30:05.336285 2050 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-host-proc-sys-kernel\") pod \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\" (UID: \"ab3189d0-281b-48f8-acf1-87a6ab7d0a93\") " Apr 12 18:30:05.336709 kubelet[2050]: I0412 18:30:05.336341 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ab3189d0-281b-48f8-acf1-87a6ab7d0a93" (UID: "ab3189d0-281b-48f8-acf1-87a6ab7d0a93"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:30:05.336709 kubelet[2050]: I0412 18:30:05.336368 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ab3189d0-281b-48f8-acf1-87a6ab7d0a93" (UID: "ab3189d0-281b-48f8-acf1-87a6ab7d0a93"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:30:05.336709 kubelet[2050]: I0412 18:30:05.336669 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ab3189d0-281b-48f8-acf1-87a6ab7d0a93" (UID: "ab3189d0-281b-48f8-acf1-87a6ab7d0a93"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:30:05.336796 kubelet[2050]: I0412 18:30:05.336722 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ab3189d0-281b-48f8-acf1-87a6ab7d0a93" (UID: "ab3189d0-281b-48f8-acf1-87a6ab7d0a93"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:30:05.336796 kubelet[2050]: I0412 18:30:05.336766 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-cni-path" (OuterVolumeSpecName: "cni-path") pod "ab3189d0-281b-48f8-acf1-87a6ab7d0a93" (UID: "ab3189d0-281b-48f8-acf1-87a6ab7d0a93"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:30:05.336796 kubelet[2050]: I0412 18:30:05.336783 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ab3189d0-281b-48f8-acf1-87a6ab7d0a93" (UID: "ab3189d0-281b-48f8-acf1-87a6ab7d0a93"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:30:05.336866 kubelet[2050]: I0412 18:30:05.336800 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ab3189d0-281b-48f8-acf1-87a6ab7d0a93" (UID: "ab3189d0-281b-48f8-acf1-87a6ab7d0a93"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:30:05.336866 kubelet[2050]: I0412 18:30:05.336815 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ab3189d0-281b-48f8-acf1-87a6ab7d0a93" (UID: "ab3189d0-281b-48f8-acf1-87a6ab7d0a93"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:30:05.336866 kubelet[2050]: I0412 18:30:05.336829 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ab3189d0-281b-48f8-acf1-87a6ab7d0a93" (UID: "ab3189d0-281b-48f8-acf1-87a6ab7d0a93"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:30:05.336866 kubelet[2050]: I0412 18:30:05.336849 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-hostproc" (OuterVolumeSpecName: "hostproc") pod "ab3189d0-281b-48f8-acf1-87a6ab7d0a93" (UID: "ab3189d0-281b-48f8-acf1-87a6ab7d0a93"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:30:05.337038 kubelet[2050]: W0412 18:30:05.336962 2050 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/ab3189d0-281b-48f8-acf1-87a6ab7d0a93/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Apr 12 18:30:05.338762 kubelet[2050]: I0412 18:30:05.338724 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ab3189d0-281b-48f8-acf1-87a6ab7d0a93" (UID: "ab3189d0-281b-48f8-acf1-87a6ab7d0a93"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:30:05.340560 systemd[1]: var-lib-kubelet-pods-ab3189d0\x2d281b\x2d48f8\x2dacf1\x2d87a6ab7d0a93-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfzqqz.mount: Deactivated successfully. Apr 12 18:30:05.340722 systemd[1]: var-lib-kubelet-pods-ab3189d0\x2d281b\x2d48f8\x2dacf1\x2d87a6ab7d0a93-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Apr 12 18:30:05.340808 systemd[1]: var-lib-kubelet-pods-ab3189d0\x2d281b\x2d48f8\x2dacf1\x2d87a6ab7d0a93-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 12 18:30:05.340971 kubelet[2050]: I0412 18:30:05.340949 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-kube-api-access-fzqqz" (OuterVolumeSpecName: "kube-api-access-fzqqz") pod "ab3189d0-281b-48f8-acf1-87a6ab7d0a93" (UID: "ab3189d0-281b-48f8-acf1-87a6ab7d0a93"). InnerVolumeSpecName "kube-api-access-fzqqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:30:05.341054 kubelet[2050]: I0412 18:30:05.340974 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ab3189d0-281b-48f8-acf1-87a6ab7d0a93" (UID: "ab3189d0-281b-48f8-acf1-87a6ab7d0a93"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:30:05.342511 kubelet[2050]: I0412 18:30:05.342483 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "ab3189d0-281b-48f8-acf1-87a6ab7d0a93" (UID: "ab3189d0-281b-48f8-acf1-87a6ab7d0a93"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:30:05.343794 kubelet[2050]: I0412 18:30:05.343772 2050 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ab3189d0-281b-48f8-acf1-87a6ab7d0a93" (UID: "ab3189d0-281b-48f8-acf1-87a6ab7d0a93"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:30:05.437314 kubelet[2050]: I0412 18:30:05.437284 2050 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:05.437314 kubelet[2050]: I0412 18:30:05.437316 2050 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:05.437463 kubelet[2050]: I0412 18:30:05.437327 2050 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:05.437463 kubelet[2050]: I0412 18:30:05.437338 2050 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:05.437463 kubelet[2050]: I0412 18:30:05.437348 2050 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:05.437463 kubelet[2050]: I0412 18:30:05.437358 2050 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:05.437463 kubelet[2050]: I0412 18:30:05.437367 2050 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:05.437463 kubelet[2050]: I0412 18:30:05.437381 2050 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:05.437463 kubelet[2050]: I0412 18:30:05.437390 2050 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:05.437463 kubelet[2050]: I0412 18:30:05.437398 2050 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:05.437649 kubelet[2050]: I0412 18:30:05.437407 2050 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:05.437649 kubelet[2050]: I0412 18:30:05.437416 2050 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:05.437649 kubelet[2050]: I0412 18:30:05.437426 2050 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fzqqz\" (UniqueName: \"kubernetes.io/projected/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-kube-api-access-fzqqz\") 
on node \"localhost\" DevicePath \"\"" Apr 12 18:30:05.437649 kubelet[2050]: I0412 18:30:05.437434 2050 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:05.437649 kubelet[2050]: I0412 18:30:05.437466 2050 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab3189d0-281b-48f8-acf1-87a6ab7d0a93-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 12 18:30:06.023356 kubelet[2050]: I0412 18:30:06.023329 2050 setters.go:548] "Node became not ready" node="localhost" condition={Type:Ready Status:False LastHeartbeatTime:2024-04-12 18:30:06.023265236 +0000 UTC m=+82.129374389 LastTransitionTime:2024-04-12 18:30:06.023265236 +0000 UTC m=+82.129374389 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Apr 12 18:30:06.133886 systemd[1]: var-lib-kubelet-pods-ab3189d0\x2d281b\x2d48f8\x2dacf1\x2d87a6ab7d0a93-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Apr 12 18:30:06.215704 kubelet[2050]: I0412 18:30:06.215679 2050 scope.go:115] "RemoveContainer" containerID="8b0fe5ee14cc1636a20417de4a8a1f110c54ddc9178f983a1d3ea3e47836fd75" Apr 12 18:30:06.217305 env[1175]: time="2024-04-12T18:30:06.216600708Z" level=info msg="RemoveContainer for \"8b0fe5ee14cc1636a20417de4a8a1f110c54ddc9178f983a1d3ea3e47836fd75\"" Apr 12 18:30:06.219340 env[1175]: time="2024-04-12T18:30:06.219306623Z" level=info msg="RemoveContainer for \"8b0fe5ee14cc1636a20417de4a8a1f110c54ddc9178f983a1d3ea3e47836fd75\" returns successfully" Apr 12 18:30:06.243608 kubelet[2050]: I0412 18:30:06.243564 2050 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:30:06.243722 kubelet[2050]: E0412 18:30:06.243630 2050 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ab3189d0-281b-48f8-acf1-87a6ab7d0a93" containerName="mount-cgroup" Apr 12 18:30:06.243722 kubelet[2050]: I0412 18:30:06.243655 2050 memory_manager.go:346] "RemoveStaleState removing state" podUID="ab3189d0-281b-48f8-acf1-87a6ab7d0a93" containerName="mount-cgroup" Apr 12 18:30:06.342377 kubelet[2050]: I0412 18:30:06.342278 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/522ae82d-a211-4045-ad9a-2b7f4b4bf059-lib-modules\") pod \"cilium-js5nf\" (UID: \"522ae82d-a211-4045-ad9a-2b7f4b4bf059\") " pod="kube-system/cilium-js5nf" Apr 12 18:30:06.342377 kubelet[2050]: I0412 18:30:06.342320 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/522ae82d-a211-4045-ad9a-2b7f4b4bf059-cilium-ipsec-secrets\") pod \"cilium-js5nf\" (UID: \"522ae82d-a211-4045-ad9a-2b7f4b4bf059\") " pod="kube-system/cilium-js5nf" Apr 12 18:30:06.343492 kubelet[2050]: I0412 18:30:06.343468 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/522ae82d-a211-4045-ad9a-2b7f4b4bf059-host-proc-sys-net\") pod \"cilium-js5nf\" (UID: \"522ae82d-a211-4045-ad9a-2b7f4b4bf059\") " pod="kube-system/cilium-js5nf" Apr 12 18:30:06.343650 kubelet[2050]: I0412 18:30:06.343627 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/522ae82d-a211-4045-ad9a-2b7f4b4bf059-cilium-config-path\") pod \"cilium-js5nf\" (UID: \"522ae82d-a211-4045-ad9a-2b7f4b4bf059\") " pod="kube-system/cilium-js5nf" Apr 12 18:30:06.343788 kubelet[2050]: I0412 18:30:06.343765 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/522ae82d-a211-4045-ad9a-2b7f4b4bf059-cni-path\") pod \"cilium-js5nf\" (UID: \"522ae82d-a211-4045-ad9a-2b7f4b4bf059\") " pod="kube-system/cilium-js5nf" Apr 12 18:30:06.343918 kubelet[2050]: I0412 18:30:06.343896 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/522ae82d-a211-4045-ad9a-2b7f4b4bf059-cilium-cgroup\") pod \"cilium-js5nf\" (UID: \"522ae82d-a211-4045-ad9a-2b7f4b4bf059\") " pod="kube-system/cilium-js5nf" Apr 12 18:30:06.344030 kubelet[2050]: I0412 18:30:06.344018 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/522ae82d-a211-4045-ad9a-2b7f4b4bf059-xtables-lock\") pod \"cilium-js5nf\" (UID: \"522ae82d-a211-4045-ad9a-2b7f4b4bf059\") " pod="kube-system/cilium-js5nf" Apr 12 18:30:06.344146 kubelet[2050]: I0412 18:30:06.344134 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/522ae82d-a211-4045-ad9a-2b7f4b4bf059-host-proc-sys-kernel\") pod \"cilium-js5nf\" 
(UID: \"522ae82d-a211-4045-ad9a-2b7f4b4bf059\") " pod="kube-system/cilium-js5nf"
Apr 12 18:30:06.344257 kubelet[2050]: I0412 18:30:06.344237 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fprd\" (UniqueName: \"kubernetes.io/projected/522ae82d-a211-4045-ad9a-2b7f4b4bf059-kube-api-access-8fprd\") pod \"cilium-js5nf\" (UID: \"522ae82d-a211-4045-ad9a-2b7f4b4bf059\") " pod="kube-system/cilium-js5nf"
Apr 12 18:30:06.344383 kubelet[2050]: I0412 18:30:06.344356 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/522ae82d-a211-4045-ad9a-2b7f4b4bf059-cilium-run\") pod \"cilium-js5nf\" (UID: \"522ae82d-a211-4045-ad9a-2b7f4b4bf059\") " pod="kube-system/cilium-js5nf"
Apr 12 18:30:06.344580 kubelet[2050]: I0412 18:30:06.344566 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/522ae82d-a211-4045-ad9a-2b7f4b4bf059-hostproc\") pod \"cilium-js5nf\" (UID: \"522ae82d-a211-4045-ad9a-2b7f4b4bf059\") " pod="kube-system/cilium-js5nf"
Apr 12 18:30:06.344684 kubelet[2050]: I0412 18:30:06.344673 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/522ae82d-a211-4045-ad9a-2b7f4b4bf059-etc-cni-netd\") pod \"cilium-js5nf\" (UID: \"522ae82d-a211-4045-ad9a-2b7f4b4bf059\") " pod="kube-system/cilium-js5nf"
Apr 12 18:30:06.344800 kubelet[2050]: I0412 18:30:06.344791 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/522ae82d-a211-4045-ad9a-2b7f4b4bf059-hubble-tls\") pod \"cilium-js5nf\" (UID: \"522ae82d-a211-4045-ad9a-2b7f4b4bf059\") " pod="kube-system/cilium-js5nf"
Apr 12 18:30:06.344902 kubelet[2050]: I0412 18:30:06.344892 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/522ae82d-a211-4045-ad9a-2b7f4b4bf059-bpf-maps\") pod \"cilium-js5nf\" (UID: \"522ae82d-a211-4045-ad9a-2b7f4b4bf059\") " pod="kube-system/cilium-js5nf"
Apr 12 18:30:06.345036 kubelet[2050]: I0412 18:30:06.345025 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/522ae82d-a211-4045-ad9a-2b7f4b4bf059-clustermesh-secrets\") pod \"cilium-js5nf\" (UID: \"522ae82d-a211-4045-ad9a-2b7f4b4bf059\") " pod="kube-system/cilium-js5nf"
Apr 12 18:30:06.546557 kubelet[2050]: E0412 18:30:06.546533 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:30:06.547468 env[1175]: time="2024-04-12T18:30:06.547233643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-js5nf,Uid:522ae82d-a211-4045-ad9a-2b7f4b4bf059,Namespace:kube-system,Attempt:0,}"
Apr 12 18:30:06.558208 env[1175]: time="2024-04-12T18:30:06.558141865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:30:06.558208 env[1175]: time="2024-04-12T18:30:06.558179386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:30:06.558208 env[1175]: time="2024-04-12T18:30:06.558189906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:30:06.558546 env[1175]: time="2024-04-12T18:30:06.558510790Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/18d3144b9618c8fa416b234af89c50537ec5ab7460d6132e9c3291d32026406d pid=4008 runtime=io.containerd.runc.v2
Apr 12 18:30:06.590888 env[1175]: time="2024-04-12T18:30:06.590836650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-js5nf,Uid:522ae82d-a211-4045-ad9a-2b7f4b4bf059,Namespace:kube-system,Attempt:0,} returns sandbox id \"18d3144b9618c8fa416b234af89c50537ec5ab7460d6132e9c3291d32026406d\""
Apr 12 18:30:06.591409 kubelet[2050]: E0412 18:30:06.591386 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:30:06.594911 env[1175]: time="2024-04-12T18:30:06.593287042Z" level=info msg="CreateContainer within sandbox \"18d3144b9618c8fa416b234af89c50537ec5ab7460d6132e9c3291d32026406d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 12 18:30:06.602568 env[1175]: time="2024-04-12T18:30:06.602517122Z" level=info msg="CreateContainer within sandbox \"18d3144b9618c8fa416b234af89c50537ec5ab7460d6132e9c3291d32026406d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"97708737c4ad78fb4a33043d93da0d575549d1f95548fd74fe4ea1b57f1964fe\""
Apr 12 18:30:06.604295 env[1175]: time="2024-04-12T18:30:06.604260784Z" level=info msg="StartContainer for \"97708737c4ad78fb4a33043d93da0d575549d1f95548fd74fe4ea1b57f1964fe\""
Apr 12 18:30:06.659615 env[1175]: time="2024-04-12T18:30:06.659461981Z" level=info msg="StartContainer for \"97708737c4ad78fb4a33043d93da0d575549d1f95548fd74fe4ea1b57f1964fe\" returns successfully"
Apr 12 18:30:06.686348 env[1175]: time="2024-04-12T18:30:06.686301330Z" level=info msg="shim disconnected" id=97708737c4ad78fb4a33043d93da0d575549d1f95548fd74fe4ea1b57f1964fe
Apr 12 18:30:06.686591 env[1175]: time="2024-04-12T18:30:06.686572254Z" level=warning msg="cleaning up after shim disconnected" id=97708737c4ad78fb4a33043d93da0d575549d1f95548fd74fe4ea1b57f1964fe namespace=k8s.io
Apr 12 18:30:06.686656 env[1175]: time="2024-04-12T18:30:06.686643095Z" level=info msg="cleaning up dead shim"
Apr 12 18:30:06.693219 env[1175]: time="2024-04-12T18:30:06.693189380Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:30:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4090 runtime=io.containerd.runc.v2\n"
Apr 12 18:30:07.219763 kubelet[2050]: E0412 18:30:07.219737 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:30:07.227025 env[1175]: time="2024-04-12T18:30:07.226980287Z" level=info msg="CreateContainer within sandbox \"18d3144b9618c8fa416b234af89c50537ec5ab7460d6132e9c3291d32026406d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 12 18:30:07.237762 env[1175]: time="2024-04-12T18:30:07.237704743Z" level=info msg="CreateContainer within sandbox \"18d3144b9618c8fa416b234af89c50537ec5ab7460d6132e9c3291d32026406d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"299d1103459ccfabc1a67aa289862e6831968add3761fc9f265303e91412ccb0\""
Apr 12 18:30:07.238350 env[1175]: time="2024-04-12T18:30:07.238304871Z" level=info msg="StartContainer for \"299d1103459ccfabc1a67aa289862e6831968add3761fc9f265303e91412ccb0\""
Apr 12 18:30:07.293130 env[1175]: time="2024-04-12T18:30:07.293083966Z" level=info msg="StartContainer for \"299d1103459ccfabc1a67aa289862e6831968add3761fc9f265303e91412ccb0\" returns successfully"
Apr 12 18:30:07.314407 env[1175]: time="2024-04-12T18:30:07.314364276Z" level=info msg="shim disconnected" id=299d1103459ccfabc1a67aa289862e6831968add3761fc9f265303e91412ccb0
Apr 12 18:30:07.314407 env[1175]: time="2024-04-12T18:30:07.314405597Z" level=warning msg="cleaning up after shim disconnected" id=299d1103459ccfabc1a67aa289862e6831968add3761fc9f265303e91412ccb0 namespace=k8s.io
Apr 12 18:30:07.314407 env[1175]: time="2024-04-12T18:30:07.314414917Z" level=info msg="cleaning up dead shim"
Apr 12 18:30:07.321129 env[1175]: time="2024-04-12T18:30:07.321082001Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:30:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4153 runtime=io.containerd.runc.v2\n"
Apr 12 18:30:07.981624 kubelet[2050]: I0412 18:30:07.981576 2050 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=ab3189d0-281b-48f8-acf1-87a6ab7d0a93 path="/var/lib/kubelet/pods/ab3189d0-281b-48f8-acf1-87a6ab7d0a93/volumes"
Apr 12 18:30:08.134074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-299d1103459ccfabc1a67aa289862e6831968add3761fc9f265303e91412ccb0-rootfs.mount: Deactivated successfully.
Apr 12 18:30:08.223158 kubelet[2050]: E0412 18:30:08.223133 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:30:08.225877 env[1175]: time="2024-04-12T18:30:08.225839059Z" level=info msg="CreateContainer within sandbox \"18d3144b9618c8fa416b234af89c50537ec5ab7460d6132e9c3291d32026406d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 12 18:30:08.240052 env[1175]: time="2024-04-12T18:30:08.239965394Z" level=info msg="CreateContainer within sandbox \"18d3144b9618c8fa416b234af89c50537ec5ab7460d6132e9c3291d32026406d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"87141387047159a65d00ff421dab914076ecccee85294e8f89a5b8553b81cbc5\""
Apr 12 18:30:08.240895 env[1175]: time="2024-04-12T18:30:08.240866325Z" level=info msg="StartContainer for \"87141387047159a65d00ff421dab914076ecccee85294e8f89a5b8553b81cbc5\""
Apr 12 18:30:08.295609 env[1175]: time="2024-04-12T18:30:08.295561003Z" level=info msg="StartContainer for \"87141387047159a65d00ff421dab914076ecccee85294e8f89a5b8553b81cbc5\" returns successfully"
Apr 12 18:30:08.317185 env[1175]: time="2024-04-12T18:30:08.317138471Z" level=info msg="shim disconnected" id=87141387047159a65d00ff421dab914076ecccee85294e8f89a5b8553b81cbc5
Apr 12 18:30:08.317185 env[1175]: time="2024-04-12T18:30:08.317182751Z" level=warning msg="cleaning up after shim disconnected" id=87141387047159a65d00ff421dab914076ecccee85294e8f89a5b8553b81cbc5 namespace=k8s.io
Apr 12 18:30:08.317406 env[1175]: time="2024-04-12T18:30:08.317191952Z" level=info msg="cleaning up dead shim"
Apr 12 18:30:08.323431 env[1175]: time="2024-04-12T18:30:08.323384108Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:30:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4210 runtime=io.containerd.runc.v2\n"
Apr 12 18:30:09.058153 kubelet[2050]: E0412 18:30:09.058115 2050 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 18:30:09.134150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87141387047159a65d00ff421dab914076ecccee85294e8f89a5b8553b81cbc5-rootfs.mount: Deactivated successfully.
Apr 12 18:30:09.226824 kubelet[2050]: E0412 18:30:09.226762 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:30:09.229757 env[1175]: time="2024-04-12T18:30:09.229280677Z" level=info msg="CreateContainer within sandbox \"18d3144b9618c8fa416b234af89c50537ec5ab7460d6132e9c3291d32026406d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 12 18:30:09.240023 env[1175]: time="2024-04-12T18:30:09.239970046Z" level=info msg="CreateContainer within sandbox \"18d3144b9618c8fa416b234af89c50537ec5ab7460d6132e9c3291d32026406d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"031bbb3fd777efff0e41b0279cf2a8924a458f49f5c671424f33e5199906632a\""
Apr 12 18:30:09.241661 env[1175]: time="2024-04-12T18:30:09.240603214Z" level=info msg="StartContainer for \"031bbb3fd777efff0e41b0279cf2a8924a458f49f5c671424f33e5199906632a\""
Apr 12 18:30:09.302424 env[1175]: time="2024-04-12T18:30:09.302379522Z" level=info msg="StartContainer for \"031bbb3fd777efff0e41b0279cf2a8924a458f49f5c671424f33e5199906632a\" returns successfully"
Apr 12 18:30:09.323677 env[1175]: time="2024-04-12T18:30:09.323570939Z" level=info msg="shim disconnected" id=031bbb3fd777efff0e41b0279cf2a8924a458f49f5c671424f33e5199906632a
Apr 12 18:30:09.323877 env[1175]: time="2024-04-12T18:30:09.323856903Z" level=warning msg="cleaning up after shim disconnected" id=031bbb3fd777efff0e41b0279cf2a8924a458f49f5c671424f33e5199906632a namespace=k8s.io
Apr 12 18:30:09.323952 env[1175]: time="2024-04-12T18:30:09.323939744Z" level=info msg="cleaning up dead shim"
Apr 12 18:30:09.330363 env[1175]: time="2024-04-12T18:30:09.330324461Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:30:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4265 runtime=io.containerd.runc.v2\n"
Apr 12 18:30:10.134177 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-031bbb3fd777efff0e41b0279cf2a8924a458f49f5c671424f33e5199906632a-rootfs.mount: Deactivated successfully.
Apr 12 18:30:10.230584 kubelet[2050]: E0412 18:30:10.230544 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:30:10.233528 env[1175]: time="2024-04-12T18:30:10.233487740Z" level=info msg="CreateContainer within sandbox \"18d3144b9618c8fa416b234af89c50537ec5ab7460d6132e9c3291d32026406d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 12 18:30:10.243831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2098447023.mount: Deactivated successfully.
Apr 12 18:30:10.248270 env[1175]: time="2024-04-12T18:30:10.248231155Z" level=info msg="CreateContainer within sandbox \"18d3144b9618c8fa416b234af89c50537ec5ab7460d6132e9c3291d32026406d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"396637e555a38b1bc562a21645c637dbf8b9482f8a4a4109de52478071aa2cf6\""
Apr 12 18:30:10.249116 env[1175]: time="2024-04-12T18:30:10.249089925Z" level=info msg="StartContainer for \"396637e555a38b1bc562a21645c637dbf8b9482f8a4a4109de52478071aa2cf6\""
Apr 12 18:30:10.313542 env[1175]: time="2024-04-12T18:30:10.313494007Z" level=info msg="StartContainer for \"396637e555a38b1bc562a21645c637dbf8b9482f8a4a4109de52478071aa2cf6\" returns successfully"
Apr 12 18:30:10.542750 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Apr 12 18:30:10.979982 kubelet[2050]: E0412 18:30:10.979896 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:30:11.234470 kubelet[2050]: E0412 18:30:11.234380 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:30:11.247556 kubelet[2050]: I0412 18:30:11.247518 2050 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-js5nf" podStartSLOduration=5.247486282 podCreationTimestamp="2024-04-12 18:30:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:30:11.246872515 +0000 UTC m=+87.352981708" watchObservedRunningTime="2024-04-12 18:30:11.247486282 +0000 UTC m=+87.353595395"
Apr 12 18:30:12.498415 systemd[1]: run-containerd-runc-k8s.io-396637e555a38b1bc562a21645c637dbf8b9482f8a4a4109de52478071aa2cf6-runc.18qUoa.mount: Deactivated successfully.
Apr 12 18:30:12.548255 kubelet[2050]: E0412 18:30:12.548221 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:30:13.226102 systemd-networkd[1059]: lxc_health: Link UP
Apr 12 18:30:13.236467 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Apr 12 18:30:13.238304 systemd-networkd[1059]: lxc_health: Gained carrier
Apr 12 18:30:14.370602 systemd-networkd[1059]: lxc_health: Gained IPv6LL
Apr 12 18:30:14.548836 kubelet[2050]: E0412 18:30:14.548808 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:30:14.618581 systemd[1]: run-containerd-runc-k8s.io-396637e555a38b1bc562a21645c637dbf8b9482f8a4a4109de52478071aa2cf6-runc.j61zRC.mount: Deactivated successfully.
Apr 12 18:30:14.979612 kubelet[2050]: E0412 18:30:14.979585 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:30:14.980452 kubelet[2050]: E0412 18:30:14.980424 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:30:15.244883 kubelet[2050]: E0412 18:30:15.244779 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:30:16.246879 kubelet[2050]: E0412 18:30:16.246845 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:30:16.781733 systemd[1]: run-containerd-runc-k8s.io-396637e555a38b1bc562a21645c637dbf8b9482f8a4a4109de52478071aa2cf6-runc.WfQK46.mount: Deactivated successfully.
Apr 12 18:30:17.980206 kubelet[2050]: E0412 18:30:17.980171 2050 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:30:18.991559 sshd[3844]: pam_unix(sshd:session): session closed for user core
Apr 12 18:30:18.993617 systemd-logind[1163]: Session 24 logged out. Waiting for processes to exit.
Apr 12 18:30:18.993872 systemd[1]: sshd@23-10.0.0.80:22-10.0.0.1:60084.service: Deactivated successfully.
Apr 12 18:30:18.994650 systemd[1]: session-24.scope: Deactivated successfully.
Apr 12 18:30:18.995045 systemd-logind[1163]: Removed session 24.