Feb 9 09:56:00.725583 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 09:56:00.725602 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024
Feb 9 09:56:00.725610 kernel: efi: EFI v2.70 by EDK II
Feb 9 09:56:00.725615 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Feb 9 09:56:00.725620 kernel: random: crng init done
Feb 9 09:56:00.725625 kernel: ACPI: Early table checksum verification disabled
Feb 9 09:56:00.725632 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Feb 9 09:56:00.725638 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 9 09:56:00.725644 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:56:00.725649 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:56:00.725654 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:56:00.725660 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:56:00.725665 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:56:00.725671 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:56:00.725678 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:56:00.725684 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:56:00.725690 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:56:00.725696 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 9 09:56:00.725701 kernel: NUMA: Failed to initialise from firmware
Feb 9 09:56:00.725707 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 09:56:00.725712 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Feb 9 09:56:00.725718 kernel: Zone ranges:
Feb 9 09:56:00.725723 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 09:56:00.725730 kernel: DMA32 empty
Feb 9 09:56:00.725735 kernel: Normal empty
Feb 9 09:56:00.725741 kernel: Movable zone start for each node
Feb 9 09:56:00.725746 kernel: Early memory node ranges
Feb 9 09:56:00.725752 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Feb 9 09:56:00.725758 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Feb 9 09:56:00.725763 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Feb 9 09:56:00.725769 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Feb 9 09:56:00.725774 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Feb 9 09:56:00.725780 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Feb 9 09:56:00.725785 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Feb 9 09:56:00.725791 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 09:56:00.725797 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 9 09:56:00.725803 kernel: psci: probing for conduit method from ACPI.
Feb 9 09:56:00.725809 kernel: psci: PSCIv1.1 detected in firmware.
Feb 9 09:56:00.725814 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 09:56:00.725820 kernel: psci: Trusted OS migration not required
Feb 9 09:56:00.725828 kernel: psci: SMC Calling Convention v1.1
Feb 9 09:56:00.725834 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 9 09:56:00.725841 kernel: ACPI: SRAT not present
Feb 9 09:56:00.725848 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 09:56:00.725854 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 09:56:00.725860 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 9 09:56:00.725866 kernel: Detected PIPT I-cache on CPU0
Feb 9 09:56:00.725872 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 09:56:00.725878 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 09:56:00.725884 kernel: CPU features: detected: Spectre-v4
Feb 9 09:56:00.725890 kernel: CPU features: detected: Spectre-BHB
Feb 9 09:56:00.725897 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 09:56:00.725903 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 09:56:00.725909 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 09:56:00.725915 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 9 09:56:00.725921 kernel: Policy zone: DMA
Feb 9 09:56:00.725928 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:56:00.725936 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 09:56:00.725942 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 09:56:00.725948 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 09:56:00.725955 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 09:56:00.725961 kernel: Memory: 2459152K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113136K reserved, 0K cma-reserved)
Feb 9 09:56:00.725969 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 9 09:56:00.725975 kernel: trace event string verifier disabled
Feb 9 09:56:00.725981 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 09:56:00.725988 kernel: rcu: RCU event tracing is enabled.
Feb 9 09:56:00.725994 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 9 09:56:00.726000 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 09:56:00.726006 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 09:56:00.726012 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
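[Editor's note on the kernel command line above: it documents how Flatcar protects its /usr partition with dm-verity. mount.usr=/dev/mapper/usr mounts /usr from a verity mapping, verity.usr selects the backing USR-A partition by PARTUUID, and verity.usrhash pins the Merkle-tree root hash the initrd must reproduce before the mapping activates. As a minimal illustrative sketch only, such a volume could be checked by hand with cryptsetup's veritysetup; the device path layout and hash offset below are assumptions (Flatcar keeps the hash tree on the same partition as the data), and only the PARTUUID and root hash come from this log:

    # verify <data_device> <hash_device> <root_hash>; same partition for both here,
    # so the start of the hash tree must be given explicitly
    veritysetup verify --hash-offset=<bytes> \
        /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132 \
        /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132 \
        14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
]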
Feb 9 09:56:00.726018 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 9 09:56:00.726024 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 09:56:00.726030 kernel: GICv3: 256 SPIs implemented
Feb 9 09:56:00.726037 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 09:56:00.726043 kernel: GICv3: Distributor has no Range Selector support
Feb 9 09:56:00.726049 kernel: Root IRQ handler: gic_handle_irq
Feb 9 09:56:00.726055 kernel: GICv3: 16 PPIs implemented
Feb 9 09:56:00.726062 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 9 09:56:00.726067 kernel: ACPI: SRAT not present
Feb 9 09:56:00.726073 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 9 09:56:00.726080 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 09:56:00.726086 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 09:56:00.726092 kernel: GICv3: using LPI property table @0x00000000400d0000
Feb 9 09:56:00.726098 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Feb 9 09:56:00.726109 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:56:00.726118 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 09:56:00.726125 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 09:56:00.726131 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 09:56:00.726137 kernel: arm-pv: using stolen time PV
Feb 9 09:56:00.726143 kernel: Console: colour dummy device 80x25
Feb 9 09:56:00.726149 kernel: ACPI: Core revision 20210730
Feb 9 09:56:00.726156 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 09:56:00.726162 kernel: pid_max: default: 32768 minimum: 301
Feb 9 09:56:00.726168 kernel: LSM: Security Framework initializing
Feb 9 09:56:00.726174 kernel: SELinux: Initializing.
Feb 9 09:56:00.726182 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:56:00.726188 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:56:00.726228 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 09:56:00.726235 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 9 09:56:00.726241 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 9 09:56:00.726248 kernel: Remapping and enabling EFI services.
Feb 9 09:56:00.726256 kernel: smp: Bringing up secondary CPUs ...
Feb 9 09:56:00.726262 kernel: Detected PIPT I-cache on CPU1
Feb 9 09:56:00.726269 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 9 09:56:00.726279 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Feb 9 09:56:00.726285 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:56:00.726292 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 09:56:00.726298 kernel: Detected PIPT I-cache on CPU2
Feb 9 09:56:00.726305 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 9 09:56:00.726311 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Feb 9 09:56:00.726318 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:56:00.726324 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 9 09:56:00.726330 kernel: Detected PIPT I-cache on CPU3
Feb 9 09:56:00.726336 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 9 09:56:00.726345 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Feb 9 09:56:00.726351 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:56:00.726357 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 9 09:56:00.726364 kernel: smp: Brought up 1 node, 4 CPUs
Feb 9 09:56:00.726374 kernel: SMP: Total of 4 processors activated.
Feb 9 09:56:00.726382 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 09:56:00.726388 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 09:56:00.726395 kernel: CPU features: detected: Common not Private translations
Feb 9 09:56:00.726401 kernel: CPU features: detected: CRC32 instructions
Feb 9 09:56:00.726408 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 09:56:00.726414 kernel: CPU features: detected: LSE atomic instructions
Feb 9 09:56:00.726421 kernel: CPU features: detected: Privileged Access Never
Feb 9 09:56:00.726428 kernel: CPU features: detected: RAS Extension Support
Feb 9 09:56:00.726435 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 9 09:56:00.726441 kernel: CPU: All CPU(s) started at EL1
Feb 9 09:56:00.726448 kernel: alternatives: patching kernel code
Feb 9 09:56:00.726455 kernel: devtmpfs: initialized
Feb 9 09:56:00.726462 kernel: KASLR enabled
Feb 9 09:56:00.726469 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 09:56:00.726475 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 9 09:56:00.726482 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 09:56:00.726488 kernel: SMBIOS 3.0.0 present.
Feb 9 09:56:00.726495 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Feb 9 09:56:00.726501 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 09:56:00.726508 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 09:56:00.726514 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 09:56:00.726522 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 09:56:00.726529 kernel: audit: initializing netlink subsys (disabled)
Feb 9 09:56:00.726535 kernel: audit: type=2000 audit(0.030:1): state=initialized audit_enabled=0 res=1
Feb 9 09:56:00.726542 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 09:56:00.726548 kernel: cpuidle: using governor menu
Feb 9 09:56:00.726555 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 09:56:00.726561 kernel: ASID allocator initialised with 32768 entries
Feb 9 09:56:00.726568 kernel: ACPI: bus type PCI registered
Feb 9 09:56:00.726574 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 09:56:00.726582 kernel: Serial: AMBA PL011 UART driver
Feb 9 09:56:00.726589 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 09:56:00.726595 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 09:56:00.726602 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 09:56:00.726608 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 09:56:00.726615 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 09:56:00.726621 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 09:56:00.726628 kernel: ACPI: Added _OSI(Module Device)
Feb 9 09:56:00.726635 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 09:56:00.726642 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 09:56:00.726649 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 09:56:00.726655 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 09:56:00.726661 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 09:56:00.726668 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 09:56:00.726675 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 09:56:00.726681 kernel: ACPI: Interpreter enabled
Feb 9 09:56:00.726688 kernel: ACPI: Using GIC for interrupt routing
Feb 9 09:56:00.726694 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 09:56:00.726702 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 09:56:00.726708 kernel: printk: console [ttyAMA0] enabled
Feb 9 09:56:00.726715 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 09:56:00.726830 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 09:56:00.726894 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 09:56:00.726953 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 09:56:00.727011 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 9 09:56:00.727073 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 9 09:56:00.727082 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 9 09:56:00.727089 kernel: PCI host bridge to bus 0000:00
Feb 9 09:56:00.727172 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 9 09:56:00.727248 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 09:56:00.727304 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 9 09:56:00.727371 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 09:56:00.727445 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 9 09:56:00.727512 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 09:56:00.727573 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 9 09:56:00.727634 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 9 09:56:00.727693 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 09:56:00.727752 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 09:56:00.727811 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 9 09:56:00.727872 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 9 09:56:00.727924 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 9 09:56:00.727976 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 09:56:00.728031 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 9 09:56:00.728040 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 09:56:00.728046 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 09:56:00.728053 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 09:56:00.728061 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 09:56:00.728068 kernel: iommu: Default domain type: Translated
Feb 9 09:56:00.728074 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 09:56:00.728081 kernel: vgaarb: loaded
Feb 9 09:56:00.728087 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 09:56:00.728094 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 9 09:56:00.728101 kernel: PTP clock support registered
Feb 9 09:56:00.728113 kernel: Registered efivars operations
Feb 9 09:56:00.728120 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 09:56:00.728126 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 09:56:00.728134 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 09:56:00.728141 kernel: pnp: PnP ACPI init
Feb 9 09:56:00.728220 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 9 09:56:00.728231 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 09:56:00.728237 kernel: NET: Registered PF_INET protocol family
Feb 9 09:56:00.728244 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 09:56:00.728251 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 09:56:00.728257 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 09:56:00.728266 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 09:56:00.728272 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 09:56:00.728279 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 09:56:00.728286 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:56:00.728292 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:56:00.728299 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 09:56:00.728306 kernel: PCI: CLS 0 bytes, default 64
Feb 9 09:56:00.728313 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 9 09:56:00.728320 kernel: kvm [1]: HYP mode not available
Feb 9 09:56:00.728327 kernel: Initialise system trusted keyrings
Feb 9 09:56:00.728333 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 09:56:00.728340 kernel: Key type asymmetric registered
Feb 9 09:56:00.728346 kernel: Asymmetric key parser 'x509' registered
Feb 9 09:56:00.728353 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 09:56:00.728359 kernel: io scheduler mq-deadline registered
Feb 9 09:56:00.728366 kernel: io scheduler kyber registered
Feb 9 09:56:00.728372 kernel: io scheduler bfq registered
Feb 9 09:56:00.728379 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 09:56:00.728386 kernel: ACPI: button: Power Button [PWRB]
Feb 9 09:56:00.728393 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 9 09:56:00.728456 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 9 09:56:00.728465 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 09:56:00.728471 kernel: thunder_xcv, ver 1.0
Feb 9 09:56:00.728477 kernel: thunder_bgx, ver 1.0
Feb 9 09:56:00.728484 kernel: nicpf, ver 1.0
Feb 9 09:56:00.728490 kernel: nicvf, ver 1.0
Feb 9 09:56:00.728559 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 09:56:00.728621 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T09:56:00 UTC (1707472560)
Feb 9 09:56:00.728630 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 09:56:00.728636 kernel: NET: Registered PF_INET6 protocol family
Feb 9 09:56:00.728643 kernel: Segment Routing with IPv6
Feb 9 09:56:00.728650 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 09:56:00.728656 kernel: NET: Registered PF_PACKET protocol family
Feb 9 09:56:00.728663 kernel: Key type dns_resolver registered
Feb 9 09:56:00.728669 kernel: registered taskstats version 1
Feb 9 09:56:00.728677 kernel: Loading compiled-in X.509 certificates
Feb 9 09:56:00.728684 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d'
Feb 9 09:56:00.728691 kernel: Key type .fscrypt registered
Feb 9 09:56:00.728697 kernel: Key type fscrypt-provisioning registered
Feb 9 09:56:00.728703 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 09:56:00.728710 kernel: ima: Allocated hash algorithm: sha1
Feb 9 09:56:00.728716 kernel: ima: No architecture policies found
Feb 9 09:56:00.728723 kernel: Freeing unused kernel memory: 34688K
Feb 9 09:56:00.728729 kernel: Run /init as init process
Feb 9 09:56:00.728737 kernel: with arguments:
Feb 9 09:56:00.728744 kernel: /init
Feb 9 09:56:00.728750 kernel: with environment:
Feb 9 09:56:00.728756 kernel: HOME=/
Feb 9 09:56:00.728762 kernel: TERM=linux
Feb 9 09:56:00.728769 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 09:56:00.728777 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 09:56:00.728787 systemd[1]: Detected virtualization kvm.
Feb 9 09:56:00.728794 systemd[1]: Detected architecture arm64.
Feb 9 09:56:00.728801 systemd[1]: Running in initrd.
Feb 9 09:56:00.728808 systemd[1]: No hostname configured, using default hostname.
Feb 9 09:56:00.728815 systemd[1]: Hostname set to <localhost>.
Feb 9 09:56:00.728822 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 09:56:00.728829 systemd[1]: Queued start job for default target initrd.target.
Feb 9 09:56:00.728836 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 09:56:00.728844 systemd[1]: Reached target cryptsetup.target.
Feb 9 09:56:00.728851 systemd[1]: Reached target paths.target.
Feb 9 09:56:00.728858 systemd[1]: Reached target slices.target.
Feb 9 09:56:00.728864 systemd[1]: Reached target swap.target.
Feb 9 09:56:00.728871 systemd[1]: Reached target timers.target.
Feb 9 09:56:00.728879 systemd[1]: Listening on iscsid.socket.
Feb 9 09:56:00.728885 systemd[1]: Listening on iscsiuio.socket.
Feb 9 09:56:00.728892 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 09:56:00.728900 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 09:56:00.728907 systemd[1]: Listening on systemd-journald.socket.
Feb 9 09:56:00.728914 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 09:56:00.728921 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 09:56:00.728928 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 09:56:00.728935 systemd[1]: Reached target sockets.target.
Feb 9 09:56:00.728942 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 09:56:00.728948 systemd[1]: Finished network-cleanup.service.
Feb 9 09:56:00.728955 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 09:56:00.728963 systemd[1]: Starting systemd-journald.service...
Feb 9 09:56:00.728970 systemd[1]: Starting systemd-modules-load.service...
Feb 9 09:56:00.728977 systemd[1]: Starting systemd-resolved.service...
Feb 9 09:56:00.728984 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 09:56:00.728991 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 09:56:00.728998 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 09:56:00.729005 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 09:56:00.729012 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 09:56:00.729021 systemd-journald[290]: Journal started
Feb 9 09:56:00.729057 systemd-journald[290]: Runtime Journal (/run/log/journal/aade7e915da34808a7d6d383a1f9f62e) is 6.0M, max 48.7M, 42.6M free.
Feb 9 09:56:00.725229 systemd-modules-load[291]: Inserted module 'overlay'
Feb 9 09:56:00.732483 systemd[1]: Started systemd-journald.service.
Feb 9 09:56:00.732499 kernel: audit: type=1130 audit(1707472560.729:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:00.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:00.734923 kernel: audit: type=1130 audit(1707472560.732:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:00.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:00.733502 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 09:56:00.738276 kernel: audit: type=1130 audit(1707472560.735:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:00.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:00.736539 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 09:56:00.744206 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 09:56:00.747370 kernel: Bridge firewalling registered
Feb 9 09:56:00.745602 systemd-modules-load[291]: Inserted module 'br_netfilter'
Feb 9 09:56:00.755615 kernel: SCSI subsystem initialized
Feb 9 09:56:00.756680 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 09:56:00.761877 kernel: audit: type=1130 audit(1707472560.757:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:00.761894 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 09:56:00.761903 kernel: device-mapper: uevent: version 1.0.3
Feb 9 09:56:00.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:00.757381 systemd-resolved[292]: Positive Trust Anchors:
Feb 9 09:56:00.763501 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 09:56:00.757388 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 09:56:00.757416 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 09:56:00.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:00.758157 systemd[1]: Starting dracut-cmdline.service...
Feb 9 09:56:00.761587 systemd-resolved[292]: Defaulting to hostname 'linux'.
Feb 9 09:56:00.773495 kernel: audit: type=1130 audit(1707472560.766:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:00.764017 systemd[1]: Started systemd-resolved.service.
Feb 9 09:56:00.765676 systemd-modules-load[291]: Inserted module 'dm_multipath'
Feb 9 09:56:00.767276 systemd[1]: Finished systemd-modules-load.service.
Feb 9 09:56:00.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:00.777468 dracut-cmdline[307]: dracut-dracut-053
Feb 9 09:56:00.778411 kernel: audit: type=1130 audit(1707472560.774:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:00.775286 systemd[1]: Reached target nss-lookup.target.
Feb 9 09:56:00.778589 systemd[1]: Starting systemd-sysctl.service...
Feb 9 09:56:00.780363 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:56:00.784863 systemd[1]: Finished systemd-sysctl.service.
Feb 9 09:56:00.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:00.788277 kernel: audit: type=1130 audit(1707472560.785:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:00.835213 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 09:56:00.843218 kernel: iscsi: registered transport (tcp)
Feb 9 09:56:00.856226 kernel: iscsi: registered transport (qla4xxx)
Feb 9 09:56:00.856241 kernel: QLogic iSCSI HBA Driver
Feb 9 09:56:00.889551 systemd[1]: Finished dracut-cmdline.service.
Feb 9 09:56:00.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:00.890998 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 09:56:00.893387 kernel: audit: type=1130 audit(1707472560.889:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:00.934213 kernel: raid6: neonx8 gen() 13728 MB/s
Feb 9 09:56:00.951213 kernel: raid6: neonx8 xor() 10774 MB/s
Feb 9 09:56:00.968208 kernel: raid6: neonx4 gen() 13423 MB/s
Feb 9 09:56:00.985206 kernel: raid6: neonx4 xor() 11189 MB/s
Feb 9 09:56:01.002209 kernel: raid6: neonx2 gen() 12859 MB/s
Feb 9 09:56:01.019203 kernel: raid6: neonx2 xor() 10215 MB/s
Feb 9 09:56:01.036204 kernel: raid6: neonx1 gen() 10469 MB/s
Feb 9 09:56:01.053219 kernel: raid6: neonx1 xor() 8748 MB/s
Feb 9 09:56:01.070216 kernel: raid6: int64x8 gen() 6269 MB/s
Feb 9 09:56:01.087218 kernel: raid6: int64x8 xor() 3536 MB/s
Feb 9 09:56:01.104205 kernel: raid6: int64x4 gen() 7223 MB/s
Feb 9 09:56:01.121216 kernel: raid6: int64x4 xor() 3843 MB/s
Feb 9 09:56:01.138209 kernel: raid6: int64x2 gen() 6137 MB/s
Feb 9 09:56:01.155215 kernel: raid6: int64x2 xor() 3312 MB/s
Feb 9 09:56:01.172215 kernel: raid6: int64x1 gen() 5030 MB/s
Feb 9 09:56:01.189409 kernel: raid6: int64x1 xor() 2640 MB/s
Feb 9 09:56:01.189430 kernel: raid6: using algorithm neonx8 gen() 13728 MB/s
Feb 9 09:56:01.189447 kernel: raid6: .... xor() 10774 MB/s, rmw enabled
Feb 9 09:56:01.189462 kernel: raid6: using neon recovery algorithm
Feb 9 09:56:01.200483 kernel: xor: measuring software checksum speed
Feb 9 09:56:01.200508 kernel: 8regs : 17297 MB/sec
Feb 9 09:56:01.201313 kernel: 32regs : 20755 MB/sec
Feb 9 09:56:01.202470 kernel: arm64_neon : 27997 MB/sec
Feb 9 09:56:01.202481 kernel: xor: using function: arm64_neon (27997 MB/sec)
Feb 9 09:56:01.256217 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 09:56:01.266385 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 09:56:01.269221 kernel: audit: type=1130 audit(1707472561.266:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:01.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:01.268000 audit: BPF prog-id=7 op=LOAD
Feb 9 09:56:01.268000 audit: BPF prog-id=8 op=LOAD
Feb 9 09:56:01.269621 systemd[1]: Starting systemd-udevd.service...
Feb 9 09:56:01.285329 systemd-udevd[491]: Using default interface naming scheme 'v252'.
Feb 9 09:56:01.288663 systemd[1]: Started systemd-udevd.service.
Feb 9 09:56:01.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:01.290463 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 09:56:01.300940 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation
Feb 9 09:56:01.325846 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 09:56:01.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:01.327334 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 09:56:01.359658 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 09:56:01.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:01.391195 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 9 09:56:01.393682 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 09:56:01.393717 kernel: GPT:9289727 != 19775487
Feb 9 09:56:01.393727 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 09:56:01.393736 kernel: GPT:9289727 != 19775487
Feb 9 09:56:01.399615 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 09:56:01.400220 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 09:56:01.413625 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 09:56:01.418882 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 09:56:01.421879 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 09:56:01.423759 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (543)
Feb 9 09:56:01.422892 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 09:56:01.427380 systemd[1]: Starting disk-uuid.service...
Feb 9 09:56:01.432319 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 09:56:01.435382 disk-uuid[560]: Primary Header is updated.
Feb 9 09:56:01.435382 disk-uuid[560]: Secondary Entries is updated.
Feb 9 09:56:01.435382 disk-uuid[560]: Secondary Header is updated.
Feb 9 09:56:01.438218 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 09:56:02.449596 disk-uuid[561]: The operation has completed successfully.
Feb 9 09:56:02.450700 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 09:56:02.471432 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 09:56:02.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:02.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:02.471525 systemd[1]: Finished disk-uuid.service.
Feb 9 09:56:02.475517 systemd[1]: Starting verity-setup.service...
Feb 9 09:56:02.493211 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 09:56:02.515144 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 09:56:02.517308 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 09:56:02.519377 systemd[1]: Finished verity-setup.service.
Feb 9 09:56:02.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:02.565210 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 09:56:02.565614 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 09:56:02.566261 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 09:56:02.566940 systemd[1]: Starting ignition-setup.service...
Feb 9 09:56:02.569183 systemd[1]: Starting parse-ip-for-networkd.service...
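[Editor's note on the GPT warnings above: this is the expected first-boot state when a VM image is written to a larger virtual disk. The backup GPT header still sits at the end of the original image (sector 9289727) rather than at the end of the grown disk (sector 19775487), and the disk-uuid.service entries that follow show the OS repairing it itself (the sgdisk-style "Primary Header is updated" output). As a minimal illustrative sketch of the same manual repair, assuming /dev/vda from the log and standard gdisk/parted tooling (these commands are not taken from this boot):

    sgdisk --move-second-header /dev/vda   # relocate the backup GPT header/table to the disk's true end
    parted /dev/vda print                  # parted detects the same mismatch and offers to fix it interactively
]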
Feb 9 09:56:02.575268 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:56:02.575305 kernel: BTRFS info (device vda6): using free space tree
Feb 9 09:56:02.575315 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 09:56:02.583953 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 09:56:02.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:02.589967 systemd[1]: Finished ignition-setup.service.
Feb 9 09:56:02.591571 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 09:56:02.650633 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 09:56:02.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:02.652000 audit: BPF prog-id=9 op=LOAD
Feb 9 09:56:02.652736 systemd[1]: Starting systemd-networkd.service...
Feb 9 09:56:02.680343 systemd-networkd[736]: lo: Link UP
Feb 9 09:56:02.680355 systemd-networkd[736]: lo: Gained carrier
Feb 9 09:56:02.680733 systemd-networkd[736]: Enumeration completed
Feb 9 09:56:02.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:02.680904 systemd-networkd[736]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 09:56:02.681914 systemd[1]: Started systemd-networkd.service.
Feb 9 09:56:02.682856 systemd[1]: Reached target network.target.
Feb 9 09:56:02.684604 systemd[1]: Starting iscsiuio.service...
Feb 9 09:56:02.685737 systemd-networkd[736]: eth0: Link UP
Feb 9 09:56:02.685741 systemd-networkd[736]: eth0: Gained carrier
Feb 9 09:56:02.694015 ignition[649]: Ignition 2.14.0
Feb 9 09:56:02.694026 ignition[649]: Stage: fetch-offline
Feb 9 09:56:02.694067 ignition[649]: no configs at "/usr/lib/ignition/base.d"
Feb 9 09:56:02.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:02.695526 systemd[1]: Started iscsiuio.service.
Feb 9 09:56:02.694077 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 09:56:02.697799 systemd[1]: Starting iscsid.service...
Feb 9 09:56:02.694234 ignition[649]: parsed url from cmdline: ""
Feb 9 09:56:02.694238 ignition[649]: no config URL provided
Feb 9 09:56:02.694243 ignition[649]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 09:56:02.702696 iscsid[743]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:56:02.702696 iscsid[743]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 09:56:02.702696 iscsid[743]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 09:56:02.702696 iscsid[743]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 09:56:02.702696 iscsid[743]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:56:02.702696 iscsid[743]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 09:56:02.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:02.694250 ignition[649]: no config at "/usr/lib/ignition/user.ign"
Feb 9 09:56:02.704243 systemd[1]: Started iscsid.service.
Feb 9 09:56:02.694269 ignition[649]: op(1): [started] loading QEMU firmware config module
Feb 9 09:56:02.709756 systemd[1]: Starting dracut-initqueue.service...
Feb 9 09:56:02.694274 ignition[649]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 9 09:56:02.713284 systemd-networkd[736]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 9 09:56:02.699699 ignition[649]: op(1): [finished] loading QEMU firmware config module
Feb 9 09:56:02.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:02.719712 systemd[1]: Finished dracut-initqueue.service.
Feb 9 09:56:02.720605 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 09:56:02.722284 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 09:56:02.724045 systemd[1]: Reached target remote-fs.target.
Feb 9 09:56:02.726315 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 09:56:02.733859 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 09:56:02.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:02.788173 ignition[649]: parsing config with SHA512: 8b4441e7a6eaf811fce9bb4637ed0f40a2dbf11ec0771d576381227557984ff16157b73b4f650a2333663b1e4b02727c73b5471bf9ec2f2086e7658aa08e5558
Feb 9 09:56:02.830287 unknown[649]: fetched base config from "system"
Feb 9 09:56:02.830299 unknown[649]: fetched user config from "qemu"
Feb 9 09:56:02.831703 ignition[649]: fetch-offline: fetch-offline passed
Feb 9 09:56:02.831784 ignition[649]: Ignition finished successfully
Feb 9 09:56:02.833834 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 09:56:02.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:02.834575 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 9 09:56:02.835340 systemd[1]: Starting ignition-kargs.service...
Feb 9 09:56:02.844117 ignition[758]: Ignition 2.14.0
Feb 9 09:56:02.844127 ignition[758]: Stage: kargs
Feb 9 09:56:02.844263 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Feb 9 09:56:02.844273 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 09:56:02.845422 ignition[758]: kargs: kargs passed
Feb 9 09:56:02.845468 ignition[758]: Ignition finished successfully
Feb 9 09:56:02.848429 systemd[1]: Finished ignition-kargs.service.
Feb 9 09:56:02.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:02.849750 systemd[1]: Starting ignition-disks.service...
Feb 9 09:56:02.855822 ignition[764]: Ignition 2.14.0
Feb 9 09:56:02.855831 ignition[764]: Stage: disks
Feb 9 09:56:02.855918 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Feb 9 09:56:02.855927 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 09:56:02.858206 systemd[1]: Finished ignition-disks.service.
Feb 9 09:56:02.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:02.857068 ignition[764]: disks: disks passed
Feb 9 09:56:02.859486 systemd[1]: Reached target initrd-root-device.target.
Feb 9 09:56:02.857121 ignition[764]: Ignition finished successfully
Feb 9 09:56:02.860421 systemd[1]: Reached target local-fs-pre.target.
Feb 9 09:56:02.861317 systemd[1]: Reached target local-fs.target.
Feb 9 09:56:02.862287 systemd[1]: Reached target sysinit.target.
Feb 9 09:56:02.863177 systemd[1]: Reached target basic.target.
Feb 9 09:56:02.864827 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 09:56:02.875624 systemd-fsck[772]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 9 09:56:02.879066 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 09:56:02.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:02.882380 systemd[1]: Mounting sysroot.mount...
Feb 9 09:56:02.888238 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 09:56:02.888600 systemd[1]: Mounted sysroot.mount.
Feb 9 09:56:02.889313 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 09:56:02.893088 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 09:56:02.894018 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 09:56:02.894054 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 09:56:02.894077 systemd[1]: Reached target ignition-diskful.target.
Feb 9 09:56:02.895673 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 09:56:02.897106 systemd[1]: Starting initrd-setup-root.service...
Feb 9 09:56:02.901363 initrd-setup-root[782]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 09:56:02.907395 initrd-setup-root[791]: cut: /sysroot/etc/group: No such file or directory
Feb 9 09:56:02.911517 initrd-setup-root[799]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 09:56:02.915332 initrd-setup-root[807]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 09:56:02.940806 systemd[1]: Finished initrd-setup-root.service.
Feb 9 09:56:02.942119 systemd[1]: Starting ignition-mount.service...
Feb 9 09:56:02.943328 systemd[1]: Starting sysroot-boot.service...
Feb 9 09:56:02.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:02.948744 bash[824]: umount: /sysroot/usr/share/oem: not mounted.
Feb 9 09:56:02.957176 ignition[826]: INFO : Ignition 2.14.0
Feb 9 09:56:02.957176 ignition[826]: INFO : Stage: mount
Feb 9 09:56:02.958690 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 09:56:02.958690 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 09:56:02.958690 ignition[826]: INFO : mount: mount passed
Feb 9 09:56:02.958690 ignition[826]: INFO : Ignition finished successfully
Feb 9 09:56:02.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:02.959245 systemd[1]: Finished ignition-mount.service.
Feb 9 09:56:02.965818 systemd[1]: Finished sysroot-boot.service.
Feb 9 09:56:02.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:03.526532 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 09:56:03.532218 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (834)
Feb 9 09:56:03.533444 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:56:03.533467 kernel: BTRFS info (device vda6): using free space tree
Feb 9 09:56:03.533477 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 09:56:03.536588 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 09:56:03.538136 systemd[1]: Starting ignition-files.service...
Feb 9 09:56:03.551897 ignition[854]: INFO : Ignition 2.14.0
Feb 9 09:56:03.551897 ignition[854]: INFO : Stage: files
Feb 9 09:56:03.553142 ignition[854]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 09:56:03.553142 ignition[854]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 09:56:03.554883 ignition[854]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 09:56:03.559844 ignition[854]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 09:56:03.559844 ignition[854]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 09:56:03.562960 ignition[854]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 09:56:03.564049 ignition[854]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 09:56:03.564049 ignition[854]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 09:56:03.563679 unknown[854]: wrote ssh authorized keys file for user: core
Feb 9 09:56:03.567222 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 09:56:03.567222 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 9 09:56:03.606794 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 09:56:03.650858 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 09:56:03.650858 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 9 09:56:03.653829 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1
Feb 9 09:56:03.978861 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 09:56:04.291673 ignition[854]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742
Feb 9 09:56:04.293826 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 9 09:56:04.293826 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 9 09:56:04.293826 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1
Feb 9 09:56:04.520813 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 09:56:04.635308 systemd-networkd[736]: eth0: Gained IPv6LL
Feb 9 09:56:04.638692 ignition[854]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c
Feb 9 09:56:04.641041 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 9 09:56:04.641041 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 09:56:04.641041 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 09:56:04.641041 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 09:56:04.641041 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1
Feb 9 09:56:04.711461 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 09:56:04.985573 ignition[854]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db
Feb 9 09:56:04.987739 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 09:56:04.987739 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 09:56:04.987739 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1
Feb 9 09:56:05.007066 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 9 09:56:05.332861 ignition[854]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a
Feb 9 09:56:05.334993 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 09:56:05.334993 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 09:56:05.334993 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1
Feb 9 09:56:05.354422 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 9 09:56:06.009384 ignition[854]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d
Feb 9 09:56:06.011730 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 09:56:06.011730 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 09:56:06.011730 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 09:56:06.011730 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 09:56:06.011730 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 9 09:56:06.232544 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 9 09:56:06.277737 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 09:56:06.277737 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 09:56:06.280624 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 09:56:06.280624 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 09:56:06.280624 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 09:56:06.280624 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 09:56:06.280624 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 09:56:06.280624 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 09:56:06.280624 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 09:56:06.280624 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 09:56:06.280624 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 09:56:06.280624 ignition[854]: INFO : files: op(11): [started] processing unit "prepare-cni-plugins.service"
Feb 9 09:56:06.280624 ignition[854]: INFO : files: op(11): op(12): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 09:56:06.280624 ignition[854]: INFO : files: op(11): op(12): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 09:56:06.280624 ignition[854]: INFO : files: op(11): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 09:56:06.280624 ignition[854]: INFO : files: op(13): [started] processing unit "prepare-critools.service"
Feb 9 09:56:06.280624 ignition[854]: INFO : files: op(13): op(14): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 09:56:06.280624 ignition[854]: INFO : files: op(13): op(14): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 09:56:06.280624 ignition[854]: INFO : files: op(13): [finished] processing unit "prepare-critools.service"
Feb 9 09:56:06.303521 ignition[854]: INFO : files: op(15): [started] processing unit "prepare-helm.service"
Feb 9 09:56:06.303521 ignition[854]: INFO : files: op(15): op(16): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 09:56:06.303521 ignition[854]: INFO : files: op(15): op(16): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 09:56:06.303521 ignition[854]: INFO : files: op(15): [finished] processing unit "prepare-helm.service"
Feb 9 09:56:06.303521 ignition[854]: INFO : files: op(17): [started] processing unit "coreos-metadata.service"
Feb 9 09:56:06.303521 ignition[854]: INFO : files: op(17): op(18): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 09:56:06.303521 ignition[854]: INFO : files: op(17): op(18): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 09:56:06.303521 ignition[854]: INFO : files: op(17): [finished] processing unit "coreos-metadata.service"
Feb 9 09:56:06.303521 ignition[854]: INFO : files: op(19): [started] processing unit "containerd.service"
Feb 9 09:56:06.303521 ignition[854]: INFO : files: op(19): op(1a): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 9 09:56:06.303521 ignition[854]: INFO : files: op(19): op(1a): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 9 09:56:06.303521 ignition[854]: INFO : files: op(19): [finished] processing unit "containerd.service"
Feb 9 09:56:06.303521 ignition[854]: INFO : files: op(1b): [started] setting preset to disabled for "coreos-metadata.service"
Feb 9 09:56:06.303521 ignition[854]: INFO : files: op(1b): op(1c): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 09:56:06.339573 ignition[854]: INFO : files: op(1b): op(1c): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 09:56:06.341634 ignition[854]: INFO : files: op(1b): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 9 09:56:06.341634 ignition[854]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 09:56:06.341634 ignition[854]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 09:56:06.341634 ignition[854]: INFO : files: op(1e): [started] setting preset to enabled for
"prepare-critools.service" Feb 9 09:56:06.341634 ignition[854]: INFO : files: op(1e): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 09:56:06.341634 ignition[854]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-helm.service" Feb 9 09:56:06.341634 ignition[854]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 09:56:06.341634 ignition[854]: INFO : files: createResultFile: createFiles: op(20): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:56:06.341634 ignition[854]: INFO : files: createResultFile: createFiles: op(20): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:56:06.341634 ignition[854]: INFO : files: files passed Feb 9 09:56:06.341634 ignition[854]: INFO : Ignition finished successfully Feb 9 09:56:06.364793 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 9 09:56:06.364813 kernel: audit: type=1130 audit(1707472566.344:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.364828 kernel: audit: type=1130 audit(1707472566.353:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.364838 kernel: audit: type=1131 audit(1707472566.353:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.364847 kernel: audit: type=1130 audit(1707472566.358:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.342679 systemd[1]: Finished ignition-files.service. Feb 9 09:56:06.345166 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 09:56:06.366451 initrd-setup-root-after-ignition[878]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 9 09:56:06.348170 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 09:56:06.368802 initrd-setup-root-after-ignition[881]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 09:56:06.348826 systemd[1]: Starting ignition-quench.service... Feb 9 09:56:06.352427 systemd[1]: ignition-quench.service: Deactivated successfully. 
Feb 9 09:56:06.352509 systemd[1]: Finished ignition-quench.service. Feb 9 09:56:06.353424 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 09:56:06.358474 systemd[1]: Reached target ignition-complete.target. Feb 9 09:56:06.362660 systemd[1]: Starting initrd-parse-etc.service... Feb 9 09:56:06.374680 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 09:56:06.374765 systemd[1]: Finished initrd-parse-etc.service. Feb 9 09:56:06.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.376207 systemd[1]: Reached target initrd-fs.target. Feb 9 09:56:06.381374 kernel: audit: type=1130 audit(1707472566.375:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.381391 kernel: audit: type=1131 audit(1707472566.375:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.380932 systemd[1]: Reached target initrd.target. Feb 9 09:56:06.382027 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 09:56:06.382725 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 09:56:06.392664 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 09:56:06.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.394083 systemd[1]: Starting initrd-cleanup.service... Feb 9 09:56:06.396692 kernel: audit: type=1130 audit(1707472566.393:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.401518 systemd[1]: Stopped target nss-lookup.target. Feb 9 09:56:06.402340 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 09:56:06.403526 systemd[1]: Stopped target timers.target. Feb 9 09:56:06.404642 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 09:56:06.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.404742 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 09:56:06.409019 kernel: audit: type=1131 audit(1707472566.405:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.405823 systemd[1]: Stopped target initrd.target. Feb 9 09:56:06.408661 systemd[1]: Stopped target basic.target. Feb 9 09:56:06.409575 systemd[1]: Stopped target ignition-complete.target. Feb 9 09:56:06.410737 systemd[1]: Stopped target ignition-diskful.target. Feb 9 09:56:06.411878 systemd[1]: Stopped target initrd-root-device.target. Feb 9 09:56:06.413094 systemd[1]: Stopped target remote-fs.target. 
Feb 9 09:56:06.414233 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 09:56:06.415621 systemd[1]: Stopped target sysinit.target. Feb 9 09:56:06.416681 systemd[1]: Stopped target local-fs.target. Feb 9 09:56:06.417770 systemd[1]: Stopped target local-fs-pre.target. Feb 9 09:56:06.418902 systemd[1]: Stopped target swap.target. Feb 9 09:56:06.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.419927 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 09:56:06.424291 kernel: audit: type=1131 audit(1707472566.420:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.420030 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 09:56:06.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.421175 systemd[1]: Stopped target cryptsetup.target. Feb 9 09:56:06.428279 kernel: audit: type=1131 audit(1707472566.424:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.423760 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 09:56:06.423858 systemd[1]: Stopped dracut-initqueue.service. Feb 9 09:56:06.425094 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 09:56:06.425208 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 09:56:06.427980 systemd[1]: Stopped target paths.target. Feb 9 09:56:06.428959 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 09:56:06.430231 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 09:56:06.431406 systemd[1]: Stopped target slices.target. Feb 9 09:56:06.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.432453 systemd[1]: Stopped target sockets.target. Feb 9 09:56:06.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.433505 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 09:56:06.433573 systemd[1]: Closed iscsid.socket. Feb 9 09:56:06.434711 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 09:56:06.434812 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 09:56:06.436061 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 09:56:06.436214 systemd[1]: Stopped ignition-files.service. Feb 9 09:56:06.437857 systemd[1]: Stopping ignition-mount.service... Feb 9 09:56:06.438663 systemd[1]: Stopping iscsiuio.service... 
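
The long run of "Stopped target ..." entries is systemd tearing down the initrd unit graph in reverse dependency order ahead of the switch to the real root. The same ordering can be inspected on a live system; a sketch, assuming systemctl is available in the environment being examined:

    # Reverse dependencies: everything that must stop before this target
    # is itself considered stopped.
    systemctl list-dependencies --reverse initrd-fs.target
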
Feb 9 09:56:06.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.440642 systemd[1]: Stopping sysroot-boot.service... Feb 9 09:56:06.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.441297 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 09:56:06.447928 ignition[894]: INFO : Ignition 2.14.0 Feb 9 09:56:06.447928 ignition[894]: INFO : Stage: umount Feb 9 09:56:06.447928 ignition[894]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 09:56:06.447928 ignition[894]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:56:06.447928 ignition[894]: INFO : umount: umount passed Feb 9 09:56:06.447928 ignition[894]: INFO : Ignition finished successfully Feb 9 09:56:06.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.441426 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 09:56:06.442742 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 09:56:06.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.442834 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 09:56:06.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.445355 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 09:56:06.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.445444 systemd[1]: Stopped iscsiuio.service. Feb 9 09:56:06.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.446557 systemd[1]: Stopped target network.target. Feb 9 09:56:06.447362 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 09:56:06.447398 systemd[1]: Closed iscsiuio.socket. Feb 9 09:56:06.449336 systemd[1]: Stopping systemd-networkd.service... Feb 9 09:56:06.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.450580 systemd[1]: Stopping systemd-resolved.service... 
Feb 9 09:56:06.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.464000 audit: BPF prog-id=6 op=UNLOAD Feb 9 09:56:06.452093 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 09:56:06.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.452180 systemd[1]: Finished initrd-cleanup.service. Feb 9 09:56:06.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.454440 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 09:56:06.454860 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 09:56:06.454926 systemd[1]: Stopped ignition-mount.service. Feb 9 09:56:06.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.455281 systemd-networkd[736]: eth0: DHCPv6 lease lost Feb 9 09:56:06.469000 audit: BPF prog-id=9 op=UNLOAD Feb 9 09:56:06.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.456900 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 09:56:06.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.456977 systemd[1]: Stopped sysroot-boot.service. Feb 9 09:56:06.458082 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 09:56:06.458180 systemd[1]: Stopped systemd-resolved.service. Feb 9 09:56:06.459295 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 09:56:06.459385 systemd[1]: Stopped systemd-networkd.service. Feb 9 09:56:06.460906 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 09:56:06.460935 systemd[1]: Closed systemd-networkd.socket. Feb 9 09:56:06.461983 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 09:56:06.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.462021 systemd[1]: Stopped ignition-disks.service. Feb 9 09:56:06.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.462959 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 09:56:06.462996 systemd[1]: Stopped ignition-kargs.service. Feb 9 09:56:06.463931 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 09:56:06.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.463964 systemd[1]: Stopped ignition-setup.service. 
Feb 9 09:56:06.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.465015 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 09:56:06.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.465054 systemd[1]: Stopped initrd-setup-root.service. Feb 9 09:56:06.466868 systemd[1]: Stopping network-cleanup.service... Feb 9 09:56:06.467727 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 09:56:06.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.467780 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 09:56:06.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.469226 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:56:06.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.469275 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:56:06.470702 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 09:56:06.470745 systemd[1]: Stopped systemd-modules-load.service. Feb 9 09:56:06.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:06.471652 systemd[1]: Stopping systemd-udevd.service... Feb 9 09:56:06.475641 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 09:56:06.478475 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 09:56:06.478596 systemd[1]: Stopped systemd-udevd.service. Feb 9 09:56:06.479853 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 09:56:06.479928 systemd[1]: Stopped network-cleanup.service. Feb 9 09:56:06.480985 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 09:56:06.481017 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 09:56:06.481944 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 09:56:06.481975 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 09:56:06.483095 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 09:56:06.502000 audit: BPF prog-id=5 op=UNLOAD Feb 9 09:56:06.502000 audit: BPF prog-id=4 op=UNLOAD Feb 9 09:56:06.502000 audit: BPF prog-id=3 op=UNLOAD Feb 9 09:56:06.483142 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 09:56:06.484153 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 09:56:06.484187 systemd[1]: Stopped dracut-cmdline.service. 
Feb 9 09:56:06.485236 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 09:56:06.485270 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 09:56:06.505000 audit: BPF prog-id=8 op=UNLOAD Feb 9 09:56:06.505000 audit: BPF prog-id=7 op=UNLOAD Feb 9 09:56:06.487062 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 09:56:06.488169 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 09:56:06.488274 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 09:56:06.489947 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 09:56:06.489983 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 09:56:06.490785 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 09:56:06.490827 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 09:56:06.492640 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 09:56:06.493026 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 09:56:06.493115 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 09:56:06.494504 systemd[1]: Reached target initrd-switch-root.target. Feb 9 09:56:06.496166 systemd[1]: Starting initrd-switch-root.service... Feb 9 09:56:06.501748 systemd[1]: Switching root. Feb 9 09:56:06.519418 iscsid[743]: iscsid shutting down. Feb 9 09:56:06.519907 systemd-journald[290]: Journal stopped Feb 9 09:56:08.647956 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Feb 9 09:56:08.648032 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 09:56:08.648046 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 09:56:08.648056 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 09:56:08.648066 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 09:56:08.648093 kernel: SELinux: policy capability open_perms=1 Feb 9 09:56:08.648104 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 09:56:08.648114 kernel: SELinux: policy capability always_check_network=0 Feb 9 09:56:08.648127 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 09:56:08.648140 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 09:56:08.648150 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 09:56:08.648159 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 09:56:08.648169 systemd[1]: Successfully loaded SELinux policy in 33.942ms. Feb 9 09:56:08.648188 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.890ms. Feb 9 09:56:08.648243 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:56:08.648256 systemd[1]: Detected virtualization kvm. Feb 9 09:56:08.648272 systemd[1]: Detected architecture arm64. Feb 9 09:56:08.648283 systemd[1]: Detected first boot. Feb 9 09:56:08.648293 systemd[1]: Initializing machine ID from VM UUID. Feb 9 09:56:08.648305 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 09:56:08.648315 systemd[1]: Populated /etc with preset unit settings. 
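
"Initializing machine ID from VM UUID" (just below) means systemd seeds /etc/machine-id from the hypervisor-provided UUID on this first boot. A sketch of where that UUID is visible from inside the guest, assuming the firmware exposes SMBIOS/DMI (the EDK II banner earlier in the boot suggests it does):

    # The VM UUID as exposed via DMI; systemd can derive the machine ID
    # from it on the first boot of a virtual machine.
    cat /sys/class/dmi/id/product_uuid
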
Feb 9 09:56:08.648329 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:56:08.648340 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:56:08.648352 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:56:08.648364 systemd[1]: Queued start job for default target multi-user.target. Feb 9 09:56:08.648374 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 09:56:08.648385 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 09:56:08.648396 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 09:56:08.648407 systemd[1]: Created slice system-getty.slice. Feb 9 09:56:08.648417 systemd[1]: Created slice system-modprobe.slice. Feb 9 09:56:08.648428 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 09:56:08.648438 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 09:56:08.648449 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 09:56:08.648459 systemd[1]: Created slice user.slice. Feb 9 09:56:08.648470 systemd[1]: Started systemd-ask-password-console.path. Feb 9 09:56:08.648481 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 09:56:08.648491 systemd[1]: Set up automount boot.automount. Feb 9 09:56:08.648503 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 09:56:08.648513 systemd[1]: Reached target integritysetup.target. Feb 9 09:56:08.648527 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 09:56:08.648537 systemd[1]: Reached target remote-fs.target. Feb 9 09:56:08.648548 systemd[1]: Reached target slices.target. Feb 9 09:56:08.648558 systemd[1]: Reached target swap.target. Feb 9 09:56:08.648569 systemd[1]: Reached target torcx.target. Feb 9 09:56:08.648580 systemd[1]: Reached target veritysetup.target. Feb 9 09:56:08.648591 systemd[1]: Listening on systemd-coredump.socket. Feb 9 09:56:08.648601 systemd[1]: Listening on systemd-initctl.socket. Feb 9 09:56:08.648612 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 09:56:08.648622 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 09:56:08.648634 systemd[1]: Listening on systemd-journald.socket. Feb 9 09:56:08.648644 systemd[1]: Listening on systemd-networkd.socket. Feb 9 09:56:08.648654 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 09:56:08.648665 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 09:56:08.648675 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 09:56:08.648686 systemd[1]: Mounting dev-hugepages.mount... Feb 9 09:56:08.648698 systemd[1]: Mounting dev-mqueue.mount... Feb 9 09:56:08.648709 systemd[1]: Mounting media.mount... Feb 9 09:56:08.648719 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 09:56:08.648729 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 09:56:08.648740 systemd[1]: Mounting tmp.mount... Feb 9 09:56:08.648750 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 09:56:08.648760 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 09:56:08.648771 systemd[1]: Starting kmod-static-nodes.service... Feb 9 09:56:08.648781 systemd[1]: Starting modprobe@configfs.service... 
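
The locksmithd.service warnings name the exact unit-file lines (:8 and :9) still using deprecated directives. A hedged sketch of a drop-in that would silence them without editing the shipped unit; only the directive names come from the log, the values here are hypothetical:

    # Hypothetical override: clear the deprecated settings and use the
    # modern equivalents named in the warnings above.
    mkdir -p /etc/systemd/system/locksmithd.service.d
    cat >/etc/systemd/system/locksmithd.service.d/10-modern-directives.conf <<'EOF'
    [Service]
    CPUShares=
    CPUWeight=100
    MemoryLimit=
    MemoryMax=512M
    EOF
    systemctl daemon-reload
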
Feb 9 09:56:08.648793 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 09:56:08.648803 systemd[1]: Starting modprobe@drm.service... Feb 9 09:56:08.648814 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 09:56:08.648824 systemd[1]: Starting modprobe@fuse.service... Feb 9 09:56:08.648835 systemd[1]: Starting modprobe@loop.service... Feb 9 09:56:08.648845 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 09:56:08.648855 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 09:56:08.648865 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 09:56:08.648876 systemd[1]: Starting systemd-journald.service... Feb 9 09:56:08.648887 kernel: fuse: init (API version 7.34) Feb 9 09:56:08.648896 systemd[1]: Starting systemd-modules-load.service... Feb 9 09:56:08.648906 kernel: loop: module loaded Feb 9 09:56:08.648918 systemd[1]: Starting systemd-network-generator.service... Feb 9 09:56:08.648929 systemd[1]: Starting systemd-remount-fs.service... Feb 9 09:56:08.648939 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 09:56:08.648950 systemd[1]: Mounted dev-hugepages.mount. Feb 9 09:56:08.648960 systemd[1]: Mounted dev-mqueue.mount. Feb 9 09:56:08.648970 systemd[1]: Mounted media.mount. Feb 9 09:56:08.648980 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 09:56:08.648991 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 09:56:08.649003 systemd[1]: Mounted tmp.mount. Feb 9 09:56:08.649013 systemd[1]: Finished kmod-static-nodes.service. Feb 9 09:56:08.649024 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 09:56:08.649034 systemd[1]: Finished modprobe@configfs.service. Feb 9 09:56:08.649045 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 09:56:08.649055 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 09:56:08.649068 systemd-journald[1025]: Journal started Feb 9 09:56:08.649123 systemd-journald[1025]: Runtime Journal (/run/log/journal/aade7e915da34808a7d6d383a1f9f62e) is 6.0M, max 48.7M, 42.6M free. Feb 9 09:56:08.567000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 09:56:08.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:08.643000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 09:56:08.643000 audit[1025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffe4a8c990 a2=4000 a3=1 items=0 ppid=1 pid=1025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:56:08.643000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 09:56:08.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:56:08.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:08.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:08.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:08.650232 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 09:56:08.652497 systemd[1]: Finished modprobe@drm.service. Feb 9 09:56:08.654229 systemd[1]: Started systemd-journald.service. Feb 9 09:56:08.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:08.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:08.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:08.654722 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 09:56:08.654943 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 09:56:08.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:08.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:08.655940 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 09:56:08.656147 systemd[1]: Finished modprobe@fuse.service. Feb 9 09:56:08.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:08.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:08.657146 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 09:56:08.657337 systemd[1]: Finished modprobe@loop.service. Feb 9 09:56:08.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:08.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:56:08.658361 systemd[1]: Finished systemd-modules-load.service. Feb 9 09:56:08.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:08.659528 systemd[1]: Finished systemd-network-generator.service. Feb 9 09:56:08.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:08.660783 systemd[1]: Finished systemd-remount-fs.service. Feb 9 09:56:08.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:08.661937 systemd[1]: Reached target network-pre.target. Feb 9 09:56:08.663874 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 09:56:08.665851 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 09:56:08.666581 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 09:56:08.668170 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 09:56:08.672728 systemd[1]: Starting systemd-journal-flush.service... Feb 9 09:56:08.673611 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 09:56:08.676819 systemd[1]: Starting systemd-random-seed.service... Feb 9 09:56:08.677764 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 09:56:08.678963 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:56:08.683506 systemd-journald[1025]: Time spent on flushing to /var/log/journal/aade7e915da34808a7d6d383a1f9f62e is 13.155ms for 964 entries. Feb 9 09:56:08.683506 systemd-journald[1025]: System Journal (/var/log/journal/aade7e915da34808a7d6d383a1f9f62e) is 8.0M, max 195.6M, 187.6M free. Feb 9 09:56:08.722282 systemd-journald[1025]: Received client request to flush runtime journal. Feb 9 09:56:08.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:08.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:08.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:08.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:08.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:08.681044 systemd[1]: Mounted sys-fs-fuse-connections.mount. 
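
At this point journald is still writing to the volatile runtime journal on /run; the systemd-journal-flush.service started above moves it to persistent storage once /var is writable, which is what the "Received client request to flush runtime journal" entry below records. The equivalent manual steps, as a sketch:

    # Flush the runtime journal from /run/log/journal to /var/log/journal.
    mkdir -p /var/log/journal
    journalctl --flush        # same request the flush service sends to journald
    journalctl --disk-usage   # confirm persistent usage afterwards
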
Feb 9 09:56:08.683305 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 09:56:08.722927 udevadm[1073]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 09:56:08.685684 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 09:56:08.687667 systemd[1]: Starting systemd-udev-settle.service... Feb 9 09:56:08.688814 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 09:56:08.690913 systemd[1]: Starting systemd-sysusers.service... Feb 9 09:56:08.699037 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:56:08.700733 systemd[1]: Finished systemd-random-seed.service. Feb 9 09:56:08.701695 systemd[1]: Reached target first-boot-complete.target. Feb 9 09:56:08.714363 systemd[1]: Finished systemd-sysusers.service. Feb 9 09:56:08.716307 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 09:56:08.723209 systemd[1]: Finished systemd-journal-flush.service. Feb 9 09:56:08.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:08.732958 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 09:56:08.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:09.031710 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 09:56:09.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:09.033850 systemd[1]: Starting systemd-udevd.service... Feb 9 09:56:09.056424 systemd-udevd[1087]: Using default interface naming scheme 'v252'. Feb 9 09:56:09.069609 systemd[1]: Started systemd-udevd.service. Feb 9 09:56:09.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:09.071896 systemd[1]: Starting systemd-networkd.service... Feb 9 09:56:09.091356 systemd[1]: Starting systemd-userdbd.service... Feb 9 09:56:09.094356 systemd[1]: Found device dev-ttyAMA0.device. Feb 9 09:56:09.125082 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 09:56:09.126544 systemd[1]: Started systemd-userdbd.service. Feb 9 09:56:09.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:09.179631 systemd[1]: Finished systemd-udev-settle.service. Feb 9 09:56:09.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:09.181857 systemd[1]: Starting lvm2-activation-early.service... Feb 9 09:56:09.185012 systemd-networkd[1095]: lo: Link UP Feb 9 09:56:09.185021 systemd-networkd[1095]: lo: Gained carrier Feb 9 09:56:09.185371 systemd-networkd[1095]: Enumeration completed Feb 9 09:56:09.185494 systemd[1]: Started systemd-networkd.service. 
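
eth0 is configured just below from /usr/lib/systemd/network/zz-default.network and picks up a DHCPv4 lease. A minimal catch-all .network file with that effect, as a sketch rather than the shipped file's verbatim contents:

    # Match any interface and configure it via DHCP, mirroring the
    # zz-default.network behaviour seen in the log.
    cat >/etc/systemd/network/zz-default.network <<'EOF'
    [Match]
    Name=*

    [Network]
    DHCP=yes
    EOF
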
Feb 9 09:56:09.185774 systemd-networkd[1095]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:56:09.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:09.186960 systemd-networkd[1095]: eth0: Link UP Feb 9 09:56:09.186970 systemd-networkd[1095]: eth0: Gained carrier Feb 9 09:56:09.192503 lvm[1121]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:56:09.205343 systemd-networkd[1095]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 09:56:09.227019 systemd[1]: Finished lvm2-activation-early.service. Feb 9 09:56:09.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:09.227986 systemd[1]: Reached target cryptsetup.target. Feb 9 09:56:09.229693 systemd[1]: Starting lvm2-activation.service... Feb 9 09:56:09.233280 lvm[1123]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:56:09.260904 systemd[1]: Finished lvm2-activation.service. Feb 9 09:56:09.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:09.261622 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:56:09.262250 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 09:56:09.262277 systemd[1]: Reached target local-fs.target. Feb 9 09:56:09.262817 systemd[1]: Reached target machines.target. Feb 9 09:56:09.264447 systemd[1]: Starting ldconfig.service... Feb 9 09:56:09.265248 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 09:56:09.265306 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:56:09.266523 systemd[1]: Starting systemd-boot-update.service... Feb 9 09:56:09.268347 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 09:56:09.270467 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 09:56:09.271254 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:56:09.271337 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:56:09.272558 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 09:56:09.273555 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1126 (bootctl) Feb 9 09:56:09.274791 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 09:56:09.277523 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 09:56:09.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:56:09.287231 systemd-tmpfiles[1129]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 09:56:09.288174 systemd-tmpfiles[1129]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 09:56:09.289698 systemd-tmpfiles[1129]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 09:56:09.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:09.391996 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 09:56:09.412258 systemd-fsck[1135]: fsck.fat 4.2 (2021-01-31) Feb 9 09:56:09.412258 systemd-fsck[1135]: /dev/vda1: 236 files, 113719/258078 clusters Feb 9 09:56:09.414616 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 09:56:09.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:09.467751 ldconfig[1125]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 09:56:09.470615 systemd[1]: Finished ldconfig.service. Feb 9 09:56:09.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:09.635581 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 09:56:09.636992 systemd[1]: Mounting boot.mount... Feb 9 09:56:09.643486 systemd[1]: Mounted boot.mount. Feb 9 09:56:09.650243 systemd[1]: Finished systemd-boot-update.service. Feb 9 09:56:09.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:09.700726 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 09:56:09.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:09.702888 systemd[1]: Starting audit-rules.service... Feb 9 09:56:09.704545 systemd[1]: Starting clean-ca-certificates.service... Feb 9 09:56:09.706283 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 09:56:09.708708 systemd[1]: Starting systemd-resolved.service... Feb 9 09:56:09.710917 systemd[1]: Starting systemd-timesyncd.service... Feb 9 09:56:09.712931 systemd[1]: Starting systemd-update-utmp.service... Feb 9 09:56:09.714373 systemd[1]: Finished clean-ca-certificates.service. Feb 9 09:56:09.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:09.715702 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
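
The "Duplicate line for path" warnings above mean two tmpfiles.d fragments declare the same path, and the later one is ignored. The merged configuration can be dumped to see which fragment wins; a sketch:

    # Print the merged tmpfiles.d configuration; each fragment is preceded by
    # a comment naming its source file, which shows where duplicates originate.
    systemd-tmpfiles --cat-config
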
Feb 9 09:56:09.722000 audit[1156]: SYSTEM_BOOT pid=1156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 09:56:09.725408 systemd[1]: Finished systemd-update-utmp.service. Feb 9 09:56:09.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:09.737665 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 09:56:09.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:09.739705 systemd[1]: Starting systemd-update-done.service... Feb 9 09:56:09.748470 systemd[1]: Finished systemd-update-done.service. Feb 9 09:56:09.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:09.749000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:56:09.749000 audit[1168]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff41170a0 a2=420 a3=0 items=0 ppid=1144 pid=1168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:56:09.749000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 09:56:09.749720 augenrules[1168]: No rules Feb 9 09:56:09.750281 systemd[1]: Finished audit-rules.service. Feb 9 09:56:09.768333 systemd-resolved[1149]: Positive Trust Anchors: Feb 9 09:56:09.768345 systemd-resolved[1149]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:56:09.768371 systemd-resolved[1149]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:56:09.774268 systemd[1]: Started systemd-timesyncd.service. Feb 9 09:56:09.775315 systemd[1]: Reached target time-set.target. Feb 9 09:56:09.345106 systemd-timesyncd[1150]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 9 09:56:09.363975 systemd-journald[1025]: Time jumped backwards, rotating. Feb 9 09:56:09.346019 systemd-timesyncd[1150]: Initial clock synchronization to Fri 2024-02-09 09:56:09.345020 UTC. Feb 9 09:56:09.351908 systemd-resolved[1149]: Defaulting to hostname 'linux'. Feb 9 09:56:09.353305 systemd[1]: Started systemd-resolved.service. Feb 9 09:56:09.354625 systemd[1]: Reached target network.target. Feb 9 09:56:09.355289 systemd[1]: Reached target nss-lookup.target. Feb 9 09:56:09.355845 systemd[1]: Reached target sysinit.target. Feb 9 09:56:09.356557 systemd[1]: Started motdgen.path. 
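
audit PROCTITLE records hex-encode the command line with NUL separators between argv words. Decoding the record below recovers the auditctl invocation that loaded the (empty, per "augenrules[1168]: No rules") rule set; a sketch:

    # Decode the hex proctitle; NULs separate argv entries.
    echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
      | xxd -r -p | tr '\0' ' '; echo
    # -> /sbin/auditctl -R /etc/audit/audit.rules
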
Feb 9 09:56:09.357083 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 09:56:09.358170 systemd[1]: Started logrotate.timer. Feb 9 09:56:09.358891 systemd[1]: Started mdadm.timer. Feb 9 09:56:09.359553 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 09:56:09.360237 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 09:56:09.360262 systemd[1]: Reached target paths.target. Feb 9 09:56:09.361020 systemd[1]: Reached target timers.target. Feb 9 09:56:09.362094 systemd[1]: Listening on dbus.socket. Feb 9 09:56:09.363978 systemd[1]: Starting docker.socket... Feb 9 09:56:09.365355 systemd[1]: Listening on sshd.socket. Feb 9 09:56:09.366006 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:56:09.366303 systemd[1]: Listening on docker.socket. Feb 9 09:56:09.366880 systemd[1]: Reached target sockets.target. Feb 9 09:56:09.367517 systemd[1]: Reached target basic.target. Feb 9 09:56:09.368199 systemd[1]: System is tainted: cgroupsv1 Feb 9 09:56:09.368242 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:56:09.368261 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:56:09.369230 systemd[1]: Starting containerd.service... Feb 9 09:56:09.370756 systemd[1]: Starting dbus.service... Feb 9 09:56:09.372275 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 09:56:09.373921 systemd[1]: Starting extend-filesystems.service... Feb 9 09:56:09.374634 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 09:56:09.375823 systemd[1]: Starting motdgen.service... Feb 9 09:56:09.377557 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 09:56:09.379324 systemd[1]: Starting prepare-critools.service... Feb 9 09:56:09.381111 systemd[1]: Starting prepare-helm.service... Feb 9 09:56:09.382733 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 09:56:09.384517 systemd[1]: Starting sshd-keygen.service... Feb 9 09:56:09.387611 systemd[1]: Starting systemd-logind.service... Feb 9 09:56:09.390693 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:56:09.390763 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 09:56:09.396903 jq[1182]: false Feb 9 09:56:09.391822 systemd[1]: Starting update-engine.service... Feb 9 09:56:09.393461 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 09:56:09.396445 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 09:56:09.403750 jq[1201]: true Feb 9 09:56:09.396661 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 09:56:09.398173 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 09:56:09.398374 systemd[1]: Finished ssh-key-proc-cmdline.service. 
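
"System is tainted: cgroupsv1" above matches the /etc/flatcar-cgroupv1 flag file Ignition wrote in op(6) earlier: the host runs the legacy cgroup hierarchy. One way to confirm which hierarchy is mounted, as a sketch:

    # tmpfs at /sys/fs/cgroup indicates the legacy/hybrid cgroup v1 layout;
    # cgroup2fs indicates the unified v2 hierarchy.
    stat -fc %T /sys/fs/cgroup
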
Feb 9 09:56:09.414775 tar[1204]: ./ Feb 9 09:56:09.414775 tar[1204]: ./macvlan Feb 9 09:56:09.415066 tar[1205]: crictl Feb 9 09:56:09.415212 tar[1207]: linux-arm64/helm Feb 9 09:56:09.421906 jq[1214]: true Feb 9 09:56:09.429463 dbus-daemon[1181]: [system] SELinux support is enabled Feb 9 09:56:09.429625 systemd[1]: Started dbus.service. Feb 9 09:56:09.431905 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 09:56:09.431935 systemd[1]: Reached target system-config.target. Feb 9 09:56:09.432658 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 09:56:09.432678 systemd[1]: Reached target user-config.target. Feb 9 09:56:09.435754 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 09:56:09.436008 systemd[1]: Finished motdgen.service. Feb 9 09:56:09.444210 extend-filesystems[1183]: Found vda Feb 9 09:56:09.445141 extend-filesystems[1183]: Found vda1 Feb 9 09:56:09.445141 extend-filesystems[1183]: Found vda2 Feb 9 09:56:09.445141 extend-filesystems[1183]: Found vda3 Feb 9 09:56:09.445141 extend-filesystems[1183]: Found usr Feb 9 09:56:09.445141 extend-filesystems[1183]: Found vda4 Feb 9 09:56:09.445141 extend-filesystems[1183]: Found vda6 Feb 9 09:56:09.445141 extend-filesystems[1183]: Found vda7 Feb 9 09:56:09.445141 extend-filesystems[1183]: Found vda9 Feb 9 09:56:09.445141 extend-filesystems[1183]: Checking size of /dev/vda9 Feb 9 09:56:09.473086 extend-filesystems[1183]: Resized partition /dev/vda9 Feb 9 09:56:09.475344 extend-filesystems[1248]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 09:56:09.494401 systemd-logind[1197]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 09:56:09.495177 systemd-logind[1197]: New seat seat0. Feb 9 09:56:09.500941 systemd[1]: Started systemd-logind.service. Feb 9 09:56:09.504965 tar[1204]: ./static Feb 9 09:56:09.510636 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 9 09:56:09.510579 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 09:56:09.510774 bash[1244]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:56:09.538462 update_engine[1200]: I0209 09:56:09.538138 1200 main.cc:92] Flatcar Update Engine starting Feb 9 09:56:09.539005 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 9 09:56:09.545559 systemd[1]: Started update-engine.service. Feb 9 09:56:09.545674 update_engine[1200]: I0209 09:56:09.545603 1200 update_check_scheduler.cc:74] Next update check in 5m5s Feb 9 09:56:09.552270 systemd[1]: Started locksmithd.service. Feb 9 09:56:09.554286 extend-filesystems[1248]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 09:56:09.554286 extend-filesystems[1248]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 09:56:09.554286 extend-filesystems[1248]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 9 09:56:09.559065 extend-filesystems[1183]: Resized filesystem in /dev/vda9 Feb 9 09:56:09.555634 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 09:56:09.555861 systemd[1]: Finished extend-filesystems.service. 
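extend-filesystems grew the root filesystem in place: resize2fs took the mounted /dev/vda9 from 553472 to 1864699 4 KiB blocks (roughly 2.1 GiB to 7.1 GiB) without an unmount, since ext4 supports on-line growth. The manual equivalent, assuming the same device layout as logged:

    resize2fs /dev/vda9     # grow mounted ext4 to fill the (already enlarged) partition
    df -h /                 # confirm the new size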
Feb 9 09:56:09.569951 tar[1204]: ./vlan Feb 9 09:56:09.572218 env[1209]: time="2024-02-09T09:56:09.572151761Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 09:56:09.603902 tar[1204]: ./portmap Feb 9 09:56:09.634209 tar[1204]: ./host-local Feb 9 09:56:09.641787 env[1209]: time="2024-02-09T09:56:09.641738441Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 09:56:09.641922 env[1209]: time="2024-02-09T09:56:09.641899961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:56:09.648786 env[1209]: time="2024-02-09T09:56:09.648745241Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:56:09.648786 env[1209]: time="2024-02-09T09:56:09.648784681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:56:09.649096 env[1209]: time="2024-02-09T09:56:09.649067961Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:56:09.649096 env[1209]: time="2024-02-09T09:56:09.649093761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 09:56:09.649164 env[1209]: time="2024-02-09T09:56:09.649107881Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 09:56:09.649164 env[1209]: time="2024-02-09T09:56:09.649118361Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 09:56:09.649217 env[1209]: time="2024-02-09T09:56:09.649198041Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:56:09.649506 env[1209]: time="2024-02-09T09:56:09.649476321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:56:09.649671 env[1209]: time="2024-02-09T09:56:09.649646521Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:56:09.649671 env[1209]: time="2024-02-09T09:56:09.649667881Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 09:56:09.649742 env[1209]: time="2024-02-09T09:56:09.649724121Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 09:56:09.649742 env[1209]: time="2024-02-09T09:56:09.649740801Z" level=info msg="metadata content store policy set" policy=shared Feb 9 09:56:09.653482 env[1209]: time="2024-02-09T09:56:09.653449961Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 09:56:09.653536 env[1209]: time="2024-02-09T09:56:09.653484721Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Feb 9 09:56:09.653536 env[1209]: time="2024-02-09T09:56:09.653498761Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 09:56:09.653536 env[1209]: time="2024-02-09T09:56:09.653528881Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 09:56:09.653589 env[1209]: time="2024-02-09T09:56:09.653543201Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 09:56:09.653589 env[1209]: time="2024-02-09T09:56:09.653556801Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 09:56:09.653589 env[1209]: time="2024-02-09T09:56:09.653569561Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 09:56:09.653929 env[1209]: time="2024-02-09T09:56:09.653905081Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 09:56:09.653966 env[1209]: time="2024-02-09T09:56:09.653929521Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 09:56:09.653966 env[1209]: time="2024-02-09T09:56:09.653943601Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 09:56:09.653966 env[1209]: time="2024-02-09T09:56:09.653954561Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 09:56:09.654051 env[1209]: time="2024-02-09T09:56:09.653967161Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 09:56:09.654118 env[1209]: time="2024-02-09T09:56:09.654096561Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 09:56:09.654195 env[1209]: time="2024-02-09T09:56:09.654177601Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 09:56:09.654472 env[1209]: time="2024-02-09T09:56:09.654449841Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 09:56:09.654517 env[1209]: time="2024-02-09T09:56:09.654478721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 09:56:09.654517 env[1209]: time="2024-02-09T09:56:09.654492161Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 09:56:09.654611 env[1209]: time="2024-02-09T09:56:09.654594361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 09:56:09.654640 env[1209]: time="2024-02-09T09:56:09.654612561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 09:56:09.654640 env[1209]: time="2024-02-09T09:56:09.654626001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 09:56:09.654640 env[1209]: time="2024-02-09T09:56:09.654636441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 09:56:09.654697 env[1209]: time="2024-02-09T09:56:09.654648081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Feb 9 09:56:09.654697 env[1209]: time="2024-02-09T09:56:09.654660241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 09:56:09.654697 env[1209]: time="2024-02-09T09:56:09.654670841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 09:56:09.654697 env[1209]: time="2024-02-09T09:56:09.654682401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 09:56:09.654697 env[1209]: time="2024-02-09T09:56:09.654694601Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 09:56:09.654818 env[1209]: time="2024-02-09T09:56:09.654799841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 09:56:09.654846 env[1209]: time="2024-02-09T09:56:09.654819281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 09:56:09.654846 env[1209]: time="2024-02-09T09:56:09.654831961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 09:56:09.654846 env[1209]: time="2024-02-09T09:56:09.654843841Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 09:56:09.654920 env[1209]: time="2024-02-09T09:56:09.654858441Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 09:56:09.654920 env[1209]: time="2024-02-09T09:56:09.654880641Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 09:56:09.654920 env[1209]: time="2024-02-09T09:56:09.654897041Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 09:56:09.654975 env[1209]: time="2024-02-09T09:56:09.654930201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 09:56:09.655218 env[1209]: time="2024-02-09T09:56:09.655142401Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 09:56:09.658967 env[1209]: time="2024-02-09T09:56:09.655222601Z" level=info msg="Connect containerd service" Feb 9 09:56:09.658967 env[1209]: time="2024-02-09T09:56:09.655255481Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 09:56:09.658967 env[1209]: time="2024-02-09T09:56:09.655834761Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:56:09.658967 env[1209]: time="2024-02-09T09:56:09.656078761Z" level=info msg="Start subscribing containerd event" Feb 9 09:56:09.658967 env[1209]: time="2024-02-09T09:56:09.656112321Z" level=info msg="Start recovering state" Feb 9 09:56:09.658967 env[1209]: time="2024-02-09T09:56:09.656165961Z" level=info msg="Start event monitor" Feb 9 09:56:09.658967 env[1209]: time="2024-02-09T09:56:09.656184081Z" level=info msg="Start snapshots syncer" Feb 9 09:56:09.658967 env[1209]: time="2024-02-09T09:56:09.656193161Z" level=info msg="Start cni network conf syncer for default" Feb 9 09:56:09.658967 env[1209]: time="2024-02-09T09:56:09.656200641Z" level=info msg="Start streaming server" Feb 9 09:56:09.658967 env[1209]: time="2024-02-09T09:56:09.656537521Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 9 09:56:09.658967 env[1209]: time="2024-02-09T09:56:09.656589961Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 09:56:09.658967 env[1209]: time="2024-02-09T09:56:09.656664241Z" level=info msg="containerd successfully booted in 0.109907s" Feb 9 09:56:09.657746 systemd[1]: Started containerd.service. Feb 9 09:56:09.666458 tar[1204]: ./vrf Feb 9 09:56:09.695513 tar[1204]: ./bridge Feb 9 09:56:09.729503 tar[1204]: ./tuning Feb 9 09:56:09.757677 tar[1204]: ./firewall Feb 9 09:56:09.792593 tar[1204]: ./host-device Feb 9 09:56:09.823954 tar[1204]: ./sbr Feb 9 09:56:09.852173 tar[1204]: ./loopback Feb 9 09:56:09.879818 tar[1204]: ./dhcp Feb 9 09:56:09.881885 tar[1207]: linux-arm64/LICENSE Feb 9 09:56:09.881962 tar[1207]: linux-arm64/README.md Feb 9 09:56:09.889154 systemd[1]: Finished prepare-helm.service. Feb 9 09:56:09.900123 systemd-networkd[1095]: eth0: Gained IPv6LL Feb 9 09:56:09.912733 systemd[1]: Finished prepare-critools.service. Feb 9 09:56:09.943699 locksmithd[1251]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 09:56:09.966136 tar[1204]: ./ptp Feb 9 09:56:09.999041 tar[1204]: ./ipvlan Feb 9 09:56:10.031281 tar[1204]: ./bandwidth Feb 9 09:56:10.074157 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 09:56:10.687521 sshd_keygen[1218]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 09:56:10.705072 systemd[1]: Finished sshd-keygen.service. Feb 9 09:56:10.707331 systemd[1]: Starting issuegen.service... Feb 9 09:56:10.711839 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 09:56:10.712062 systemd[1]: Finished issuegen.service. Feb 9 09:56:10.714108 systemd[1]: Starting systemd-user-sessions.service... Feb 9 09:56:10.719477 systemd[1]: Finished systemd-user-sessions.service. Feb 9 09:56:10.721536 systemd[1]: Started getty@tty1.service. Feb 9 09:56:10.723323 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 09:56:10.724193 systemd[1]: Reached target getty.target. Feb 9 09:56:10.724819 systemd[1]: Reached target multi-user.target. Feb 9 09:56:10.726651 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 09:56:10.732522 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 09:56:10.732715 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 09:56:10.733684 systemd[1]: Startup finished in 6.573s (kernel) + 4.595s (userspace) = 11.168s. Feb 9 09:56:12.296092 systemd[1]: Created slice system-sshd.slice. Feb 9 09:56:12.297268 systemd[1]: Started sshd@0-10.0.0.79:22-10.0.0.1:60538.service. Feb 9 09:56:12.337941 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 60538 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:56:12.339799 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:56:12.347150 systemd[1]: Created slice user-500.slice. Feb 9 09:56:12.348045 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 09:56:12.349947 systemd-logind[1197]: New session 1 of user core. Feb 9 09:56:12.355976 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 09:56:12.357116 systemd[1]: Starting user@500.service... Feb 9 09:56:12.359873 (systemd)[1297]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:56:12.417531 systemd[1297]: Queued start job for default target default.target. Feb 9 09:56:12.417725 systemd[1297]: Reached target paths.target. 
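Two entries in this stretch belong together: the CRI plugin's "no network config found in /etc/cni/net.d: cni plugin not initialized" complaint, and prepare-cni-plugins.service finishing its unpack of bridge, host-local, portmap and the rest into /opt/cni/bin (the NetworkPluginBinDir from the config dump). The complaint clears once a network add-on drops a conflist into /etc/cni/net.d; a minimal hand-written example using only the plugins extracted above, with an invented name and subnet:

    {
      "cniVersion": "0.3.1",
      "name": "examplenet",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/24",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }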
Feb 9 09:56:12.417740 systemd[1297]: Reached target sockets.target. Feb 9 09:56:12.417751 systemd[1297]: Reached target timers.target. Feb 9 09:56:12.417773 systemd[1297]: Reached target basic.target. Feb 9 09:56:12.417815 systemd[1297]: Reached target default.target. Feb 9 09:56:12.417835 systemd[1297]: Startup finished in 53ms. Feb 9 09:56:12.418131 systemd[1]: Started user@500.service. Feb 9 09:56:12.419038 systemd[1]: Started session-1.scope. Feb 9 09:56:12.468098 systemd[1]: Started sshd@1-10.0.0.79:22-10.0.0.1:60542.service. Feb 9 09:56:12.501110 sshd[1306]: Accepted publickey for core from 10.0.0.1 port 60542 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:56:12.502271 sshd[1306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:56:12.505513 systemd-logind[1197]: New session 2 of user core. Feb 9 09:56:12.506324 systemd[1]: Started session-2.scope. Feb 9 09:56:12.564040 sshd[1306]: pam_unix(sshd:session): session closed for user core Feb 9 09:56:12.566112 systemd[1]: Started sshd@2-10.0.0.79:22-10.0.0.1:60558.service. Feb 9 09:56:12.569219 systemd[1]: sshd@1-10.0.0.79:22-10.0.0.1:60542.service: Deactivated successfully. Feb 9 09:56:12.570165 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 09:56:12.570521 systemd-logind[1197]: Session 2 logged out. Waiting for processes to exit. Feb 9 09:56:12.571389 systemd-logind[1197]: Removed session 2. Feb 9 09:56:12.600413 sshd[1311]: Accepted publickey for core from 10.0.0.1 port 60558 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:56:12.601497 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:56:12.605028 systemd-logind[1197]: New session 3 of user core. Feb 9 09:56:12.605416 systemd[1]: Started session-3.scope. Feb 9 09:56:12.655496 sshd[1311]: pam_unix(sshd:session): session closed for user core Feb 9 09:56:12.657505 systemd[1]: Started sshd@3-10.0.0.79:22-10.0.0.1:41004.service. Feb 9 09:56:12.658493 systemd[1]: sshd@2-10.0.0.79:22-10.0.0.1:60558.service: Deactivated successfully. Feb 9 09:56:12.659307 systemd-logind[1197]: Session 3 logged out. Waiting for processes to exit. Feb 9 09:56:12.659369 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 09:56:12.660369 systemd-logind[1197]: Removed session 3. Feb 9 09:56:12.691443 sshd[1318]: Accepted publickey for core from 10.0.0.1 port 41004 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:56:12.692478 sshd[1318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:56:12.695448 systemd-logind[1197]: New session 4 of user core. Feb 9 09:56:12.696195 systemd[1]: Started session-4.scope. Feb 9 09:56:12.749033 sshd[1318]: pam_unix(sshd:session): session closed for user core Feb 9 09:56:12.751097 systemd[1]: Started sshd@4-10.0.0.79:22-10.0.0.1:41006.service. Feb 9 09:56:12.751511 systemd[1]: sshd@3-10.0.0.79:22-10.0.0.1:41004.service: Deactivated successfully. Feb 9 09:56:12.752489 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 09:56:12.752567 systemd-logind[1197]: Session 4 logged out. Waiting for processes to exit. Feb 9 09:56:12.755618 systemd-logind[1197]: Removed session 4. 
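Unit names like sshd@0-10.0.0.79:22-10.0.0.1:60538.service come from per-connection socket activation: with Accept=yes on the socket unit, systemd forks one templated sshd@ instance per TCP connection (connection number, then local and remote address) instead of keeping a daemon resident. The pattern, sketched rather than copied from this image:

    [Socket]
    ListenStream=22
    Accept=yes

    [Install]
    WantedBy=sockets.target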
Feb 9 09:56:12.784763 sshd[1325]: Accepted publickey for core from 10.0.0.1 port 41006 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:56:12.785894 sshd[1325]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:56:12.788946 systemd-logind[1197]: New session 5 of user core. Feb 9 09:56:12.790101 systemd[1]: Started session-5.scope. Feb 9 09:56:12.847813 sudo[1331]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 09:56:12.848414 sudo[1331]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:56:13.814695 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:56:13.820276 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:56:13.820551 systemd[1]: Reached target network-online.target. Feb 9 09:56:13.821878 systemd[1]: Starting docker.service... Feb 9 09:56:13.905596 env[1350]: time="2024-02-09T09:56:13.905530041Z" level=info msg="Starting up" Feb 9 09:56:13.907141 env[1350]: time="2024-02-09T09:56:13.907116081Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:56:13.907226 env[1350]: time="2024-02-09T09:56:13.907212721Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:56:13.907288 env[1350]: time="2024-02-09T09:56:13.907271401Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:56:13.907359 env[1350]: time="2024-02-09T09:56:13.907343881Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:56:13.909653 env[1350]: time="2024-02-09T09:56:13.909627121Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:56:13.909736 env[1350]: time="2024-02-09T09:56:13.909722961Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:56:13.909795 env[1350]: time="2024-02-09T09:56:13.909779721Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:56:13.909876 env[1350]: time="2024-02-09T09:56:13.909846841Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:56:14.125088 env[1350]: time="2024-02-09T09:56:14.124977361Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 09:56:14.125088 env[1350]: time="2024-02-09T09:56:14.125016241Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 09:56:14.125250 env[1350]: time="2024-02-09T09:56:14.125131121Z" level=info msg="Loading containers: start." Feb 9 09:56:14.221018 kernel: Initializing XFRM netlink socket Feb 9 09:56:14.243654 env[1350]: time="2024-02-09T09:56:14.243618241Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 09:56:14.293073 systemd-networkd[1095]: docker0: Link UP Feb 9 09:56:14.300928 env[1350]: time="2024-02-09T09:56:14.300890161Z" level=info msg="Loading containers: done." Feb 9 09:56:14.323327 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3143362798-merged.mount: Deactivated successfully. 
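dockerd's note that docker0 defaults to 172.17.0.0/16 and that --bip overrides it applies equally to the daemon config file; the same setting in /etc/docker/daemon.json would be, with an illustrative address:

    {
      "bip": "172.18.0.1/24"
    }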
Feb 9 09:56:14.327552 env[1350]: time="2024-02-09T09:56:14.327514161Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 09:56:14.327698 env[1350]: time="2024-02-09T09:56:14.327670761Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 09:56:14.327796 env[1350]: time="2024-02-09T09:56:14.327768401Z" level=info msg="Daemon has completed initialization" Feb 9 09:56:14.341552 systemd[1]: Started docker.service. Feb 9 09:56:14.351093 env[1350]: time="2024-02-09T09:56:14.350971201Z" level=info msg="API listen on /run/docker.sock" Feb 9 09:56:14.367555 systemd[1]: Reloading. Feb 9 09:56:14.411044 /usr/lib/systemd/system-generators/torcx-generator[1492]: time="2024-02-09T09:56:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:56:14.411073 /usr/lib/systemd/system-generators/torcx-generator[1492]: time="2024-02-09T09:56:14Z" level=info msg="torcx already run" Feb 9 09:56:14.469814 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:56:14.469833 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:56:14.487046 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:56:14.541074 systemd[1]: Started kubelet.service. Feb 9 09:56:14.695451 kubelet[1534]: E0209 09:56:14.695325 1534 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:56:14.697374 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:56:14.697548 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:56:14.875495 env[1209]: time="2024-02-09T09:56:14.875450201Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 09:56:15.531833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1112927926.mount: Deactivated successfully. 
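This kubelet start, and the scheduled retry below, die in flag validation for the same reason: no container runtime endpoint was given. Pairing the error's own suggestion with the containerd socket logged earlier gives the shape of the fix (only this one flag is taken from the log; whatever else sits on the real command line is elided):

    kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock ...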
Feb 9 09:56:17.102341 env[1209]: time="2024-02-09T09:56:17.102281681Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:17.103639 env[1209]: time="2024-02-09T09:56:17.103593601Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:17.105693 env[1209]: time="2024-02-09T09:56:17.105663841Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:17.107311 env[1209]: time="2024-02-09T09:56:17.107281121Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:17.108099 env[1209]: time="2024-02-09T09:56:17.108068521Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 9 09:56:17.118205 env[1209]: time="2024-02-09T09:56:17.118175921Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 09:56:18.991767 env[1209]: time="2024-02-09T09:56:18.991721281Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:18.993392 env[1209]: time="2024-02-09T09:56:18.993360201Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:18.995613 env[1209]: time="2024-02-09T09:56:18.995583001Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:18.997294 env[1209]: time="2024-02-09T09:56:18.997260441Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:18.998202 env[1209]: time="2024-02-09T09:56:18.998171001Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 9 09:56:19.007320 env[1209]: time="2024-02-09T09:56:19.007274441Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 09:56:20.410806 env[1209]: time="2024-02-09T09:56:20.410762641Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:20.413644 env[1209]: time="2024-02-09T09:56:20.413596121Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:20.415045 env[1209]: 
time="2024-02-09T09:56:20.415019841Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:20.419005 env[1209]: time="2024-02-09T09:56:20.418956881Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:20.419883 env[1209]: time="2024-02-09T09:56:20.419855881Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 9 09:56:20.428430 env[1209]: time="2024-02-09T09:56:20.428391921Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 09:56:21.506786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount92833368.mount: Deactivated successfully. Feb 9 09:56:21.835272 env[1209]: time="2024-02-09T09:56:21.835177001Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:21.836645 env[1209]: time="2024-02-09T09:56:21.836615321Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:21.840554 env[1209]: time="2024-02-09T09:56:21.840449281Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:21.842747 env[1209]: time="2024-02-09T09:56:21.842719121Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:21.843355 env[1209]: time="2024-02-09T09:56:21.843319921Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 09:56:21.852188 env[1209]: time="2024-02-09T09:56:21.852157201Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 09:56:22.283785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3097158613.mount: Deactivated successfully. 
Feb 9 09:56:22.288159 env[1209]: time="2024-02-09T09:56:22.288116641Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:22.289853 env[1209]: time="2024-02-09T09:56:22.289826921Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:22.291527 env[1209]: time="2024-02-09T09:56:22.291493521Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:22.293128 env[1209]: time="2024-02-09T09:56:22.293098921Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:22.293695 env[1209]: time="2024-02-09T09:56:22.293654481Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 09:56:22.303202 env[1209]: time="2024-02-09T09:56:22.303173081Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 09:56:23.061831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3653150240.mount: Deactivated successfully. Feb 9 09:56:24.948342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 09:56:24.948517 systemd[1]: Stopped kubelet.service. Feb 9 09:56:24.950094 systemd[1]: Started kubelet.service. Feb 9 09:56:24.951530 env[1209]: time="2024-02-09T09:56:24.951489561Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:24.952406 env[1209]: time="2024-02-09T09:56:24.952378161Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:24.954959 env[1209]: time="2024-02-09T09:56:24.954929841Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:24.956095 env[1209]: time="2024-02-09T09:56:24.956065521Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:24.957388 env[1209]: time="2024-02-09T09:56:24.957348681Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 9 09:56:24.965746 env[1209]: time="2024-02-09T09:56:24.965715641Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 09:56:25.001622 kubelet[1592]: E0209 09:56:25.001580 1592 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:56:25.004555 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:56:25.004702 systemd[1]: kubelet.service: 
Failed with result 'exit-code'. Feb 9 09:56:25.524918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4260192437.mount: Deactivated successfully. Feb 9 09:56:25.954771 env[1209]: time="2024-02-09T09:56:25.954658001Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:25.956245 env[1209]: time="2024-02-09T09:56:25.956216161Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:25.958139 env[1209]: time="2024-02-09T09:56:25.958115201Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:25.962155 env[1209]: time="2024-02-09T09:56:25.962122401Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:25.962799 env[1209]: time="2024-02-09T09:56:25.962766401Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 9 09:56:30.739623 systemd[1]: Stopped kubelet.service. Feb 9 09:56:30.753856 systemd[1]: Reloading. Feb 9 09:56:30.795829 /usr/lib/systemd/system-generators/torcx-generator[1698]: time="2024-02-09T09:56:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:56:30.795859 /usr/lib/systemd/system-generators/torcx-generator[1698]: time="2024-02-09T09:56:30Z" level=info msg="torcx already run" Feb 9 09:56:30.863668 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:56:30.863686 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:56:30.880462 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:56:30.940182 systemd[1]: Started kubelet.service. Feb 9 09:56:30.982193 kubelet[1743]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:56:30.982193 kubelet[1743]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:56:30.982859 kubelet[1743]: I0209 09:56:30.982297 1743 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:56:30.983539 kubelet[1743]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. 
Image garbage collector will get sandbox image information from CRI. Feb 9 09:56:30.983539 kubelet[1743]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:56:31.593156 kubelet[1743]: I0209 09:56:31.593119 1743 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:56:31.593156 kubelet[1743]: I0209 09:56:31.593148 1743 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:56:31.593468 kubelet[1743]: I0209 09:56:31.593443 1743 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:56:31.597599 kubelet[1743]: I0209 09:56:31.597580 1743 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:56:31.598127 kubelet[1743]: E0209 09:56:31.598113 1743 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.79:6443: connect: connection refused Feb 9 09:56:31.599586 kubelet[1743]: W0209 09:56:31.599568 1743 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:56:31.600383 kubelet[1743]: I0209 09:56:31.600364 1743 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 09:56:31.600832 kubelet[1743]: I0209 09:56:31.600810 1743 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:56:31.600892 kubelet[1743]: I0209 09:56:31.600879 1743 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:56:31.600975 kubelet[1743]: I0209 09:56:31.600962 1743 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:56:31.600975 kubelet[1743]: I0209 09:56:31.600973 1743 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:56:31.601169 kubelet[1743]: I0209 09:56:31.601142 1743 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 09:56:31.607260 kubelet[1743]: I0209 09:56:31.607240 1743 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:56:31.607385 kubelet[1743]: I0209 09:56:31.607372 1743 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:56:31.607650 kubelet[1743]: I0209 09:56:31.607638 1743 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:56:31.607712 kubelet[1743]: I0209 09:56:31.607703 1743 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:56:31.608239 kubelet[1743]: W0209 09:56:31.608197 1743 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Feb 9 09:56:31.608298 kubelet[1743]: E0209 09:56:31.608245 1743 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Feb 9 09:56:31.608298 kubelet[1743]: W0209 09:56:31.608269 1743 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Feb 9 09:56:31.608298 kubelet[1743]: E0209 09:56:31.608295 1743 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Feb 9 09:56:31.608752 kubelet[1743]: I0209 09:56:31.608739 1743 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:56:31.609663 kubelet[1743]: W0209 09:56:31.609646 1743 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
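Both deprecation warnings point at the kubelet config file rather than flags. Folding in the two paths this kubelet actually logs, the static pod path and the flexvolume directory it just recreated, such a file would run along these lines (a sketch, not the config shipped on this host):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    staticPodPath: /etc/kubernetes/manifests
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/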
Feb 9 09:56:31.610300 kubelet[1743]: I0209 09:56:31.610281 1743 server.go:1186] "Started kubelet" Feb 9 09:56:31.611257 kubelet[1743]: E0209 09:56:31.610950 1743 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2294686a50299", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 56, 31, 610249881, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 56, 31, 610249881, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.79:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.79:6443: connect: connection refused'(may retry after sleeping) Feb 9 09:56:31.611257 kubelet[1743]: E0209 09:56:31.611182 1743 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:56:31.611257 kubelet[1743]: E0209 09:56:31.611198 1743 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:56:31.611607 kubelet[1743]: I0209 09:56:31.611577 1743 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:56:31.612249 kubelet[1743]: I0209 09:56:31.612215 1743 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:56:31.613229 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 09:56:31.613468 kubelet[1743]: I0209 09:56:31.613444 1743 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:56:31.614092 kubelet[1743]: I0209 09:56:31.613894 1743 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:56:31.614092 kubelet[1743]: I0209 09:56:31.614008 1743 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:56:31.614517 kubelet[1743]: W0209 09:56:31.614474 1743 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Feb 9 09:56:31.614570 kubelet[1743]: E0209 09:56:31.614522 1743 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Feb 9 09:56:31.614633 kubelet[1743]: E0209 09:56:31.614620 1743 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:56:31.615333 kubelet[1743]: E0209 09:56:31.615306 1743 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.79:6443: connect: connection refused Feb 9 09:56:31.644364 kubelet[1743]: I0209 09:56:31.644342 1743 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:56:31.644506 kubelet[1743]: I0209 09:56:31.644492 1743 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:56:31.644596 kubelet[1743]: I0209 09:56:31.644585 1743 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:56:31.646090 kubelet[1743]: I0209 09:56:31.646070 1743 policy_none.go:49] "None policy: Start" Feb 9 09:56:31.646586 kubelet[1743]: I0209 09:56:31.646570 1743 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:56:31.646647 kubelet[1743]: I0209 09:56:31.646603 1743 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:56:31.653949 kubelet[1743]: I0209 09:56:31.653922 1743 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:56:31.654175 kubelet[1743]: I0209 09:56:31.654161 1743 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:56:31.654923 kubelet[1743]: E0209 09:56:31.654905 1743 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 9 09:56:31.666762 kubelet[1743]: I0209 09:56:31.666744 1743 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:56:31.687564 kubelet[1743]: I0209 09:56:31.687542 1743 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 09:56:31.687677 kubelet[1743]: I0209 09:56:31.687666 1743 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:56:31.687744 kubelet[1743]: I0209 09:56:31.687734 1743 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:56:31.687870 kubelet[1743]: E0209 09:56:31.687856 1743 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 09:56:31.688221 kubelet[1743]: W0209 09:56:31.688175 1743 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Feb 9 09:56:31.688289 kubelet[1743]: E0209 09:56:31.688229 1743 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Feb 9 09:56:31.716417 kubelet[1743]: I0209 09:56:31.716393 1743 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:56:31.716814 kubelet[1743]: E0209 09:56:31.716779 1743 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Feb 9 09:56:31.788945 kubelet[1743]: I0209 09:56:31.788911 1743 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:31.789979 kubelet[1743]: I0209 09:56:31.789960 1743 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:31.790641 kubelet[1743]: I0209 09:56:31.790623 1743 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:31.791380 kubelet[1743]: I0209 09:56:31.791363 1743 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.79:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.79:6443: connect: connection refused" Feb 9 09:56:31.791773 kubelet[1743]: I0209 09:56:31.791751 1743 status_manager.go:698] "Failed to get status for pod" podUID=67936de1b8f8461c09888c9ae8e82050 pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.79:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.79:6443: connect: connection refused" Feb 9 09:56:31.792205 kubelet[1743]: I0209 09:56:31.792189 1743 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.79:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.79:6443: connect: connection refused" Feb 9 09:56:31.814746 kubelet[1743]: I0209 09:56:31.814718 1743 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 09:56:31.814806 kubelet[1743]: I0209 09:56:31.814754 1743 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/67936de1b8f8461c09888c9ae8e82050-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"67936de1b8f8461c09888c9ae8e82050\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:56:31.814806 kubelet[1743]: I0209 09:56:31.814777 1743 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/67936de1b8f8461c09888c9ae8e82050-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"67936de1b8f8461c09888c9ae8e82050\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:56:31.814854 kubelet[1743]: I0209 09:56:31.814837 1743 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:56:31.814949 kubelet[1743]: I0209 09:56:31.814914 1743 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:56:31.814991 kubelet[1743]: I0209 09:56:31.814958 1743 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:56:31.815017 kubelet[1743]: I0209 09:56:31.814997 1743 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/67936de1b8f8461c09888c9ae8e82050-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"67936de1b8f8461c09888c9ae8e82050\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:56:31.815045 kubelet[1743]: I0209 09:56:31.815036 1743 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:56:31.815104 kubelet[1743]: I0209 09:56:31.815087 1743 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:56:31.816418 kubelet[1743]: E0209 09:56:31.816392 1743 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.79:6443: connect: connection refused Feb 9 09:56:31.917931 kubelet[1743]: I0209 09:56:31.917859 1743 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:56:31.918849 kubelet[1743]: E0209 09:56:31.918788 1743 kubelet_node_status.go:92] "Unable to register node with API server" err="Post 
\"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Feb 9 09:56:32.097373 kubelet[1743]: E0209 09:56:32.097345 1743 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:32.098698 env[1209]: time="2024-02-09T09:56:32.098394601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:32.099419 kubelet[1743]: E0209 09:56:32.099393 1743 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:32.100242 kubelet[1743]: E0209 09:56:32.100218 1743 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:32.100625 env[1209]: time="2024-02-09T09:56:32.100438321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:67936de1b8f8461c09888c9ae8e82050,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:32.101258 env[1209]: time="2024-02-09T09:56:32.101211681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:32.217594 kubelet[1743]: E0209 09:56:32.217490 1743 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.79:6443: connect: connection refused Feb 9 09:56:32.320181 kubelet[1743]: I0209 09:56:32.320156 1743 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:56:32.320475 kubelet[1743]: E0209 09:56:32.320460 1743 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Feb 9 09:56:32.457438 kubelet[1743]: W0209 09:56:32.457361 1743 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Feb 9 09:56:32.457438 kubelet[1743]: E0209 09:56:32.457427 1743 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Feb 9 09:56:32.563040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount719256228.mount: Deactivated successfully. 
Feb 9 09:56:32.568574 env[1209]: time="2024-02-09T09:56:32.568513041Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:32.569489 env[1209]: time="2024-02-09T09:56:32.569455641Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:32.571046 env[1209]: time="2024-02-09T09:56:32.571009681Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:32.573181 env[1209]: time="2024-02-09T09:56:32.573147561Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:32.574545 env[1209]: time="2024-02-09T09:56:32.574513041Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:32.575210 env[1209]: time="2024-02-09T09:56:32.575177761Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:32.578211 env[1209]: time="2024-02-09T09:56:32.578184121Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:32.580336 env[1209]: time="2024-02-09T09:56:32.580305761Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:32.581883 env[1209]: time="2024-02-09T09:56:32.581858801Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:32.585554 env[1209]: time="2024-02-09T09:56:32.585527161Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:32.586354 env[1209]: time="2024-02-09T09:56:32.586326161Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:32.587831 env[1209]: time="2024-02-09T09:56:32.587803881Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:32.618749 env[1209]: time="2024-02-09T09:56:32.618575441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:32.618749 env[1209]: time="2024-02-09T09:56:32.618610361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:32.618749 env[1209]: time="2024-02-09T09:56:32.618620481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:32.618749 env[1209]: time="2024-02-09T09:56:32.618325121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:32.618749 env[1209]: time="2024-02-09T09:56:32.618523921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:32.618749 env[1209]: time="2024-02-09T09:56:32.618536641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:32.618970 env[1209]: time="2024-02-09T09:56:32.618916241Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a950f720469fcd6d06116fc7b72de2181e9c156cf19be49b31bd608b41db2da pid=1838 runtime=io.containerd.runc.v2 Feb 9 09:56:32.619129 env[1209]: time="2024-02-09T09:56:32.619087801Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de6ff73463a437a052c9ef1e72117bcaa05aa3bb8c2c3f0da481cfc63613f53f pid=1839 runtime=io.containerd.runc.v2 Feb 9 09:56:32.619498 env[1209]: time="2024-02-09T09:56:32.619434161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:32.619531 env[1209]: time="2024-02-09T09:56:32.619512921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:32.619563 env[1209]: time="2024-02-09T09:56:32.619539681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:32.619812 env[1209]: time="2024-02-09T09:56:32.619760361Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d4ea43ccb7e24614aa16ffcd296de99ce00310350490683ccc871abaa95f90dd pid=1835 runtime=io.containerd.runc.v2 Feb 9 09:56:32.714970 env[1209]: time="2024-02-09T09:56:32.714878401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"de6ff73463a437a052c9ef1e72117bcaa05aa3bb8c2c3f0da481cfc63613f53f\"" Feb 9 09:56:32.716339 kubelet[1743]: E0209 09:56:32.716315 1743 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:32.717475 env[1209]: time="2024-02-09T09:56:32.717431721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:67936de1b8f8461c09888c9ae8e82050,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a950f720469fcd6d06116fc7b72de2181e9c156cf19be49b31bd608b41db2da\"" Feb 9 09:56:32.718249 kubelet[1743]: E0209 09:56:32.718154 1743 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:32.721072 env[1209]: time="2024-02-09T09:56:32.719937561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4ea43ccb7e24614aa16ffcd296de99ce00310350490683ccc871abaa95f90dd\"" Feb 9 09:56:32.721072 env[1209]: time="2024-02-09T09:56:32.721057961Z" level=info msg="CreateContainer within sandbox \"de6ff73463a437a052c9ef1e72117bcaa05aa3bb8c2c3f0da481cfc63613f53f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 09:56:32.721155 kubelet[1743]: E0209 09:56:32.720342 1743 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:32.721193 env[1209]: time="2024-02-09T09:56:32.720846481Z" level=info msg="CreateContainer within sandbox \"0a950f720469fcd6d06116fc7b72de2181e9c156cf19be49b31bd608b41db2da\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 09:56:32.722127 env[1209]: time="2024-02-09T09:56:32.722093561Z" level=info msg="CreateContainer within sandbox \"d4ea43ccb7e24614aa16ffcd296de99ce00310350490683ccc871abaa95f90dd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 09:56:32.736826 env[1209]: time="2024-02-09T09:56:32.736781241Z" level=info msg="CreateContainer within sandbox \"de6ff73463a437a052c9ef1e72117bcaa05aa3bb8c2c3f0da481cfc63613f53f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"70f7039e373217854085360af84bba1dc473276b0857d0d96b84c2bcae5f3e43\"" Feb 9 09:56:32.737526 env[1209]: time="2024-02-09T09:56:32.737497041Z" level=info msg="StartContainer for \"70f7039e373217854085360af84bba1dc473276b0857d0d96b84c2bcae5f3e43\"" Feb 9 09:56:32.737599 env[1209]: time="2024-02-09T09:56:32.737564481Z" level=info msg="CreateContainer within sandbox \"0a950f720469fcd6d06116fc7b72de2181e9c156cf19be49b31bd608b41db2da\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"13a6600c52fe552c8cdedfcb5322e7872ecb7dd596efcbe039b0ff43989a03cf\"" Feb 9 09:56:32.737934 env[1209]: time="2024-02-09T09:56:32.737907321Z" level=info msg="StartContainer for \"13a6600c52fe552c8cdedfcb5322e7872ecb7dd596efcbe039b0ff43989a03cf\"" Feb 9 09:56:32.740388 env[1209]: time="2024-02-09T09:56:32.740350601Z" level=info msg="CreateContainer within sandbox \"d4ea43ccb7e24614aa16ffcd296de99ce00310350490683ccc871abaa95f90dd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b8a144d821d5ff55afe85fefefc8297f9c13c598dd96ef76ef298905f03d2310\"" Feb 9 09:56:32.740745 env[1209]: time="2024-02-09T09:56:32.740720681Z" level=info msg="StartContainer for \"b8a144d821d5ff55afe85fefefc8297f9c13c598dd96ef76ef298905f03d2310\"" Feb 9 09:56:32.829965 env[1209]: time="2024-02-09T09:56:32.829174041Z" level=info msg="StartContainer for \"13a6600c52fe552c8cdedfcb5322e7872ecb7dd596efcbe039b0ff43989a03cf\" returns successfully" Feb 9 09:56:32.834895 env[1209]: time="2024-02-09T09:56:32.834847561Z" level=info msg="StartContainer for \"70f7039e373217854085360af84bba1dc473276b0857d0d96b84c2bcae5f3e43\" returns successfully" Feb 9 09:56:32.855932 env[1209]: time="2024-02-09T09:56:32.855596161Z" level=info msg="StartContainer for \"b8a144d821d5ff55afe85fefefc8297f9c13c598dd96ef76ef298905f03d2310\" returns successfully" Feb 9 09:56:32.874277 kubelet[1743]: W0209 09:56:32.872571 1743 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Feb 9 09:56:32.874277 kubelet[1743]: E0209 09:56:32.872628 1743 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Feb 9 09:56:33.018939 kubelet[1743]: E0209 09:56:33.018863 1743 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.79:6443: connect: connection refused Feb 9 09:56:33.122380 kubelet[1743]: I0209 09:56:33.122290 1743 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:56:33.694643 kubelet[1743]: E0209 09:56:33.694615 1743 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:33.697345 kubelet[1743]: E0209 09:56:33.697317 1743 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:33.699467 kubelet[1743]: E0209 09:56:33.699443 1743 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:34.701076 kubelet[1743]: E0209 09:56:34.701047 1743 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:34.701387 kubelet[1743]: E0209 09:56:34.701281 1743 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:34.703999 kubelet[1743]: E0209 09:56:34.701905 1743 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:35.702618 kubelet[1743]: E0209 09:56:35.702575 1743 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:36.270470 kubelet[1743]: E0209 09:56:36.270442 1743 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 9 09:56:36.315297 kubelet[1743]: I0209 09:56:36.315266 1743 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 09:56:36.610807 kubelet[1743]: I0209 09:56:36.610702 1743 apiserver.go:52] "Watching apiserver" Feb 9 09:56:36.614694 kubelet[1743]: I0209 09:56:36.614652 1743 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:56:36.643268 kubelet[1743]: I0209 09:56:36.643225 1743 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:56:36.682359 kubelet[1743]: E0209 09:56:36.682334 1743 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Feb 9 09:56:36.682835 kubelet[1743]: E0209 09:56:36.682819 1743 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:39.022501 systemd[1]: Reloading. Feb 9 09:56:39.064218 /usr/lib/systemd/system-generators/torcx-generator[2080]: time="2024-02-09T09:56:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:56:39.064249 /usr/lib/systemd/system-generators/torcx-generator[2080]: time="2024-02-09T09:56:39Z" level=info msg="torcx already run" Feb 9 09:56:39.125410 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:56:39.125430 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:56:39.142279 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:56:39.207579 systemd[1]: Stopping kubelet.service... Feb 9 09:56:39.207716 kubelet[1743]: I0209 09:56:39.207582 1743 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:56:39.228326 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 09:56:39.228660 systemd[1]: Stopped kubelet.service. Feb 9 09:56:39.230297 systemd[1]: Started kubelet.service. Feb 9 09:56:39.288231 kubelet[2125]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 9 09:56:39.288231 kubelet[2125]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:56:39.288231 kubelet[2125]: I0209 09:56:39.288209 2125 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:56:39.289486 kubelet[2125]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:56:39.289486 kubelet[2125]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:56:39.292416 kubelet[2125]: I0209 09:56:39.292387 2125 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:56:39.292416 kubelet[2125]: I0209 09:56:39.292411 2125 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:56:39.292641 kubelet[2125]: I0209 09:56:39.292615 2125 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:56:39.293875 kubelet[2125]: I0209 09:56:39.293824 2125 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 09:56:39.294806 kubelet[2125]: I0209 09:56:39.294788 2125 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:56:39.296863 kubelet[2125]: W0209 09:56:39.296841 2125 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:56:39.297810 kubelet[2125]: I0209 09:56:39.297784 2125 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 09:56:39.298228 kubelet[2125]: I0209 09:56:39.298210 2125 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:56:39.298295 kubelet[2125]: I0209 09:56:39.298281 2125 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:56:39.298363 kubelet[2125]: I0209 09:56:39.298303 2125 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:56:39.298363 kubelet[2125]: I0209 09:56:39.298319 2125 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:56:39.298363 kubelet[2125]: I0209 09:56:39.298347 2125 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:56:39.302349 kubelet[2125]: I0209 09:56:39.302332 2125 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:56:39.302349 kubelet[2125]: I0209 09:56:39.302354 2125 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:56:39.302445 kubelet[2125]: I0209 09:56:39.302380 2125 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:56:39.304032 kubelet[2125]: I0209 09:56:39.303020 2125 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:56:39.305143 kubelet[2125]: I0209 09:56:39.305122 2125 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:56:39.305708 kubelet[2125]: I0209 09:56:39.305687 2125 server.go:1186] "Started kubelet" Feb 9 09:56:39.306072 kubelet[2125]: I0209 09:56:39.306046 2125 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:56:39.306825 kubelet[2125]: E0209 09:56:39.306678 2125 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:56:39.306825 kubelet[2125]: E0209 09:56:39.306718 2125 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:56:39.307102 kubelet[2125]: I0209 09:56:39.307084 2125 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:56:39.307846 kubelet[2125]: I0209 09:56:39.307817 2125 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:56:39.307918 kubelet[2125]: I0209 09:56:39.307880 2125 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:56:39.308017 kubelet[2125]: E0209 09:56:39.308000 2125 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:56:39.313252 kubelet[2125]: I0209 09:56:39.313229 2125 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:56:39.357947 kubelet[2125]: I0209 09:56:39.357923 2125 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:56:39.369832 kubelet[2125]: I0209 09:56:39.369811 2125 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 09:56:39.370004 kubelet[2125]: I0209 09:56:39.369979 2125 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:56:39.370074 kubelet[2125]: I0209 09:56:39.370063 2125 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:56:39.370206 kubelet[2125]: E0209 09:56:39.370195 2125 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 09:56:39.377843 kubelet[2125]: I0209 09:56:39.377819 2125 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:56:39.377843 kubelet[2125]: I0209 09:56:39.377838 2125 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:56:39.377937 kubelet[2125]: I0209 09:56:39.377853 2125 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:56:39.378011 kubelet[2125]: I0209 09:56:39.377977 2125 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 09:56:39.378064 kubelet[2125]: I0209 09:56:39.378015 2125 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 09:56:39.378064 kubelet[2125]: I0209 09:56:39.378029 2125 policy_none.go:49] "None policy: Start" Feb 9 09:56:39.378555 kubelet[2125]: I0209 09:56:39.378532 2125 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:56:39.378555 kubelet[2125]: I0209 09:56:39.378555 2125 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:56:39.378722 kubelet[2125]: I0209 09:56:39.378688 2125 state_mem.go:75] "Updated machine memory state" Feb 9 09:56:39.379785 kubelet[2125]: I0209 09:56:39.379759 2125 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:56:39.379993 kubelet[2125]: I0209 09:56:39.379965 2125 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:56:39.406625 sudo[2178]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 09:56:39.406842 sudo[2178]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 09:56:39.411686 kubelet[2125]: I0209 09:56:39.411305 2125 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:56:39.419700 kubelet[2125]: I0209 09:56:39.419677 2125 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 9 09:56:39.419869 kubelet[2125]: I0209 09:56:39.419855 2125 kubelet_node_status.go:73] "Successfully registered node" 
node="localhost" Feb 9 09:56:39.470784 kubelet[2125]: I0209 09:56:39.470750 2125 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:39.470979 kubelet[2125]: I0209 09:56:39.470964 2125 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:39.473451 kubelet[2125]: I0209 09:56:39.473426 2125 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:39.515665 kubelet[2125]: I0209 09:56:39.515633 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/67936de1b8f8461c09888c9ae8e82050-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"67936de1b8f8461c09888c9ae8e82050\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:56:39.515756 kubelet[2125]: I0209 09:56:39.515675 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:56:39.515756 kubelet[2125]: I0209 09:56:39.515699 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:56:39.515829 kubelet[2125]: I0209 09:56:39.515747 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:56:39.515829 kubelet[2125]: I0209 09:56:39.515792 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:56:39.515876 kubelet[2125]: I0209 09:56:39.515833 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 09:56:39.515876 kubelet[2125]: I0209 09:56:39.515854 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/67936de1b8f8461c09888c9ae8e82050-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"67936de1b8f8461c09888c9ae8e82050\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:56:39.515876 kubelet[2125]: I0209 09:56:39.515874 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/67936de1b8f8461c09888c9ae8e82050-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"67936de1b8f8461c09888c9ae8e82050\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:56:39.515940 kubelet[2125]: 
I0209 09:56:39.515903 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:56:39.709586 kubelet[2125]: E0209 09:56:39.706907 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:39.775486 kubelet[2125]: E0209 09:56:39.775463 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:39.848807 sudo[2178]: pam_unix(sudo:session): session closed for user root Feb 9 09:56:39.908916 kubelet[2125]: E0209 09:56:39.908787 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:40.304938 kubelet[2125]: I0209 09:56:40.304904 2125 apiserver.go:52] "Watching apiserver" Feb 9 09:56:40.313845 kubelet[2125]: I0209 09:56:40.313822 2125 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:56:40.321967 kubelet[2125]: I0209 09:56:40.321944 2125 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:56:40.378384 kubelet[2125]: E0209 09:56:40.378348 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:40.707566 kubelet[2125]: E0209 09:56:40.707463 2125 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 9 09:56:40.707901 kubelet[2125]: E0209 09:56:40.707879 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:40.906994 kubelet[2125]: E0209 09:56:40.906954 2125 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 9 09:56:40.907277 kubelet[2125]: E0209 09:56:40.907255 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:41.115628 kubelet[2125]: I0209 09:56:41.115590 2125 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.115554121 pod.CreationTimestamp="2024-02-09 09:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:41.115005281 +0000 UTC m=+1.881287041" watchObservedRunningTime="2024-02-09 09:56:41.115554121 +0000 UTC m=+1.881835881" Feb 9 09:56:41.378869 kubelet[2125]: E0209 09:56:41.378770 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:41.379228 kubelet[2125]: E0209 09:56:41.379139 2125 
dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:41.379627 kubelet[2125]: E0209 09:56:41.379597 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:41.908302 kubelet[2125]: I0209 09:56:41.908268 2125 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.908225841 pod.CreationTimestamp="2024-02-09 09:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:41.907968401 +0000 UTC m=+2.674250161" watchObservedRunningTime="2024-02-09 09:56:41.908225841 +0000 UTC m=+2.674507601" Feb 9 09:56:41.908471 kubelet[2125]: I0209 09:56:41.908343 2125 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.908325921 pod.CreationTimestamp="2024-02-09 09:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:41.509055321 +0000 UTC m=+2.275337081" watchObservedRunningTime="2024-02-09 09:56:41.908325921 +0000 UTC m=+2.674607681" Feb 9 09:56:41.970957 sudo[1331]: pam_unix(sudo:session): session closed for user root Feb 9 09:56:41.973270 sshd[1325]: pam_unix(sshd:session): session closed for user core Feb 9 09:56:41.976088 systemd[1]: sshd@4-10.0.0.79:22-10.0.0.1:41006.service: Deactivated successfully. Feb 9 09:56:41.977177 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 09:56:41.977540 systemd-logind[1197]: Session 5 logged out. Waiting for processes to exit. Feb 9 09:56:41.978290 systemd-logind[1197]: Removed session 5. 
Feb 9 09:56:44.412457 kubelet[2125]: E0209 09:56:44.412414 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:45.384869 kubelet[2125]: E0209 09:56:45.384838 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:47.481255 kubelet[2125]: E0209 09:56:47.481226 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:47.534478 kubelet[2125]: E0209 09:56:47.534423 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:48.388147 kubelet[2125]: E0209 09:56:48.388108 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:48.389867 kubelet[2125]: E0209 09:56:48.389850 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:49.388998 kubelet[2125]: E0209 09:56:49.388958 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:52.883858 kubelet[2125]: I0209 09:56:52.883832 2125 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 09:56:52.884237 env[1209]: time="2024-02-09T09:56:52.884170305Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
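Above, the kubelet pushes the node's pod CIDR (192.168.0.0/24) down to the container runtime, and containerd notes that no CNI config exists yet, so it waits for another component (here, Cilium) to drop one in. A small sketch of parsing that CIDR and stepping through candidate pod addresses, standard library only; the next helper is a toy for illustration, not how any particular CNI plugin allocates:

    package main

    import (
    	"fmt"
    	"net"
    )

    // next returns ip+1, carrying across octets; illustrative allocator only.
    func next(ip net.IP) net.IP {
    	out := make(net.IP, len(ip))
    	copy(out, ip)
    	for i := len(out) - 1; i >= 0; i-- {
    		out[i]++
    		if out[i] != 0 {
    			break
    		}
    	}
    	return out
    }

    func main() {
    	_, cidr, err := net.ParseCIDR("192.168.0.0/24") // the PodCIDR from the log
    	if err != nil {
    		panic(err)
    	}
    	ip := cidr.IP
    	for i := 0; i < 3; i++ {
    		ip = next(ip)
    		fmt.Println(ip) // 192.168.0.1, .2, .3 — candidate pod addresses on this node
    	}
    }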
Feb 9 09:56:52.884658 kubelet[2125]: I0209 09:56:52.884639 2125 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 09:56:53.779598 kubelet[2125]: I0209 09:56:53.779504 2125 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:53.788247 kubelet[2125]: I0209 09:56:53.788214 2125 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:53.816472 kubelet[2125]: I0209 09:56:53.816440 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39c9e22e-f0e3-453a-8efc-d58c7fdd38d5-lib-modules\") pod \"kube-proxy-pwp2n\" (UID: \"39c9e22e-f0e3-453a-8efc-d58c7fdd38d5\") " pod="kube-system/kube-proxy-pwp2n" Feb 9 09:56:53.816472 kubelet[2125]: I0209 09:56:53.816483 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-cilium-run\") pod \"cilium-z4rgt\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " pod="kube-system/cilium-z4rgt" Feb 9 09:56:53.816641 kubelet[2125]: I0209 09:56:53.816507 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-etc-cni-netd\") pod \"cilium-z4rgt\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " pod="kube-system/cilium-z4rgt" Feb 9 09:56:53.816641 kubelet[2125]: I0209 09:56:53.816528 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47d3f462-acfb-483f-b53f-b918c2d86b4a-clustermesh-secrets\") pod \"cilium-z4rgt\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " pod="kube-system/cilium-z4rgt" Feb 9 09:56:53.816641 kubelet[2125]: I0209 09:56:53.816549 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86cwv\" (UniqueName: \"kubernetes.io/projected/39c9e22e-f0e3-453a-8efc-d58c7fdd38d5-kube-api-access-86cwv\") pod \"kube-proxy-pwp2n\" (UID: \"39c9e22e-f0e3-453a-8efc-d58c7fdd38d5\") " pod="kube-system/kube-proxy-pwp2n" Feb 9 09:56:53.816641 kubelet[2125]: I0209 09:56:53.816571 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39c9e22e-f0e3-453a-8efc-d58c7fdd38d5-xtables-lock\") pod \"kube-proxy-pwp2n\" (UID: \"39c9e22e-f0e3-453a-8efc-d58c7fdd38d5\") " pod="kube-system/kube-proxy-pwp2n" Feb 9 09:56:53.816641 kubelet[2125]: I0209 09:56:53.816590 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47d3f462-acfb-483f-b53f-b918c2d86b4a-cilium-config-path\") pod \"cilium-z4rgt\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " pod="kube-system/cilium-z4rgt" Feb 9 09:56:53.816762 kubelet[2125]: I0209 09:56:53.816610 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-bpf-maps\") pod \"cilium-z4rgt\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " pod="kube-system/cilium-z4rgt" Feb 9 09:56:53.816762 kubelet[2125]: I0209 09:56:53.816630 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-lib-modules\") pod \"cilium-z4rgt\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " pod="kube-system/cilium-z4rgt" Feb 9 09:56:53.816762 kubelet[2125]: I0209 09:56:53.816649 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-host-proc-sys-net\") pod \"cilium-z4rgt\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " pod="kube-system/cilium-z4rgt" Feb 9 09:56:53.816762 kubelet[2125]: I0209 09:56:53.816667 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/47d3f462-acfb-483f-b53f-b918c2d86b4a-hubble-tls\") pod \"cilium-z4rgt\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " pod="kube-system/cilium-z4rgt" Feb 9 09:56:53.816762 kubelet[2125]: I0209 09:56:53.816688 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-cilium-cgroup\") pod \"cilium-z4rgt\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " pod="kube-system/cilium-z4rgt" Feb 9 09:56:53.816762 kubelet[2125]: I0209 09:56:53.816708 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-cni-path\") pod \"cilium-z4rgt\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " pod="kube-system/cilium-z4rgt" Feb 9 09:56:53.816890 kubelet[2125]: I0209 09:56:53.816728 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-xtables-lock\") pod \"cilium-z4rgt\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " pod="kube-system/cilium-z4rgt" Feb 9 09:56:53.816890 kubelet[2125]: I0209 09:56:53.816748 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-host-proc-sys-kernel\") pod \"cilium-z4rgt\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " pod="kube-system/cilium-z4rgt" Feb 9 09:56:53.816890 kubelet[2125]: I0209 09:56:53.816778 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/39c9e22e-f0e3-453a-8efc-d58c7fdd38d5-kube-proxy\") pod \"kube-proxy-pwp2n\" (UID: \"39c9e22e-f0e3-453a-8efc-d58c7fdd38d5\") " pod="kube-system/kube-proxy-pwp2n" Feb 9 09:56:53.816890 kubelet[2125]: I0209 09:56:53.816799 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-hostproc\") pod \"cilium-z4rgt\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " pod="kube-system/cilium-z4rgt" Feb 9 09:56:53.816890 kubelet[2125]: I0209 09:56:53.816822 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbpjd\" (UniqueName: \"kubernetes.io/projected/47d3f462-acfb-483f-b53f-b918c2d86b4a-kube-api-access-lbpjd\") pod \"cilium-z4rgt\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " 
pod="kube-system/cilium-z4rgt" Feb 9 09:56:53.876715 kubelet[2125]: I0209 09:56:53.876658 2125 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:53.918147 kubelet[2125]: I0209 09:56:53.918060 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s66l5\" (UniqueName: \"kubernetes.io/projected/36fe4c5c-672b-4a1c-a61e-0f299f1d6041-kube-api-access-s66l5\") pod \"cilium-operator-f59cbd8c6-zvgp5\" (UID: \"36fe4c5c-672b-4a1c-a61e-0f299f1d6041\") " pod="kube-system/cilium-operator-f59cbd8c6-zvgp5" Feb 9 09:56:53.918147 kubelet[2125]: I0209 09:56:53.918126 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36fe4c5c-672b-4a1c-a61e-0f299f1d6041-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-zvgp5\" (UID: \"36fe4c5c-672b-4a1c-a61e-0f299f1d6041\") " pod="kube-system/cilium-operator-f59cbd8c6-zvgp5" Feb 9 09:56:54.390811 kubelet[2125]: E0209 09:56:54.390771 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:54.394561 env[1209]: time="2024-02-09T09:56:54.392186843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z4rgt,Uid:47d3f462-acfb-483f-b53f-b918c2d86b4a,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:54.405010 env[1209]: time="2024-02-09T09:56:54.404916631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:54.405010 env[1209]: time="2024-02-09T09:56:54.404962830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:54.405010 env[1209]: time="2024-02-09T09:56:54.404976150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:54.405308 env[1209]: time="2024-02-09T09:56:54.405242666Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d8b58d6f87a7460901ad75fb309a4e4636becccf8a4248cb4f55e620d897c25 pid=2242 runtime=io.containerd.runc.v2 Feb 9 09:56:54.446289 env[1209]: time="2024-02-09T09:56:54.446248985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z4rgt,Uid:47d3f462-acfb-483f-b53f-b918c2d86b4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d8b58d6f87a7460901ad75fb309a4e4636becccf8a4248cb4f55e620d897c25\"" Feb 9 09:56:54.447154 kubelet[2125]: E0209 09:56:54.447134 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:54.450103 env[1209]: time="2024-02-09T09:56:54.450033962Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 09:56:54.568002 update_engine[1200]: I0209 09:56:54.567931 1200 update_attempter.cc:509] Updating boot flags... 
Feb 9 09:56:54.682854 kubelet[2125]: E0209 09:56:54.682512 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:54.683169 env[1209]: time="2024-02-09T09:56:54.683090731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pwp2n,Uid:39c9e22e-f0e3-453a-8efc-d58c7fdd38d5,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:54.693642 env[1209]: time="2024-02-09T09:56:54.693569397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:54.693642 env[1209]: time="2024-02-09T09:56:54.693611996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:54.693642 env[1209]: time="2024-02-09T09:56:54.693622316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:54.693801 env[1209]: time="2024-02-09T09:56:54.693748954Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/46ac0fe917df1141cf87dc3d32c9396bcd0216e787c37df591fd8f8a5ecc4676 pid=2299 runtime=io.containerd.runc.v2 Feb 9 09:56:54.735391 env[1209]: time="2024-02-09T09:56:54.735345223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pwp2n,Uid:39c9e22e-f0e3-453a-8efc-d58c7fdd38d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"46ac0fe917df1141cf87dc3d32c9396bcd0216e787c37df591fd8f8a5ecc4676\"" Feb 9 09:56:54.736226 kubelet[2125]: E0209 09:56:54.735888 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:54.737632 env[1209]: time="2024-02-09T09:56:54.737601186Z" level=info msg="CreateContainer within sandbox \"46ac0fe917df1141cf87dc3d32c9396bcd0216e787c37df591fd8f8a5ecc4676\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 09:56:54.752934 env[1209]: time="2024-02-09T09:56:54.752887692Z" level=info msg="CreateContainer within sandbox \"46ac0fe917df1141cf87dc3d32c9396bcd0216e787c37df591fd8f8a5ecc4676\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3e9aeb860d5dba43634b7a85e50d0fefc9682142b1795bf3a83c8cd45c9bdff5\"" Feb 9 09:56:54.754149 env[1209]: time="2024-02-09T09:56:54.754119311Z" level=info msg="StartContainer for \"3e9aeb860d5dba43634b7a85e50d0fefc9682142b1795bf3a83c8cd45c9bdff5\"" Feb 9 09:56:54.784886 kubelet[2125]: E0209 09:56:54.781345 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:54.785037 env[1209]: time="2024-02-09T09:56:54.781930449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-zvgp5,Uid:36fe4c5c-672b-4a1c-a61e-0f299f1d6041,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:54.794976 env[1209]: time="2024-02-09T09:56:54.794621959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:54.794976 env[1209]: time="2024-02-09T09:56:54.794662598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:54.794976 env[1209]: time="2024-02-09T09:56:54.794687278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:54.794976 env[1209]: time="2024-02-09T09:56:54.794840635Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/819e7763dbdcfce856d13f2abb8167c22ebefe41547f4dd0d32de65b5c2fe9a0 pid=2364 runtime=io.containerd.runc.v2 Feb 9 09:56:54.814299 env[1209]: time="2024-02-09T09:56:54.810614693Z" level=info msg="StartContainer for \"3e9aeb860d5dba43634b7a85e50d0fefc9682142b1795bf3a83c8cd45c9bdff5\" returns successfully" Feb 9 09:56:54.870934 env[1209]: time="2024-02-09T09:56:54.868546091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-zvgp5,Uid:36fe4c5c-672b-4a1c-a61e-0f299f1d6041,Namespace:kube-system,Attempt:0,} returns sandbox id \"819e7763dbdcfce856d13f2abb8167c22ebefe41547f4dd0d32de65b5c2fe9a0\"" Feb 9 09:56:54.871064 kubelet[2125]: E0209 09:56:54.869224 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:55.401160 kubelet[2125]: E0209 09:56:55.401115 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:55.407618 kubelet[2125]: I0209 09:56:55.407575 2125 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pwp2n" podStartSLOduration=2.407543761 pod.CreationTimestamp="2024-02-09 09:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:55.407098728 +0000 UTC m=+16.173380488" watchObservedRunningTime="2024-02-09 09:56:55.407543761 +0000 UTC m=+16.173825481" Feb 9 09:56:56.402169 kubelet[2125]: E0209 09:56:56.402125 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:56:57.889464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount184455048.mount: Deactivated successfully. 
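Across this stretch every workload follows the same three CRI calls: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer returns successfully. A minimal sketch of that sequence against the v1 CRI gRPC API; it assumes the k8s.io/cri-api module and the containerd socket, with the configs trimmed to metadata (the kube-proxy image reference is an illustrative guess, not taken from the log):

    package main

    import (
    	"context"
    	"fmt"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx := context.Background()

    	// Step 1: sandbox, named as in the log's RunPodSandbox entry.
    	sandboxCfg := &runtimeapi.PodSandboxConfig{
    		Metadata: &runtimeapi.PodSandboxMetadata{
    			Name: "kube-proxy-pwp2n", Namespace: "kube-system",
    			Uid: "39c9e22e-f0e3-453a-8efc-d58c7fdd38d5", Attempt: 0,
    		},
    	}
    	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
    	if err != nil {
    		panic(err)
    	}

    	// Step 2: container inside that sandbox.
    	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId: sb.PodSandboxId,
    		Config: &runtimeapi.ContainerConfig{
    			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
    			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.26.5"},
    		},
    		SandboxConfig: sandboxCfg,
    	})
    	if err != nil {
    		panic(err)
    	}

    	// Step 3: start it, mirroring the log's "StartContainer ... returns successfully".
    	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId})
    	fmt.Println("started:", c.ContainerId, "err:", err)
    }

The "starting signal loop" lines are the runc v2 shim coming up for each of these sandboxes; one shim process (the pid in the log) supervises each.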
Feb 9 09:57:00.130926 env[1209]: time="2024-02-09T09:57:00.130876074Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:00.132017 env[1209]: time="2024-02-09T09:57:00.131961782Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:00.133386 env[1209]: time="2024-02-09T09:57:00.133347526Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:00.134169 env[1209]: time="2024-02-09T09:57:00.134137717Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 09:57:00.134977 env[1209]: time="2024-02-09T09:57:00.134849509Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 09:57:00.138493 env[1209]: time="2024-02-09T09:57:00.138457469Z" level=info msg="CreateContainer within sandbox \"6d8b58d6f87a7460901ad75fb309a4e4636becccf8a4248cb4f55e620d897c25\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:57:00.148488 env[1209]: time="2024-02-09T09:57:00.148452796Z" level=info msg="CreateContainer within sandbox \"6d8b58d6f87a7460901ad75fb309a4e4636becccf8a4248cb4f55e620d897c25\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"194649c85017a8ad81f086614c7919ba8f8d364bc12995639f0b326077358033\"" Feb 9 09:57:00.149091 env[1209]: time="2024-02-09T09:57:00.148995470Z" level=info msg="StartContainer for \"194649c85017a8ad81f086614c7919ba8f8d364bc12995639f0b326077358033\"" Feb 9 09:57:00.346840 env[1209]: time="2024-02-09T09:57:00.346757720Z" level=info msg="StartContainer for \"194649c85017a8ad81f086614c7919ba8f8d364bc12995639f0b326077358033\" returns successfully" Feb 9 09:57:00.374392 env[1209]: time="2024-02-09T09:57:00.374344609Z" level=info msg="shim disconnected" id=194649c85017a8ad81f086614c7919ba8f8d364bc12995639f0b326077358033 Feb 9 09:57:00.374392 env[1209]: time="2024-02-09T09:57:00.374391128Z" level=warning msg="cleaning up after shim disconnected" id=194649c85017a8ad81f086614c7919ba8f8d364bc12995639f0b326077358033 namespace=k8s.io Feb 9 09:57:00.374392 env[1209]: time="2024-02-09T09:57:00.374401288Z" level=info msg="cleaning up dead shim" Feb 9 09:57:00.381467 env[1209]: time="2024-02-09T09:57:00.381362610Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:57:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2564 runtime=io.containerd.runc.v2\n" Feb 9 09:57:00.409488 kubelet[2125]: E0209 09:57:00.409458 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:00.416263 env[1209]: time="2024-02-09T09:57:00.416226296Z" level=info msg="CreateContainer within sandbox \"6d8b58d6f87a7460901ad75fb309a4e4636becccf8a4248cb4f55e620d897c25\" for container 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 09:57:00.429925 env[1209]: time="2024-02-09T09:57:00.429881422Z" level=info msg="CreateContainer within sandbox \"6d8b58d6f87a7460901ad75fb309a4e4636becccf8a4248cb4f55e620d897c25\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"82c667f1420225cd8afa7781bad7a798f4247b43e2e21e44b0232b9114de5354\"" Feb 9 09:57:00.431908 env[1209]: time="2024-02-09T09:57:00.430663694Z" level=info msg="StartContainer for \"82c667f1420225cd8afa7781bad7a798f4247b43e2e21e44b0232b9114de5354\"" Feb 9 09:57:00.492583 env[1209]: time="2024-02-09T09:57:00.492538676Z" level=info msg="StartContainer for \"82c667f1420225cd8afa7781bad7a798f4247b43e2e21e44b0232b9114de5354\" returns successfully" Feb 9 09:57:00.496350 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:57:00.496603 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:57:00.497166 systemd[1]: Stopping systemd-sysctl.service... Feb 9 09:57:00.498735 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:57:00.508324 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:57:00.517865 env[1209]: time="2024-02-09T09:57:00.517818671Z" level=info msg="shim disconnected" id=82c667f1420225cd8afa7781bad7a798f4247b43e2e21e44b0232b9114de5354 Feb 9 09:57:00.517865 env[1209]: time="2024-02-09T09:57:00.517867030Z" level=warning msg="cleaning up after shim disconnected" id=82c667f1420225cd8afa7781bad7a798f4247b43e2e21e44b0232b9114de5354 namespace=k8s.io Feb 9 09:57:00.518194 env[1209]: time="2024-02-09T09:57:00.517878190Z" level=info msg="cleaning up dead shim" Feb 9 09:57:00.524969 env[1209]: time="2024-02-09T09:57:00.524916591Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:57:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2629 runtime=io.containerd.runc.v2\n" Feb 9 09:57:01.146893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-194649c85017a8ad81f086614c7919ba8f8d364bc12995639f0b326077358033-rootfs.mount: Deactivated successfully. 
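
The systemd-sysctl stop/start pair above brackets Cilium's apply-sysctl-overwrites init container, whose job is to adjust kernel parameters before the agent starts. As an illustration only (the real container ships its own tooling, and the keys below are assumed examples, not Cilium's actual set), here is a sketch of applying overrides by writing under /proc/sys:

    // sysctl_overwrite.go — sketch of an "apply-sysctl-overwrites" step:
    // privileged writes under /proc/sys. Keys are illustrative assumptions.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func setSysctl(key, value string) error {
        // "net.ipv4.conf.all.rp_filter" -> /proc/sys/net/ipv4/conf/all/rp_filter
        p := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
        return os.WriteFile(p, []byte(value), 0o644)
    }

    func main() {
        overrides := map[string]string{
            "net.ipv4.conf.all.rp_filter": "0", // assumed example key
        }
        for k, v := range overrides {
            if err := setSysctl(k, v); err != nil {
                fmt.Fprintf(os.Stderr, "sysctl %s: %v\n", k, err)
                os.Exit(1)
            }
        }
    }

Each dotted key maps directly to a path under /proc/sys, so the whole step amounts to privileged file writes, which is why it runs as a dedicated init container.
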
Feb 9 09:57:01.355680 env[1209]: time="2024-02-09T09:57:01.355626473Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:01.357475 env[1209]: time="2024-02-09T09:57:01.357435934Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:01.358804 env[1209]: time="2024-02-09T09:57:01.358768640Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:01.359238 env[1209]: time="2024-02-09T09:57:01.359211956Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 09:57:01.362907 env[1209]: time="2024-02-09T09:57:01.362869877Z" level=info msg="CreateContainer within sandbox \"819e7763dbdcfce856d13f2abb8167c22ebefe41547f4dd0d32de65b5c2fe9a0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 09:57:01.372014 env[1209]: time="2024-02-09T09:57:01.371859622Z" level=info msg="CreateContainer within sandbox \"819e7763dbdcfce856d13f2abb8167c22ebefe41547f4dd0d32de65b5c2fe9a0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"06851315dc4995f1743d6fb3150eeeaf4f36b82cc41f49cc3dc894e10bf74da2\"" Feb 9 09:57:01.374316 env[1209]: time="2024-02-09T09:57:01.374267356Z" level=info msg="StartContainer for \"06851315dc4995f1743d6fb3150eeeaf4f36b82cc41f49cc3dc894e10bf74da2\"" Feb 9 09:57:01.414094 kubelet[2125]: E0209 09:57:01.413996 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:01.425127 env[1209]: time="2024-02-09T09:57:01.425081859Z" level=info msg="CreateContainer within sandbox \"6d8b58d6f87a7460901ad75fb309a4e4636becccf8a4248cb4f55e620d897c25\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 09:57:01.444279 env[1209]: time="2024-02-09T09:57:01.444224977Z" level=info msg="CreateContainer within sandbox \"6d8b58d6f87a7460901ad75fb309a4e4636becccf8a4248cb4f55e620d897c25\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"96bbae166244bbc16d6ecf3933d2bc68667c793fae96b0180db0f26c90bfe938\"" Feb 9 09:57:01.444846 env[1209]: time="2024-02-09T09:57:01.444807611Z" level=info msg="StartContainer for \"96bbae166244bbc16d6ecf3933d2bc68667c793fae96b0180db0f26c90bfe938\"" Feb 9 09:57:01.468781 env[1209]: time="2024-02-09T09:57:01.468739918Z" level=info msg="StartContainer for \"06851315dc4995f1743d6fb3150eeeaf4f36b82cc41f49cc3dc894e10bf74da2\" returns successfully" Feb 9 09:57:01.572703 env[1209]: time="2024-02-09T09:57:01.572648979Z" level=info msg="StartContainer for \"96bbae166244bbc16d6ecf3933d2bc68667c793fae96b0180db0f26c90bfe938\" returns successfully" Feb 9 09:57:01.604782 env[1209]: time="2024-02-09T09:57:01.604726360Z" level=info msg="shim disconnected" id=96bbae166244bbc16d6ecf3933d2bc68667c793fae96b0180db0f26c90bfe938 Feb 9 
09:57:01.604782 env[1209]: time="2024-02-09T09:57:01.604773720Z" level=warning msg="cleaning up after shim disconnected" id=96bbae166244bbc16d6ecf3933d2bc68667c793fae96b0180db0f26c90bfe938 namespace=k8s.io Feb 9 09:57:01.604782 env[1209]: time="2024-02-09T09:57:01.604783640Z" level=info msg="cleaning up dead shim" Feb 9 09:57:01.615110 env[1209]: time="2024-02-09T09:57:01.615058251Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:57:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2725 runtime=io.containerd.runc.v2\n" Feb 9 09:57:02.146948 systemd[1]: run-containerd-runc-k8s.io-06851315dc4995f1743d6fb3150eeeaf4f36b82cc41f49cc3dc894e10bf74da2-runc.Now5cR.mount: Deactivated successfully. Feb 9 09:57:02.417048 kubelet[2125]: E0209 09:57:02.416669 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:02.418289 kubelet[2125]: E0209 09:57:02.418255 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:02.421078 env[1209]: time="2024-02-09T09:57:02.421038488Z" level=info msg="CreateContainer within sandbox \"6d8b58d6f87a7460901ad75fb309a4e4636becccf8a4248cb4f55e620d897c25\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 09:57:02.435867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1234134450.mount: Deactivated successfully. Feb 9 09:57:02.441053 env[1209]: time="2024-02-09T09:57:02.441009090Z" level=info msg="CreateContainer within sandbox \"6d8b58d6f87a7460901ad75fb309a4e4636becccf8a4248cb4f55e620d897c25\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"45a8ae16b750628741d65df632404493bfdfab7b0504936fb18fe655d618c720\"" Feb 9 09:57:02.441868 env[1209]: time="2024-02-09T09:57:02.441815002Z" level=info msg="StartContainer for \"45a8ae16b750628741d65df632404493bfdfab7b0504936fb18fe655d618c720\"" Feb 9 09:57:02.445636 kubelet[2125]: I0209 09:57:02.445600 2125 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-zvgp5" podStartSLOduration=-9.223372027409214e+09 pod.CreationTimestamp="2024-02-09 09:56:53 +0000 UTC" firstStartedPulling="2024-02-09 09:56:54.870113505 +0000 UTC m=+15.636395225" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:57:02.424999449 +0000 UTC m=+23.191281209" watchObservedRunningTime="2024-02-09 09:57:02.445561005 +0000 UTC m=+23.211842765" Feb 9 09:57:02.522966 env[1209]: time="2024-02-09T09:57:02.522920038Z" level=info msg="StartContainer for \"45a8ae16b750628741d65df632404493bfdfab7b0504936fb18fe655d618c720\" returns successfully" Feb 9 09:57:02.540873 env[1209]: time="2024-02-09T09:57:02.540829421Z" level=info msg="shim disconnected" id=45a8ae16b750628741d65df632404493bfdfab7b0504936fb18fe655d618c720 Feb 9 09:57:02.540873 env[1209]: time="2024-02-09T09:57:02.540871420Z" level=warning msg="cleaning up after shim disconnected" id=45a8ae16b750628741d65df632404493bfdfab7b0504936fb18fe655d618c720 namespace=k8s.io Feb 9 09:57:02.541122 env[1209]: time="2024-02-09T09:57:02.540882460Z" level=info msg="cleaning up dead shim" Feb 9 09:57:02.548690 env[1209]: time="2024-02-09T09:57:02.548654063Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:57:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io 
pid=2780 runtime=io.containerd.runc.v2\n" Feb 9 09:57:03.147044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45a8ae16b750628741d65df632404493bfdfab7b0504936fb18fe655d618c720-rootfs.mount: Deactivated successfully. Feb 9 09:57:03.422724 kubelet[2125]: E0209 09:57:03.422628 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:03.423225 kubelet[2125]: E0209 09:57:03.423194 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:03.425529 env[1209]: time="2024-02-09T09:57:03.425488756Z" level=info msg="CreateContainer within sandbox \"6d8b58d6f87a7460901ad75fb309a4e4636becccf8a4248cb4f55e620d897c25\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 09:57:03.440277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1882947304.mount: Deactivated successfully. Feb 9 09:57:03.445468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3941755166.mount: Deactivated successfully. Feb 9 09:57:03.447361 env[1209]: time="2024-02-09T09:57:03.447318633Z" level=info msg="CreateContainer within sandbox \"6d8b58d6f87a7460901ad75fb309a4e4636becccf8a4248cb4f55e620d897c25\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"16fab016f5e06dbecfca6bbf7d76463191645da5d9516576c4e8336be35974fe\"" Feb 9 09:57:03.448040 env[1209]: time="2024-02-09T09:57:03.447991107Z" level=info msg="StartContainer for \"16fab016f5e06dbecfca6bbf7d76463191645da5d9516576c4e8336be35974fe\"" Feb 9 09:57:03.512168 env[1209]: time="2024-02-09T09:57:03.512113791Z" level=info msg="StartContainer for \"16fab016f5e06dbecfca6bbf7d76463191645da5d9516576c4e8336be35974fe\" returns successfully" Feb 9 09:57:03.619171 kubelet[2125]: I0209 09:57:03.619132 2125 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 09:57:03.639722 kubelet[2125]: I0209 09:57:03.639681 2125 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:57:03.640972 kubelet[2125]: I0209 09:57:03.640942 2125 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:57:03.753013 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
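
The podStartSLOduration=-9.223372027409214e+09 reading a few entries above (and the similar value at 09:57:08) is the signature of Go's zero time.Time leaking into the latency math: lastFinishedPulling is 0001-01-01 00:00:00 +0000 UTC, and time.Time.Sub saturates at math.MinInt64 nanoseconds, about -9.22e9 seconds. A self-contained sketch reproducing the magnitude:

    // slo_overflow.go — reproduces the huge negative podStartSLOduration:
    // subtracting "now" from a zero time.Time saturates the int64-nanosecond
    // time.Duration at roughly -9.22e9 seconds.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        var lastFinishedPulling time.Time // zero value: 0001-01-01 00:00:00 UTC
        started := time.Now()

        d := lastFinishedPulling.Sub(started) // saturates at math.MinInt64 ns
        fmt.Printf("%.9e seconds\n", d.Seconds())
        // prints approximately -9.223372037e+09, matching the log's magnitude
    }

The tracker then appears to add a few real seconds on top of the saturated value, which would explain why the logged numbers sit just above MinInt64 rather than exactly on it.
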
Feb 9 09:57:03.792656 kubelet[2125]: I0209 09:57:03.792621 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87djm\" (UniqueName: \"kubernetes.io/projected/e39fb0bb-3115-4383-bdc3-964aca3798c5-kube-api-access-87djm\") pod \"coredns-787d4945fb-wkv6w\" (UID: \"e39fb0bb-3115-4383-bdc3-964aca3798c5\") " pod="kube-system/coredns-787d4945fb-wkv6w" Feb 9 09:57:03.792834 kubelet[2125]: I0209 09:57:03.792664 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b947b\" (UniqueName: \"kubernetes.io/projected/385d6ef2-4c82-4dca-ae2d-67b0cf959c57-kube-api-access-b947b\") pod \"coredns-787d4945fb-wm2bj\" (UID: \"385d6ef2-4c82-4dca-ae2d-67b0cf959c57\") " pod="kube-system/coredns-787d4945fb-wm2bj" Feb 9 09:57:03.792834 kubelet[2125]: I0209 09:57:03.792700 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e39fb0bb-3115-4383-bdc3-964aca3798c5-config-volume\") pod \"coredns-787d4945fb-wkv6w\" (UID: \"e39fb0bb-3115-4383-bdc3-964aca3798c5\") " pod="kube-system/coredns-787d4945fb-wkv6w" Feb 9 09:57:03.792834 kubelet[2125]: I0209 09:57:03.792729 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/385d6ef2-4c82-4dca-ae2d-67b0cf959c57-config-volume\") pod \"coredns-787d4945fb-wm2bj\" (UID: \"385d6ef2-4c82-4dca-ae2d-67b0cf959c57\") " pod="kube-system/coredns-787d4945fb-wm2bj" Feb 9 09:57:03.942293 kubelet[2125]: E0209 09:57:03.942263 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:03.943136 env[1209]: time="2024-02-09T09:57:03.943087827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-wkv6w,Uid:e39fb0bb-3115-4383-bdc3-964aca3798c5,Namespace:kube-system,Attempt:0,}" Feb 9 09:57:03.945448 kubelet[2125]: E0209 09:57:03.945379 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:03.946491 env[1209]: time="2024-02-09T09:57:03.946446236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-wm2bj,Uid:385d6ef2-4c82-4dca-ae2d-67b0cf959c57,Namespace:kube-system,Attempt:0,}" Feb 9 09:57:04.039015 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
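
The kernel WARNING lines about unprivileged eBPF are the Spectre-v2/BHB mitigation check noticing kernel.unprivileged_bpf_disabled=0; they fire as Cilium loads its BPF programs but describe the sysctl's state, not a Cilium fault. A small sketch that inspects the knob (writing 1 or 2 to it as root silences the warning; 1 is one-way until reboot):

    // bpf_check.go — inspects the sysctl behind the repeated
    // "Unprivileged eBPF is enabled" kernel warnings.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        raw, err := os.ReadFile("/proc/sys/kernel/unprivileged_bpf_disabled")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        switch strings.TrimSpace(string(raw)) {
        case "0":
            fmt.Println("unprivileged eBPF enabled (the state the kernel warns about)")
        case "1":
            fmt.Println("unprivileged eBPF disabled (one-way until reboot)")
        case "2":
            fmt.Println("unprivileged eBPF disabled (admin may re-enable)")
        }
    }
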
Feb 9 09:57:04.427103 kubelet[2125]: E0209 09:57:04.427063 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:05.428739 kubelet[2125]: E0209 09:57:05.428710 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:05.653012 systemd-networkd[1095]: cilium_host: Link UP Feb 9 09:57:05.653135 systemd-networkd[1095]: cilium_net: Link UP Feb 9 09:57:05.654597 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 09:57:05.654661 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 09:57:05.655148 systemd-networkd[1095]: cilium_net: Gained carrier Feb 9 09:57:05.655417 systemd-networkd[1095]: cilium_host: Gained carrier Feb 9 09:57:05.733632 systemd-networkd[1095]: cilium_vxlan: Link UP Feb 9 09:57:05.733638 systemd-networkd[1095]: cilium_vxlan: Gained carrier Feb 9 09:57:05.836096 systemd-networkd[1095]: cilium_net: Gained IPv6LL Feb 9 09:57:06.034014 kernel: NET: Registered PF_ALG protocol family Feb 9 09:57:06.053102 systemd-networkd[1095]: cilium_host: Gained IPv6LL Feb 9 09:57:06.430029 kubelet[2125]: E0209 09:57:06.429924 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:06.621318 systemd-networkd[1095]: lxc_health: Link UP Feb 9 09:57:06.630012 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 09:57:06.633567 systemd-networkd[1095]: lxc_health: Gained carrier Feb 9 09:57:07.032087 systemd-networkd[1095]: lxc551c864c1ef0: Link UP Feb 9 09:57:07.041031 kernel: eth0: renamed from tmpe17a1 Feb 9 09:57:07.047380 systemd-networkd[1095]: lxcce0995b92618: Link UP Feb 9 09:57:07.055016 kernel: eth0: renamed from tmp0d5f7 Feb 9 09:57:07.064034 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:57:07.064107 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc551c864c1ef0: link becomes ready Feb 9 09:57:07.064075 systemd-networkd[1095]: lxc551c864c1ef0: Gained carrier Feb 9 09:57:07.066248 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:57:07.066319 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcce0995b92618: link becomes ready Feb 9 09:57:07.066276 systemd-networkd[1095]: lxcce0995b92618: Gained carrier Feb 9 09:57:07.372128 systemd-networkd[1095]: cilium_vxlan: Gained IPv6LL Feb 9 09:57:07.431718 kubelet[2125]: E0209 09:57:07.431673 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:07.884103 systemd-networkd[1095]: lxc_health: Gained IPv6LL Feb 9 09:57:08.396156 systemd-networkd[1095]: lxc551c864c1ef0: Gained IPv6LL Feb 9 09:57:08.410197 kubelet[2125]: I0209 09:57:08.410167 2125 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-z4rgt" podStartSLOduration=-9.223372021444643e+09 pod.CreationTimestamp="2024-02-09 09:56:53 +0000 UTC" firstStartedPulling="2024-02-09 09:56:54.448860381 +0000 UTC m=+15.215142141" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:57:04.441961808 +0000 UTC m=+25.208243568" watchObservedRunningTime="2024-02-09 09:57:08.410133465 +0000 UTC m=+29.176415225" Feb 
9 09:57:08.432937 kubelet[2125]: E0209 09:57:08.432903 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:08.972132 systemd-networkd[1095]: lxcce0995b92618: Gained IPv6LL Feb 9 09:57:09.434615 kubelet[2125]: E0209 09:57:09.434581 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:10.660922 env[1209]: time="2024-02-09T09:57:10.660850919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:57:10.661336 env[1209]: time="2024-02-09T09:57:10.660929599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:57:10.661336 env[1209]: time="2024-02-09T09:57:10.660963319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:57:10.661483 env[1209]: time="2024-02-09T09:57:10.661441036Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e17a1c10acc5961125f97dc13bc0e921aa5155166e6eb0213e1bcd8121fc0b89 pid=3349 runtime=io.containerd.runc.v2 Feb 9 09:57:10.664995 env[1209]: time="2024-02-09T09:57:10.664463338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:57:10.664995 env[1209]: time="2024-02-09T09:57:10.664506138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:57:10.664995 env[1209]: time="2024-02-09T09:57:10.664516097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:57:10.664995 env[1209]: time="2024-02-09T09:57:10.664638057Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d5f7604f99ef27329f257e385f0c247c0f59a8e076cce5595c74fee161c758c pid=3358 runtime=io.containerd.runc.v2 Feb 9 09:57:10.718595 systemd-resolved[1149]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 09:57:10.724346 systemd-resolved[1149]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 09:57:10.742411 env[1209]: time="2024-02-09T09:57:10.742373357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-wm2bj,Uid:385d6ef2-4c82-4dca-ae2d-67b0cf959c57,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d5f7604f99ef27329f257e385f0c247c0f59a8e076cce5595c74fee161c758c\"" Feb 9 09:57:10.743852 kubelet[2125]: E0209 09:57:10.743170 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:10.745129 env[1209]: time="2024-02-09T09:57:10.745088981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-wkv6w,Uid:e39fb0bb-3115-4383-bdc3-964aca3798c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e17a1c10acc5961125f97dc13bc0e921aa5155166e6eb0213e1bcd8121fc0b89\"" Feb 9 09:57:10.746597 kubelet[2125]: E0209 09:57:10.746472 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:10.749323 env[1209]: time="2024-02-09T09:57:10.749279596Z" level=info msg="CreateContainer within sandbox \"0d5f7604f99ef27329f257e385f0c247c0f59a8e076cce5595c74fee161c758c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:57:10.750462 env[1209]: time="2024-02-09T09:57:10.750406230Z" level=info msg="CreateContainer within sandbox \"e17a1c10acc5961125f97dc13bc0e921aa5155166e6eb0213e1bcd8121fc0b89\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:57:10.763157 env[1209]: time="2024-02-09T09:57:10.763107154Z" level=info msg="CreateContainer within sandbox \"e17a1c10acc5961125f97dc13bc0e921aa5155166e6eb0213e1bcd8121fc0b89\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ea3912de19bdc9dc39b67845a0c733839a13318b01c89dd13edb4da0e9b924b3\"" Feb 9 09:57:10.764906 env[1209]: time="2024-02-09T09:57:10.763534752Z" level=info msg="StartContainer for \"ea3912de19bdc9dc39b67845a0c733839a13318b01c89dd13edb4da0e9b924b3\"" Feb 9 09:57:10.765077 env[1209]: time="2024-02-09T09:57:10.765029503Z" level=info msg="CreateContainer within sandbox \"0d5f7604f99ef27329f257e385f0c247c0f59a8e076cce5595c74fee161c758c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee0d6845b859f28e94a8dccaa2df5a00d2e5d1f3b88f9c6e668b10ea9d686c22\"" Feb 9 09:57:10.765584 env[1209]: time="2024-02-09T09:57:10.765550740Z" level=info msg="StartContainer for \"ee0d6845b859f28e94a8dccaa2df5a00d2e5d1f3b88f9c6e668b10ea9d686c22\"" Feb 9 09:57:10.853978 env[1209]: time="2024-02-09T09:57:10.853934697Z" level=info msg="StartContainer for \"ee0d6845b859f28e94a8dccaa2df5a00d2e5d1f3b88f9c6e668b10ea9d686c22\" returns successfully" Feb 9 09:57:10.854755 env[1209]: time="2024-02-09T09:57:10.854695253Z" level=info msg="StartContainer for 
\"ea3912de19bdc9dc39b67845a0c733839a13318b01c89dd13edb4da0e9b924b3\" returns successfully" Feb 9 09:57:11.438920 kubelet[2125]: E0209 09:57:11.438832 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:11.442585 kubelet[2125]: E0209 09:57:11.442102 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:11.452116 kubelet[2125]: I0209 09:57:11.451885 2125 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-wm2bj" podStartSLOduration=18.451855048 pod.CreationTimestamp="2024-02-09 09:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:57:11.450009298 +0000 UTC m=+32.216291018" watchObservedRunningTime="2024-02-09 09:57:11.451855048 +0000 UTC m=+32.218136768" Feb 9 09:57:11.469133 kubelet[2125]: I0209 09:57:11.469103 2125 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-wkv6w" podStartSLOduration=18.469055313 pod.CreationTimestamp="2024-02-09 09:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:57:11.468728274 +0000 UTC m=+32.235010074" watchObservedRunningTime="2024-02-09 09:57:11.469055313 +0000 UTC m=+32.235337073" Feb 9 09:57:12.444171 kubelet[2125]: E0209 09:57:12.444141 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:12.444557 kubelet[2125]: E0209 09:57:12.444534 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:13.445810 kubelet[2125]: E0209 09:57:13.445772 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:13.446518 kubelet[2125]: E0209 09:57:13.446500 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:18.965700 systemd[1]: Started sshd@5-10.0.0.79:22-10.0.0.1:60448.service. Feb 9 09:57:19.003167 sshd[3554]: Accepted publickey for core from 10.0.0.1 port 60448 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:57:19.004762 sshd[3554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:19.008614 systemd-logind[1197]: New session 6 of user core. Feb 9 09:57:19.009097 systemd[1]: Started session-6.scope. Feb 9 09:57:19.151459 sshd[3554]: pam_unix(sshd:session): session closed for user core Feb 9 09:57:19.154110 systemd[1]: sshd@5-10.0.0.79:22-10.0.0.1:60448.service: Deactivated successfully. Feb 9 09:57:19.155103 systemd-logind[1197]: Session 6 logged out. Waiting for processes to exit. Feb 9 09:57:19.155152 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 09:57:19.156004 systemd-logind[1197]: Removed session 6. Feb 9 09:57:24.154458 systemd[1]: Started sshd@6-10.0.0.79:22-10.0.0.1:48616.service. 
Feb 9 09:57:24.188033 sshd[3570]: Accepted publickey for core from 10.0.0.1 port 48616 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:57:24.189583 sshd[3570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:24.192948 systemd-logind[1197]: New session 7 of user core. Feb 9 09:57:24.193847 systemd[1]: Started session-7.scope. Feb 9 09:57:24.303196 sshd[3570]: pam_unix(sshd:session): session closed for user core Feb 9 09:57:24.305370 systemd[1]: sshd@6-10.0.0.79:22-10.0.0.1:48616.service: Deactivated successfully. Feb 9 09:57:24.306375 systemd-logind[1197]: Session 7 logged out. Waiting for processes to exit. Feb 9 09:57:24.306431 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 09:57:24.307176 systemd-logind[1197]: Removed session 7. Feb 9 09:57:29.306536 systemd[1]: Started sshd@7-10.0.0.79:22-10.0.0.1:48624.service. Feb 9 09:57:29.342752 sshd[3587]: Accepted publickey for core from 10.0.0.1 port 48624 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:57:29.344197 sshd[3587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:29.347416 systemd-logind[1197]: New session 8 of user core. Feb 9 09:57:29.348251 systemd[1]: Started session-8.scope. Feb 9 09:57:29.463277 sshd[3587]: pam_unix(sshd:session): session closed for user core Feb 9 09:57:29.466148 systemd-logind[1197]: Session 8 logged out. Waiting for processes to exit. Feb 9 09:57:29.466346 systemd[1]: sshd@7-10.0.0.79:22-10.0.0.1:48624.service: Deactivated successfully. Feb 9 09:57:29.467191 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 09:57:29.467605 systemd-logind[1197]: Removed session 8. Feb 9 09:57:34.462942 systemd[1]: Started sshd@8-10.0.0.79:22-10.0.0.1:52020.service. Feb 9 09:57:34.496291 sshd[3603]: Accepted publickey for core from 10.0.0.1 port 52020 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:57:34.497487 sshd[3603]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:34.501182 systemd-logind[1197]: New session 9 of user core. Feb 9 09:57:34.501566 systemd[1]: Started session-9.scope. Feb 9 09:57:34.609851 sshd[3603]: pam_unix(sshd:session): session closed for user core Feb 9 09:57:34.612434 systemd-logind[1197]: Session 9 logged out. Waiting for processes to exit. Feb 9 09:57:34.612515 systemd[1]: sshd@8-10.0.0.79:22-10.0.0.1:52020.service: Deactivated successfully. Feb 9 09:57:34.613418 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 09:57:34.613850 systemd-logind[1197]: Removed session 9. Feb 9 09:57:39.612844 systemd[1]: Started sshd@9-10.0.0.79:22-10.0.0.1:52024.service. Feb 9 09:57:39.659884 sshd[3620]: Accepted publickey for core from 10.0.0.1 port 52024 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:57:39.660968 sshd[3620]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:39.665264 systemd-logind[1197]: New session 10 of user core. Feb 9 09:57:39.665684 systemd[1]: Started session-10.scope. Feb 9 09:57:39.834947 sshd[3620]: pam_unix(sshd:session): session closed for user core Feb 9 09:57:39.835926 systemd[1]: Started sshd@10-10.0.0.79:22-10.0.0.1:52034.service. Feb 9 09:57:39.838289 systemd-logind[1197]: Session 10 logged out. Waiting for processes to exit. Feb 9 09:57:39.838467 systemd[1]: sshd@9-10.0.0.79:22-10.0.0.1:52024.service: Deactivated successfully. Feb 9 09:57:39.839450 systemd[1]: session-10.scope: Deactivated successfully. 
Feb 9 09:57:39.839923 systemd-logind[1197]: Removed session 10. Feb 9 09:57:39.871463 sshd[3633]: Accepted publickey for core from 10.0.0.1 port 52034 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:57:39.872631 sshd[3633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:39.876012 systemd-logind[1197]: New session 11 of user core. Feb 9 09:57:39.876846 systemd[1]: Started session-11.scope. Feb 9 09:57:40.713061 sshd[3633]: pam_unix(sshd:session): session closed for user core Feb 9 09:57:40.713372 systemd[1]: Started sshd@11-10.0.0.79:22-10.0.0.1:52048.service. Feb 9 09:57:40.726070 systemd[1]: sshd@10-10.0.0.79:22-10.0.0.1:52034.service: Deactivated successfully. Feb 9 09:57:40.728141 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 09:57:40.728848 systemd-logind[1197]: Session 11 logged out. Waiting for processes to exit. Feb 9 09:57:40.730576 systemd-logind[1197]: Removed session 11. Feb 9 09:57:40.758656 sshd[3646]: Accepted publickey for core from 10.0.0.1 port 52048 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:57:40.760029 sshd[3646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:40.763664 systemd-logind[1197]: New session 12 of user core. Feb 9 09:57:40.764504 systemd[1]: Started session-12.scope. Feb 9 09:57:40.880732 sshd[3646]: pam_unix(sshd:session): session closed for user core Feb 9 09:57:40.883394 systemd-logind[1197]: Session 12 logged out. Waiting for processes to exit. Feb 9 09:57:40.883600 systemd[1]: sshd@11-10.0.0.79:22-10.0.0.1:52048.service: Deactivated successfully. Feb 9 09:57:40.884418 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 09:57:40.884844 systemd-logind[1197]: Removed session 12. Feb 9 09:57:45.884226 systemd[1]: Started sshd@12-10.0.0.79:22-10.0.0.1:45840.service. Feb 9 09:57:45.918842 sshd[3662]: Accepted publickey for core from 10.0.0.1 port 45840 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:57:45.920498 sshd[3662]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:45.924620 systemd-logind[1197]: New session 13 of user core. Feb 9 09:57:45.925097 systemd[1]: Started session-13.scope. Feb 9 09:57:46.044964 sshd[3662]: pam_unix(sshd:session): session closed for user core Feb 9 09:57:46.047518 systemd[1]: sshd@12-10.0.0.79:22-10.0.0.1:45840.service: Deactivated successfully. Feb 9 09:57:46.048346 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 09:57:46.052194 systemd-logind[1197]: Session 13 logged out. Waiting for processes to exit. Feb 9 09:57:46.052936 systemd-logind[1197]: Removed session 13. Feb 9 09:57:51.047954 systemd[1]: Started sshd@13-10.0.0.79:22-10.0.0.1:45846.service. Feb 9 09:57:51.119211 sshd[3676]: Accepted publickey for core from 10.0.0.1 port 45846 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:57:51.120396 sshd[3676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:51.124600 systemd-logind[1197]: New session 14 of user core. Feb 9 09:57:51.124609 systemd[1]: Started session-14.scope. Feb 9 09:57:51.236178 sshd[3676]: pam_unix(sshd:session): session closed for user core Feb 9 09:57:51.239823 systemd[1]: Started sshd@14-10.0.0.79:22-10.0.0.1:45858.service. Feb 9 09:57:51.240308 systemd[1]: sshd@13-10.0.0.79:22-10.0.0.1:45846.service: Deactivated successfully. Feb 9 09:57:51.242969 systemd-logind[1197]: Session 14 logged out. Waiting for processes to exit. 
Feb 9 09:57:51.243221 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 09:57:51.244303 systemd-logind[1197]: Removed session 14. Feb 9 09:57:51.273316 sshd[3688]: Accepted publickey for core from 10.0.0.1 port 45858 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:57:51.274494 sshd[3688]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:51.279031 systemd[1]: Started session-15.scope. Feb 9 09:57:51.279418 systemd-logind[1197]: New session 15 of user core. Feb 9 09:57:51.525966 sshd[3688]: pam_unix(sshd:session): session closed for user core Feb 9 09:57:51.528277 systemd[1]: Started sshd@15-10.0.0.79:22-10.0.0.1:45872.service. Feb 9 09:57:51.529459 systemd[1]: sshd@14-10.0.0.79:22-10.0.0.1:45858.service: Deactivated successfully. Feb 9 09:57:51.530390 systemd-logind[1197]: Session 15 logged out. Waiting for processes to exit. Feb 9 09:57:51.530434 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 09:57:51.531318 systemd-logind[1197]: Removed session 15. Feb 9 09:57:51.567060 sshd[3700]: Accepted publickey for core from 10.0.0.1 port 45872 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:57:51.568308 sshd[3700]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:51.572438 systemd-logind[1197]: New session 16 of user core. Feb 9 09:57:51.572510 systemd[1]: Started session-16.scope. Feb 9 09:57:52.280600 sshd[3700]: pam_unix(sshd:session): session closed for user core Feb 9 09:57:52.281430 systemd[1]: Started sshd@16-10.0.0.79:22-10.0.0.1:45878.service. Feb 9 09:57:52.283156 systemd[1]: sshd@15-10.0.0.79:22-10.0.0.1:45872.service: Deactivated successfully. Feb 9 09:57:52.284258 systemd-logind[1197]: Session 16 logged out. Waiting for processes to exit. Feb 9 09:57:52.284343 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 09:57:52.285573 systemd-logind[1197]: Removed session 16. Feb 9 09:57:52.317872 sshd[3727]: Accepted publickey for core from 10.0.0.1 port 45878 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:57:52.319573 sshd[3727]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:52.322971 systemd-logind[1197]: New session 17 of user core. Feb 9 09:57:52.323771 systemd[1]: Started session-17.scope. Feb 9 09:57:52.524372 sshd[3727]: pam_unix(sshd:session): session closed for user core Feb 9 09:57:52.527025 systemd[1]: Started sshd@17-10.0.0.79:22-10.0.0.1:45884.service. Feb 9 09:57:52.530247 systemd-logind[1197]: Session 17 logged out. Waiting for processes to exit. Feb 9 09:57:52.530348 systemd[1]: sshd@16-10.0.0.79:22-10.0.0.1:45878.service: Deactivated successfully. Feb 9 09:57:52.531212 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 09:57:52.532950 systemd-logind[1197]: Removed session 17. Feb 9 09:57:52.563249 sshd[3782]: Accepted publickey for core from 10.0.0.1 port 45884 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:57:52.564784 sshd[3782]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:52.568567 systemd-logind[1197]: New session 18 of user core. Feb 9 09:57:52.568969 systemd[1]: Started session-18.scope. Feb 9 09:57:52.681820 sshd[3782]: pam_unix(sshd:session): session closed for user core Feb 9 09:57:52.684205 systemd[1]: sshd@17-10.0.0.79:22-10.0.0.1:45884.service: Deactivated successfully. Feb 9 09:57:52.685127 systemd-logind[1197]: Session 18 logged out. Waiting for processes to exit. 
Feb 9 09:57:52.685183 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 09:57:52.685897 systemd-logind[1197]: Removed session 18. Feb 9 09:57:57.684867 systemd[1]: Started sshd@18-10.0.0.79:22-10.0.0.1:37084.service. Feb 9 09:57:57.718378 sshd[3828]: Accepted publickey for core from 10.0.0.1 port 37084 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:57:57.719611 sshd[3828]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:57.723088 systemd-logind[1197]: New session 19 of user core. Feb 9 09:57:57.723928 systemd[1]: Started session-19.scope. Feb 9 09:57:57.826786 sshd[3828]: pam_unix(sshd:session): session closed for user core Feb 9 09:57:57.829143 systemd[1]: sshd@18-10.0.0.79:22-10.0.0.1:37084.service: Deactivated successfully. Feb 9 09:57:57.830098 systemd-logind[1197]: Session 19 logged out. Waiting for processes to exit. Feb 9 09:57:57.830157 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 09:57:57.830849 systemd-logind[1197]: Removed session 19. Feb 9 09:58:00.371304 kubelet[2125]: E0209 09:58:00.371270 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:58:02.829965 systemd[1]: Started sshd@19-10.0.0.79:22-10.0.0.1:33398.service. Feb 9 09:58:02.863396 sshd[3842]: Accepted publickey for core from 10.0.0.1 port 33398 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:58:02.864584 sshd[3842]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:02.867892 systemd-logind[1197]: New session 20 of user core. Feb 9 09:58:02.868783 systemd[1]: Started session-20.scope. Feb 9 09:58:02.973415 sshd[3842]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:02.975814 systemd[1]: sshd@19-10.0.0.79:22-10.0.0.1:33398.service: Deactivated successfully. Feb 9 09:58:02.976760 systemd-logind[1197]: Session 20 logged out. Waiting for processes to exit. Feb 9 09:58:02.976800 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 09:58:02.977537 systemd-logind[1197]: Removed session 20. Feb 9 09:58:06.371776 kubelet[2125]: E0209 09:58:06.371737 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:58:07.976836 systemd[1]: Started sshd@20-10.0.0.79:22-10.0.0.1:33412.service. Feb 9 09:58:08.010524 sshd[3856]: Accepted publickey for core from 10.0.0.1 port 33412 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:58:08.011627 sshd[3856]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:08.015084 systemd-logind[1197]: New session 21 of user core. Feb 9 09:58:08.015945 systemd[1]: Started session-21.scope. Feb 9 09:58:08.121595 sshd[3856]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:08.124668 systemd[1]: sshd@20-10.0.0.79:22-10.0.0.1:33412.service: Deactivated successfully. Feb 9 09:58:08.125492 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 09:58:08.129263 systemd-logind[1197]: Session 21 logged out. Waiting for processes to exit. Feb 9 09:58:08.130129 systemd-logind[1197]: Removed session 21. 
Feb 9 09:58:11.372922 kubelet[2125]: E0209 09:58:11.372890 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:58:13.126650 systemd[1]: Started sshd@21-10.0.0.79:22-10.0.0.1:49532.service. Feb 9 09:58:13.165914 sshd[3870]: Accepted publickey for core from 10.0.0.1 port 49532 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:58:13.166827 sshd[3870]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:13.170370 systemd-logind[1197]: New session 22 of user core. Feb 9 09:58:13.171706 systemd[1]: Started session-22.scope. Feb 9 09:58:13.300421 sshd[3870]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:13.302724 systemd[1]: Started sshd@22-10.0.0.79:22-10.0.0.1:49546.service. Feb 9 09:58:13.314908 systemd[1]: sshd@21-10.0.0.79:22-10.0.0.1:49532.service: Deactivated successfully. Feb 9 09:58:13.319059 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 09:58:13.319754 systemd-logind[1197]: Session 22 logged out. Waiting for processes to exit. Feb 9 09:58:13.320853 systemd-logind[1197]: Removed session 22. Feb 9 09:58:13.343039 sshd[3883]: Accepted publickey for core from 10.0.0.1 port 49546 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:58:13.344217 sshd[3883]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:13.347863 systemd-logind[1197]: New session 23 of user core. Feb 9 09:58:13.348516 systemd[1]: Started session-23.scope. Feb 9 09:58:14.371009 kubelet[2125]: E0209 09:58:14.370957 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:58:14.984552 env[1209]: time="2024-02-09T09:58:14.984504385Z" level=info msg="StopContainer for \"06851315dc4995f1743d6fb3150eeeaf4f36b82cc41f49cc3dc894e10bf74da2\" with timeout 30 (s)" Feb 9 09:58:14.985609 env[1209]: time="2024-02-09T09:58:14.985068387Z" level=info msg="Stop container \"06851315dc4995f1743d6fb3150eeeaf4f36b82cc41f49cc3dc894e10bf74da2\" with signal terminated" Feb 9 09:58:14.995602 systemd[1]: run-containerd-runc-k8s.io-16fab016f5e06dbecfca6bbf7d76463191645da5d9516576c4e8336be35974fe-runc.Ye6dPe.mount: Deactivated successfully. Feb 9 09:58:15.018039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06851315dc4995f1743d6fb3150eeeaf4f36b82cc41f49cc3dc894e10bf74da2-rootfs.mount: Deactivated successfully. 
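
The StopContainer entries above show the standard two-phase stop: containerd delivers SIGTERM ("with signal terminated"), waits out the grace period (30 s for the operator, 1 s for the agent), then escalates to SIGKILL. A process-level sketch of that sequence, driving a plain command rather than a real container shim:

    // stop_grace.go — sketch of the SIGTERM-then-SIGKILL stop sequence
    // behind "Stop container ... with signal terminated" and its timeout.
    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    func main() {
        cmd := exec.Command("sleep", "300")
        if err := cmd.Start(); err != nil {
            panic(err)
        }

        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        cmd.Process.Signal(syscall.SIGTERM) // polite stop, as in StopContainer

        select {
        case err := <-done:
            fmt.Println("exited after SIGTERM:", err)
        case <-time.After(30 * time.Second):
            cmd.Process.Kill() // grace period elapsed: force kill
            fmt.Println("killed after timeout:", <-done)
        }
    }
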
Feb 9 09:58:15.022058 env[1209]: time="2024-02-09T09:58:15.021943069Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:58:15.025028 env[1209]: time="2024-02-09T09:58:15.024934199Z" level=info msg="shim disconnected" id=06851315dc4995f1743d6fb3150eeeaf4f36b82cc41f49cc3dc894e10bf74da2 Feb 9 09:58:15.025028 env[1209]: time="2024-02-09T09:58:15.024969439Z" level=warning msg="cleaning up after shim disconnected" id=06851315dc4995f1743d6fb3150eeeaf4f36b82cc41f49cc3dc894e10bf74da2 namespace=k8s.io Feb 9 09:58:15.025028 env[1209]: time="2024-02-09T09:58:15.024978519Z" level=info msg="cleaning up dead shim" Feb 9 09:58:15.027410 env[1209]: time="2024-02-09T09:58:15.027379447Z" level=info msg="StopContainer for \"16fab016f5e06dbecfca6bbf7d76463191645da5d9516576c4e8336be35974fe\" with timeout 1 (s)" Feb 9 09:58:15.027841 env[1209]: time="2024-02-09T09:58:15.027817889Z" level=info msg="Stop container \"16fab016f5e06dbecfca6bbf7d76463191645da5d9516576c4e8336be35974fe\" with signal terminated" Feb 9 09:58:15.033455 systemd-networkd[1095]: lxc_health: Link DOWN Feb 9 09:58:15.033460 systemd-networkd[1095]: lxc_health: Lost carrier Feb 9 09:58:15.034370 env[1209]: time="2024-02-09T09:58:15.034208510Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:58:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3934 runtime=io.containerd.runc.v2\n" Feb 9 09:58:15.036423 env[1209]: time="2024-02-09T09:58:15.036386517Z" level=info msg="StopContainer for \"06851315dc4995f1743d6fb3150eeeaf4f36b82cc41f49cc3dc894e10bf74da2\" returns successfully" Feb 9 09:58:15.036960 env[1209]: time="2024-02-09T09:58:15.036929679Z" level=info msg="StopPodSandbox for \"819e7763dbdcfce856d13f2abb8167c22ebefe41547f4dd0d32de65b5c2fe9a0\"" Feb 9 09:58:15.037109 env[1209]: time="2024-02-09T09:58:15.037052279Z" level=info msg="Container to stop \"06851315dc4995f1743d6fb3150eeeaf4f36b82cc41f49cc3dc894e10bf74da2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:58:15.038664 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-819e7763dbdcfce856d13f2abb8167c22ebefe41547f4dd0d32de65b5c2fe9a0-shm.mount: Deactivated successfully. Feb 9 09:58:15.063864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-819e7763dbdcfce856d13f2abb8167c22ebefe41547f4dd0d32de65b5c2fe9a0-rootfs.mount: Deactivated successfully. 
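
The "failed to reload cni configuration" error above is containerd's CRI layer reacting to /etc/cni/net.d/05-cilium.conf being removed during teardown: the directory watcher fires, the reload finds no config left, and the error is expected while the agent is being stopped. A sketch of that watch-and-reload pattern, using github.com/fsnotify/fsnotify for illustration (containerd's actual implementation differs in detail):

    // cni_watch.go — sketch of the watch-and-reload pattern behind the
    // "failed to reload cni configuration after receiving fs change event"
    // error, using fsnotify for illustration.
    package main

    import (
        "fmt"
        "log"
        "path/filepath"

        "github.com/fsnotify/fsnotify"
    )

    // reload mimics the config scan: fail if nothing is left to load.
    func reload(dir string) error {
        confs, err := filepath.Glob(filepath.Join(dir, "*.conf"))
        if err != nil {
            return err
        }
        if len(confs) == 0 {
            return fmt.Errorf("no network config found in %s", dir)
        }
        log.Println("loaded CNI configs:", confs)
        return nil
    }

    func main() {
        const dir = "/etc/cni/net.d"
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        if err := w.Add(dir); err != nil {
            log.Fatal(err)
        }
        for ev := range w.Events {
            // Removing 05-cilium.conf delivers a REMOVE event; reloading then
            // fails with "no network config found", as logged above.
            if err := reload(dir); err != nil {
                log.Printf("failed to reload cni configuration after %s: %v", ev, err)
            }
        }
    }
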
Feb 9 09:58:15.069345 env[1209]: time="2024-02-09T09:58:15.069297945Z" level=info msg="shim disconnected" id=819e7763dbdcfce856d13f2abb8167c22ebefe41547f4dd0d32de65b5c2fe9a0 Feb 9 09:58:15.069345 env[1209]: time="2024-02-09T09:58:15.069345425Z" level=warning msg="cleaning up after shim disconnected" id=819e7763dbdcfce856d13f2abb8167c22ebefe41547f4dd0d32de65b5c2fe9a0 namespace=k8s.io Feb 9 09:58:15.069539 env[1209]: time="2024-02-09T09:58:15.069355705Z" level=info msg="cleaning up dead shim" Feb 9 09:58:15.077044 env[1209]: time="2024-02-09T09:58:15.076978170Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:58:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3983 runtime=io.containerd.runc.v2\n" Feb 9 09:58:15.077705 env[1209]: time="2024-02-09T09:58:15.077628692Z" level=info msg="TearDown network for sandbox \"819e7763dbdcfce856d13f2abb8167c22ebefe41547f4dd0d32de65b5c2fe9a0\" successfully" Feb 9 09:58:15.077764 env[1209]: time="2024-02-09T09:58:15.077705373Z" level=info msg="StopPodSandbox for \"819e7763dbdcfce856d13f2abb8167c22ebefe41547f4dd0d32de65b5c2fe9a0\" returns successfully" Feb 9 09:58:15.084608 env[1209]: time="2024-02-09T09:58:15.084570715Z" level=info msg="shim disconnected" id=16fab016f5e06dbecfca6bbf7d76463191645da5d9516576c4e8336be35974fe Feb 9 09:58:15.084608 env[1209]: time="2024-02-09T09:58:15.084610835Z" level=warning msg="cleaning up after shim disconnected" id=16fab016f5e06dbecfca6bbf7d76463191645da5d9516576c4e8336be35974fe namespace=k8s.io Feb 9 09:58:15.084805 env[1209]: time="2024-02-09T09:58:15.084620555Z" level=info msg="cleaning up dead shim" Feb 9 09:58:15.091870 env[1209]: time="2024-02-09T09:58:15.091828539Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:58:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4000 runtime=io.containerd.runc.v2\n" Feb 9 09:58:15.093748 env[1209]: time="2024-02-09T09:58:15.093709265Z" level=info msg="StopContainer for \"16fab016f5e06dbecfca6bbf7d76463191645da5d9516576c4e8336be35974fe\" returns successfully" Feb 9 09:58:15.094249 env[1209]: time="2024-02-09T09:58:15.094190467Z" level=info msg="StopPodSandbox for \"6d8b58d6f87a7460901ad75fb309a4e4636becccf8a4248cb4f55e620d897c25\"" Feb 9 09:58:15.094407 env[1209]: time="2024-02-09T09:58:15.094384867Z" level=info msg="Container to stop \"82c667f1420225cd8afa7781bad7a798f4247b43e2e21e44b0232b9114de5354\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:58:15.094495 env[1209]: time="2024-02-09T09:58:15.094476828Z" level=info msg="Container to stop \"96bbae166244bbc16d6ecf3933d2bc68667c793fae96b0180db0f26c90bfe938\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:58:15.094565 env[1209]: time="2024-02-09T09:58:15.094549548Z" level=info msg="Container to stop \"16fab016f5e06dbecfca6bbf7d76463191645da5d9516576c4e8336be35974fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:58:15.094632 env[1209]: time="2024-02-09T09:58:15.094616428Z" level=info msg="Container to stop \"194649c85017a8ad81f086614c7919ba8f8d364bc12995639f0b326077358033\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:58:15.094705 env[1209]: time="2024-02-09T09:58:15.094688468Z" level=info msg="Container to stop \"45a8ae16b750628741d65df632404493bfdfab7b0504936fb18fe655d618c720\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:58:15.116177 env[1209]: time="2024-02-09T09:58:15.116121259Z" level=info 
msg="shim disconnected" id=6d8b58d6f87a7460901ad75fb309a4e4636becccf8a4248cb4f55e620d897c25 Feb 9 09:58:15.116177 env[1209]: time="2024-02-09T09:58:15.116175819Z" level=warning msg="cleaning up after shim disconnected" id=6d8b58d6f87a7460901ad75fb309a4e4636becccf8a4248cb4f55e620d897c25 namespace=k8s.io Feb 9 09:58:15.116359 env[1209]: time="2024-02-09T09:58:15.116186579Z" level=info msg="cleaning up dead shim" Feb 9 09:58:15.123754 env[1209]: time="2024-02-09T09:58:15.123710044Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:58:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4033 runtime=io.containerd.runc.v2\n" Feb 9 09:58:15.124076 env[1209]: time="2024-02-09T09:58:15.124049285Z" level=info msg="TearDown network for sandbox \"6d8b58d6f87a7460901ad75fb309a4e4636becccf8a4248cb4f55e620d897c25\" successfully" Feb 9 09:58:15.124120 env[1209]: time="2024-02-09T09:58:15.124078645Z" level=info msg="StopPodSandbox for \"6d8b58d6f87a7460901ad75fb309a4e4636becccf8a4248cb4f55e620d897c25\" returns successfully" Feb 9 09:58:15.262541 kubelet[2125]: I0209 09:58:15.262432 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-cilium-run\") pod \"47d3f462-acfb-483f-b53f-b918c2d86b4a\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " Feb 9 09:58:15.262675 kubelet[2125]: I0209 09:58:15.262558 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-host-proc-sys-net\") pod \"47d3f462-acfb-483f-b53f-b918c2d86b4a\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " Feb 9 09:58:15.262675 kubelet[2125]: I0209 09:58:15.262585 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-cilium-cgroup\") pod \"47d3f462-acfb-483f-b53f-b918c2d86b4a\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " Feb 9 09:58:15.262724 kubelet[2125]: I0209 09:58:15.262610 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36fe4c5c-672b-4a1c-a61e-0f299f1d6041-cilium-config-path\") pod \"36fe4c5c-672b-4a1c-a61e-0f299f1d6041\" (UID: \"36fe4c5c-672b-4a1c-a61e-0f299f1d6041\") " Feb 9 09:58:15.262724 kubelet[2125]: I0209 09:58:15.262722 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbpjd\" (UniqueName: \"kubernetes.io/projected/47d3f462-acfb-483f-b53f-b918c2d86b4a-kube-api-access-lbpjd\") pod \"47d3f462-acfb-483f-b53f-b918c2d86b4a\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " Feb 9 09:58:15.262775 kubelet[2125]: I0209 09:58:15.262743 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/47d3f462-acfb-483f-b53f-b918c2d86b4a-hubble-tls\") pod \"47d3f462-acfb-483f-b53f-b918c2d86b4a\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " Feb 9 09:58:15.263627 kubelet[2125]: I0209 09:58:15.262864 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-hostproc\") pod \"47d3f462-acfb-483f-b53f-b918c2d86b4a\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " Feb 9 09:58:15.263627 kubelet[2125]: I0209 09:58:15.262921 
2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47d3f462-acfb-483f-b53f-b918c2d86b4a-cilium-config-path\") pod \"47d3f462-acfb-483f-b53f-b918c2d86b4a\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " Feb 9 09:58:15.263627 kubelet[2125]: I0209 09:58:15.262997 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "47d3f462-acfb-483f-b53f-b918c2d86b4a" (UID: "47d3f462-acfb-483f-b53f-b918c2d86b4a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:58:15.263627 kubelet[2125]: I0209 09:58:15.263036 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "47d3f462-acfb-483f-b53f-b918c2d86b4a" (UID: "47d3f462-acfb-483f-b53f-b918c2d86b4a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:58:15.263627 kubelet[2125]: I0209 09:58:15.263007 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "47d3f462-acfb-483f-b53f-b918c2d86b4a" (UID: "47d3f462-acfb-483f-b53f-b918c2d86b4a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:58:15.263809 kubelet[2125]: I0209 09:58:15.263358 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-cni-path\") pod \"47d3f462-acfb-483f-b53f-b918c2d86b4a\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " Feb 9 09:58:15.263809 kubelet[2125]: I0209 09:58:15.263498 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-xtables-lock\") pod \"47d3f462-acfb-483f-b53f-b918c2d86b4a\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " Feb 9 09:58:15.263809 kubelet[2125]: I0209 09:58:15.263520 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-host-proc-sys-kernel\") pod \"47d3f462-acfb-483f-b53f-b918c2d86b4a\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " Feb 9 09:58:15.263809 kubelet[2125]: I0209 09:58:15.263538 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-bpf-maps\") pod \"47d3f462-acfb-483f-b53f-b918c2d86b4a\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " Feb 9 09:58:15.263809 kubelet[2125]: I0209 09:58:15.263572 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-lib-modules\") pod \"47d3f462-acfb-483f-b53f-b918c2d86b4a\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " Feb 9 09:58:15.263809 kubelet[2125]: I0209 09:58:15.263595 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s66l5\" (UniqueName: 
\"kubernetes.io/projected/36fe4c5c-672b-4a1c-a61e-0f299f1d6041-kube-api-access-s66l5\") pod \"36fe4c5c-672b-4a1c-a61e-0f299f1d6041\" (UID: \"36fe4c5c-672b-4a1c-a61e-0f299f1d6041\") " Feb 9 09:58:15.263939 kubelet[2125]: I0209 09:58:15.263613 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-etc-cni-netd\") pod \"47d3f462-acfb-483f-b53f-b918c2d86b4a\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " Feb 9 09:58:15.263939 kubelet[2125]: I0209 09:58:15.263646 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47d3f462-acfb-483f-b53f-b918c2d86b4a-clustermesh-secrets\") pod \"47d3f462-acfb-483f-b53f-b918c2d86b4a\" (UID: \"47d3f462-acfb-483f-b53f-b918c2d86b4a\") " Feb 9 09:58:15.263939 kubelet[2125]: I0209 09:58:15.263683 2125 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:15.263939 kubelet[2125]: I0209 09:58:15.263694 2125 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:15.263939 kubelet[2125]: I0209 09:58:15.263704 2125 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:15.263939 kubelet[2125]: I0209 09:58:15.263848 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "47d3f462-acfb-483f-b53f-b918c2d86b4a" (UID: "47d3f462-acfb-483f-b53f-b918c2d86b4a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:58:15.264116 kubelet[2125]: I0209 09:58:15.263875 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-hostproc" (OuterVolumeSpecName: "hostproc") pod "47d3f462-acfb-483f-b53f-b918c2d86b4a" (UID: "47d3f462-acfb-483f-b53f-b918c2d86b4a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:58:15.266006 kubelet[2125]: W0209 09:58:15.264191 2125 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/36fe4c5c-672b-4a1c-a61e-0f299f1d6041/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:58:15.266006 kubelet[2125]: I0209 09:58:15.264277 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-cni-path" (OuterVolumeSpecName: "cni-path") pod "47d3f462-acfb-483f-b53f-b918c2d86b4a" (UID: "47d3f462-acfb-483f-b53f-b918c2d86b4a"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:58:15.266006 kubelet[2125]: I0209 09:58:15.264307 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "47d3f462-acfb-483f-b53f-b918c2d86b4a" (UID: "47d3f462-acfb-483f-b53f-b918c2d86b4a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:58:15.266006 kubelet[2125]: I0209 09:58:15.264324 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "47d3f462-acfb-483f-b53f-b918c2d86b4a" (UID: "47d3f462-acfb-483f-b53f-b918c2d86b4a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:58:15.266006 kubelet[2125]: I0209 09:58:15.264339 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "47d3f462-acfb-483f-b53f-b918c2d86b4a" (UID: "47d3f462-acfb-483f-b53f-b918c2d86b4a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:58:15.266225 kubelet[2125]: I0209 09:58:15.264668 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "47d3f462-acfb-483f-b53f-b918c2d86b4a" (UID: "47d3f462-acfb-483f-b53f-b918c2d86b4a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:58:15.266225 kubelet[2125]: W0209 09:58:15.264193 2125 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/47d3f462-acfb-483f-b53f-b918c2d86b4a/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:58:15.268051 kubelet[2125]: I0209 09:58:15.266808 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36fe4c5c-672b-4a1c-a61e-0f299f1d6041-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "36fe4c5c-672b-4a1c-a61e-0f299f1d6041" (UID: "36fe4c5c-672b-4a1c-a61e-0f299f1d6041"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:58:15.268051 kubelet[2125]: I0209 09:58:15.266861 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47d3f462-acfb-483f-b53f-b918c2d86b4a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "47d3f462-acfb-483f-b53f-b918c2d86b4a" (UID: "47d3f462-acfb-483f-b53f-b918c2d86b4a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:58:15.268051 kubelet[2125]: I0209 09:58:15.266964 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47d3f462-acfb-483f-b53f-b918c2d86b4a-kube-api-access-lbpjd" (OuterVolumeSpecName: "kube-api-access-lbpjd") pod "47d3f462-acfb-483f-b53f-b918c2d86b4a" (UID: "47d3f462-acfb-483f-b53f-b918c2d86b4a"). InnerVolumeSpecName "kube-api-access-lbpjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:58:15.268699 kubelet[2125]: I0209 09:58:15.268672 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47d3f462-acfb-483f-b53f-b918c2d86b4a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "47d3f462-acfb-483f-b53f-b918c2d86b4a" (UID: "47d3f462-acfb-483f-b53f-b918c2d86b4a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:58:15.268798 kubelet[2125]: I0209 09:58:15.268705 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47d3f462-acfb-483f-b53f-b918c2d86b4a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "47d3f462-acfb-483f-b53f-b918c2d86b4a" (UID: "47d3f462-acfb-483f-b53f-b918c2d86b4a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:58:15.269140 kubelet[2125]: I0209 09:58:15.269110 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36fe4c5c-672b-4a1c-a61e-0f299f1d6041-kube-api-access-s66l5" (OuterVolumeSpecName: "kube-api-access-s66l5") pod "36fe4c5c-672b-4a1c-a61e-0f299f1d6041" (UID: "36fe4c5c-672b-4a1c-a61e-0f299f1d6041"). InnerVolumeSpecName "kube-api-access-s66l5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:58:15.364513 kubelet[2125]: I0209 09:58:15.364465 2125 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47d3f462-acfb-483f-b53f-b918c2d86b4a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:15.364513 kubelet[2125]: I0209 09:58:15.364499 2125 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/47d3f462-acfb-483f-b53f-b918c2d86b4a-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:15.364513 kubelet[2125]: I0209 09:58:15.364510 2125 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:15.364513 kubelet[2125]: I0209 09:58:15.364520 2125 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:15.364513 kubelet[2125]: I0209 09:58:15.364530 2125 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:15.364818 kubelet[2125]: I0209 09:58:15.364539 2125 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:15.364818 kubelet[2125]: I0209 09:58:15.364548 2125 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:15.364818 kubelet[2125]: I0209 09:58:15.364557 2125 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:15.364818 kubelet[2125]: 
I0209 09:58:15.364566 2125 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47d3f462-acfb-483f-b53f-b918c2d86b4a-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:15.364818 kubelet[2125]: I0209 09:58:15.364576 2125 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-s66l5\" (UniqueName: \"kubernetes.io/projected/36fe4c5c-672b-4a1c-a61e-0f299f1d6041-kube-api-access-s66l5\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:15.364818 kubelet[2125]: I0209 09:58:15.364585 2125 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47d3f462-acfb-483f-b53f-b918c2d86b4a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:15.364818 kubelet[2125]: I0209 09:58:15.364594 2125 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-lbpjd\" (UniqueName: \"kubernetes.io/projected/47d3f462-acfb-483f-b53f-b918c2d86b4a-kube-api-access-lbpjd\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:15.364818 kubelet[2125]: I0209 09:58:15.364603 2125 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36fe4c5c-672b-4a1c-a61e-0f299f1d6041-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:15.549184 kubelet[2125]: I0209 09:58:15.549145 2125 scope.go:115] "RemoveContainer" containerID="06851315dc4995f1743d6fb3150eeeaf4f36b82cc41f49cc3dc894e10bf74da2" Feb 9 09:58:15.550898 env[1209]: time="2024-02-09T09:58:15.550850327Z" level=info msg="RemoveContainer for \"06851315dc4995f1743d6fb3150eeeaf4f36b82cc41f49cc3dc894e10bf74da2\"" Feb 9 09:58:15.554807 env[1209]: time="2024-02-09T09:58:15.554771620Z" level=info msg="RemoveContainer for \"06851315dc4995f1743d6fb3150eeeaf4f36b82cc41f49cc3dc894e10bf74da2\" returns successfully" Feb 9 09:58:15.558410 kubelet[2125]: I0209 09:58:15.556253 2125 scope.go:115] "RemoveContainer" containerID="06851315dc4995f1743d6fb3150eeeaf4f36b82cc41f49cc3dc894e10bf74da2" Feb 9 09:58:15.558410 kubelet[2125]: E0209 09:58:15.556807 2125 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"06851315dc4995f1743d6fb3150eeeaf4f36b82cc41f49cc3dc894e10bf74da2\": not found" containerID="06851315dc4995f1743d6fb3150eeeaf4f36b82cc41f49cc3dc894e10bf74da2" Feb 9 09:58:15.558410 kubelet[2125]: I0209 09:58:15.556837 2125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:06851315dc4995f1743d6fb3150eeeaf4f36b82cc41f49cc3dc894e10bf74da2} err="failed to get container status \"06851315dc4995f1743d6fb3150eeeaf4f36b82cc41f49cc3dc894e10bf74da2\": rpc error: code = NotFound desc = an error occurred when try to find container \"06851315dc4995f1743d6fb3150eeeaf4f36b82cc41f49cc3dc894e10bf74da2\": not found" Feb 9 09:58:15.558410 kubelet[2125]: I0209 09:58:15.556848 2125 scope.go:115] "RemoveContainer" containerID="16fab016f5e06dbecfca6bbf7d76463191645da5d9516576c4e8336be35974fe" Feb 9 09:58:15.560119 env[1209]: time="2024-02-09T09:58:15.556410866Z" level=error msg="ContainerStatus for \"06851315dc4995f1743d6fb3150eeeaf4f36b82cc41f49cc3dc894e10bf74da2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"06851315dc4995f1743d6fb3150eeeaf4f36b82cc41f49cc3dc894e10bf74da2\": not found" Feb 9 09:58:15.560119 env[1209]: time="2024-02-09T09:58:15.557663190Z" level=info 
msg="RemoveContainer for \"16fab016f5e06dbecfca6bbf7d76463191645da5d9516576c4e8336be35974fe\"" Feb 9 09:58:15.561675 env[1209]: time="2024-02-09T09:58:15.560680480Z" level=info msg="RemoveContainer for \"16fab016f5e06dbecfca6bbf7d76463191645da5d9516576c4e8336be35974fe\" returns successfully" Feb 9 09:58:15.561746 kubelet[2125]: I0209 09:58:15.560838 2125 scope.go:115] "RemoveContainer" containerID="45a8ae16b750628741d65df632404493bfdfab7b0504936fb18fe655d618c720" Feb 9 09:58:15.561781 env[1209]: time="2024-02-09T09:58:15.561719363Z" level=info msg="RemoveContainer for \"45a8ae16b750628741d65df632404493bfdfab7b0504936fb18fe655d618c720\"" Feb 9 09:58:15.565547 env[1209]: time="2024-02-09T09:58:15.565513496Z" level=info msg="RemoveContainer for \"45a8ae16b750628741d65df632404493bfdfab7b0504936fb18fe655d618c720\" returns successfully" Feb 9 09:58:15.567233 kubelet[2125]: I0209 09:58:15.567154 2125 scope.go:115] "RemoveContainer" containerID="96bbae166244bbc16d6ecf3933d2bc68667c793fae96b0180db0f26c90bfe938" Feb 9 09:58:15.572210 env[1209]: time="2024-02-09T09:58:15.572177398Z" level=info msg="RemoveContainer for \"96bbae166244bbc16d6ecf3933d2bc68667c793fae96b0180db0f26c90bfe938\"" Feb 9 09:58:15.574455 env[1209]: time="2024-02-09T09:58:15.574422245Z" level=info msg="RemoveContainer for \"96bbae166244bbc16d6ecf3933d2bc68667c793fae96b0180db0f26c90bfe938\" returns successfully" Feb 9 09:58:15.574569 kubelet[2125]: I0209 09:58:15.574552 2125 scope.go:115] "RemoveContainer" containerID="82c667f1420225cd8afa7781bad7a798f4247b43e2e21e44b0232b9114de5354" Feb 9 09:58:15.575424 env[1209]: time="2024-02-09T09:58:15.575399728Z" level=info msg="RemoveContainer for \"82c667f1420225cd8afa7781bad7a798f4247b43e2e21e44b0232b9114de5354\"" Feb 9 09:58:15.577741 env[1209]: time="2024-02-09T09:58:15.577682536Z" level=info msg="RemoveContainer for \"82c667f1420225cd8afa7781bad7a798f4247b43e2e21e44b0232b9114de5354\" returns successfully" Feb 9 09:58:15.577996 kubelet[2125]: I0209 09:58:15.577957 2125 scope.go:115] "RemoveContainer" containerID="194649c85017a8ad81f086614c7919ba8f8d364bc12995639f0b326077358033" Feb 9 09:58:15.578870 env[1209]: time="2024-02-09T09:58:15.578845019Z" level=info msg="RemoveContainer for \"194649c85017a8ad81f086614c7919ba8f8d364bc12995639f0b326077358033\"" Feb 9 09:58:15.582973 env[1209]: time="2024-02-09T09:58:15.582941633Z" level=info msg="RemoveContainer for \"194649c85017a8ad81f086614c7919ba8f8d364bc12995639f0b326077358033\" returns successfully" Feb 9 09:58:15.583241 kubelet[2125]: I0209 09:58:15.583217 2125 scope.go:115] "RemoveContainer" containerID="16fab016f5e06dbecfca6bbf7d76463191645da5d9516576c4e8336be35974fe" Feb 9 09:58:15.583564 env[1209]: time="2024-02-09T09:58:15.583506035Z" level=error msg="ContainerStatus for \"16fab016f5e06dbecfca6bbf7d76463191645da5d9516576c4e8336be35974fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"16fab016f5e06dbecfca6bbf7d76463191645da5d9516576c4e8336be35974fe\": not found" Feb 9 09:58:15.583777 kubelet[2125]: E0209 09:58:15.583679 2125 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"16fab016f5e06dbecfca6bbf7d76463191645da5d9516576c4e8336be35974fe\": not found" containerID="16fab016f5e06dbecfca6bbf7d76463191645da5d9516576c4e8336be35974fe" Feb 9 09:58:15.583777 kubelet[2125]: I0209 09:58:15.583727 2125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd 
ID:16fab016f5e06dbecfca6bbf7d76463191645da5d9516576c4e8336be35974fe} err="failed to get container status \"16fab016f5e06dbecfca6bbf7d76463191645da5d9516576c4e8336be35974fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"16fab016f5e06dbecfca6bbf7d76463191645da5d9516576c4e8336be35974fe\": not found" Feb 9 09:58:15.583777 kubelet[2125]: I0209 09:58:15.583738 2125 scope.go:115] "RemoveContainer" containerID="45a8ae16b750628741d65df632404493bfdfab7b0504936fb18fe655d618c720" Feb 9 09:58:15.584326 env[1209]: time="2024-02-09T09:58:15.584272277Z" level=error msg="ContainerStatus for \"45a8ae16b750628741d65df632404493bfdfab7b0504936fb18fe655d618c720\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"45a8ae16b750628741d65df632404493bfdfab7b0504936fb18fe655d618c720\": not found" Feb 9 09:58:15.584529 kubelet[2125]: E0209 09:58:15.584496 2125 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"45a8ae16b750628741d65df632404493bfdfab7b0504936fb18fe655d618c720\": not found" containerID="45a8ae16b750628741d65df632404493bfdfab7b0504936fb18fe655d618c720" Feb 9 09:58:15.584573 kubelet[2125]: I0209 09:58:15.584545 2125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:45a8ae16b750628741d65df632404493bfdfab7b0504936fb18fe655d618c720} err="failed to get container status \"45a8ae16b750628741d65df632404493bfdfab7b0504936fb18fe655d618c720\": rpc error: code = NotFound desc = an error occurred when try to find container \"45a8ae16b750628741d65df632404493bfdfab7b0504936fb18fe655d618c720\": not found" Feb 9 09:58:15.584573 kubelet[2125]: I0209 09:58:15.584558 2125 scope.go:115] "RemoveContainer" containerID="96bbae166244bbc16d6ecf3933d2bc68667c793fae96b0180db0f26c90bfe938" Feb 9 09:58:15.584848 env[1209]: time="2024-02-09T09:58:15.584763719Z" level=error msg="ContainerStatus for \"96bbae166244bbc16d6ecf3933d2bc68667c793fae96b0180db0f26c90bfe938\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"96bbae166244bbc16d6ecf3933d2bc68667c793fae96b0180db0f26c90bfe938\": not found" Feb 9 09:58:15.585171 kubelet[2125]: E0209 09:58:15.585153 2125 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"96bbae166244bbc16d6ecf3933d2bc68667c793fae96b0180db0f26c90bfe938\": not found" containerID="96bbae166244bbc16d6ecf3933d2bc68667c793fae96b0180db0f26c90bfe938" Feb 9 09:58:15.585283 kubelet[2125]: I0209 09:58:15.585180 2125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:96bbae166244bbc16d6ecf3933d2bc68667c793fae96b0180db0f26c90bfe938} err="failed to get container status \"96bbae166244bbc16d6ecf3933d2bc68667c793fae96b0180db0f26c90bfe938\": rpc error: code = NotFound desc = an error occurred when try to find container \"96bbae166244bbc16d6ecf3933d2bc68667c793fae96b0180db0f26c90bfe938\": not found" Feb 9 09:58:15.585283 kubelet[2125]: I0209 09:58:15.585190 2125 scope.go:115] "RemoveContainer" containerID="82c667f1420225cd8afa7781bad7a798f4247b43e2e21e44b0232b9114de5354" Feb 9 09:58:15.585283 kubelet[2125]: E0209 09:58:15.585525 2125 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"82c667f1420225cd8afa7781bad7a798f4247b43e2e21e44b0232b9114de5354\": not found" containerID="82c667f1420225cd8afa7781bad7a798f4247b43e2e21e44b0232b9114de5354" Feb 9 09:58:15.585283 kubelet[2125]: I0209 09:58:15.585552 2125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:82c667f1420225cd8afa7781bad7a798f4247b43e2e21e44b0232b9114de5354} err="failed to get container status \"82c667f1420225cd8afa7781bad7a798f4247b43e2e21e44b0232b9114de5354\": rpc error: code = NotFound desc = an error occurred when try to find container \"82c667f1420225cd8afa7781bad7a798f4247b43e2e21e44b0232b9114de5354\": not found" Feb 9 09:58:15.585283 kubelet[2125]: I0209 09:58:15.585561 2125 scope.go:115] "RemoveContainer" containerID="194649c85017a8ad81f086614c7919ba8f8d364bc12995639f0b326077358033" Feb 9 09:58:15.585283 kubelet[2125]: E0209 09:58:15.585903 2125 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"194649c85017a8ad81f086614c7919ba8f8d364bc12995639f0b326077358033\": not found" containerID="194649c85017a8ad81f086614c7919ba8f8d364bc12995639f0b326077358033" Feb 9 09:58:15.611295 kubelet[2125]: I0209 09:58:15.585944 2125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:194649c85017a8ad81f086614c7919ba8f8d364bc12995639f0b326077358033} err="failed to get container status \"194649c85017a8ad81f086614c7919ba8f8d364bc12995639f0b326077358033\": rpc error: code = NotFound desc = an error occurred when try to find container \"194649c85017a8ad81f086614c7919ba8f8d364bc12995639f0b326077358033\": not found" Feb 9 09:58:15.611329 env[1209]: time="2024-02-09T09:58:15.585358761Z" level=error msg="ContainerStatus for \"82c667f1420225cd8afa7781bad7a798f4247b43e2e21e44b0232b9114de5354\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"82c667f1420225cd8afa7781bad7a798f4247b43e2e21e44b0232b9114de5354\": not found" Feb 9 09:58:15.611329 env[1209]: time="2024-02-09T09:58:15.585762322Z" level=error msg="ContainerStatus for \"194649c85017a8ad81f086614c7919ba8f8d364bc12995639f0b326077358033\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"194649c85017a8ad81f086614c7919ba8f8d364bc12995639f0b326077358033\": not found" Feb 9 09:58:15.990848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16fab016f5e06dbecfca6bbf7d76463191645da5d9516576c4e8336be35974fe-rootfs.mount: Deactivated successfully. Feb 9 09:58:15.991027 systemd[1]: var-lib-kubelet-pods-36fe4c5c\x2d672b\x2d4a1c\x2da61e\x2d0f299f1d6041-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds66l5.mount: Deactivated successfully. Feb 9 09:58:15.991139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d8b58d6f87a7460901ad75fb309a4e4636becccf8a4248cb4f55e620d897c25-rootfs.mount: Deactivated successfully. Feb 9 09:58:15.991239 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d8b58d6f87a7460901ad75fb309a4e4636becccf8a4248cb4f55e620d897c25-shm.mount: Deactivated successfully. Feb 9 09:58:15.991324 systemd[1]: var-lib-kubelet-pods-47d3f462\x2dacfb\x2d483f\x2db53f\x2db918c2d86b4a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlbpjd.mount: Deactivated successfully. 
Feb 9 09:58:15.991409 systemd[1]: var-lib-kubelet-pods-47d3f462\x2dacfb\x2d483f\x2db53f\x2db918c2d86b4a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 09:58:15.991488 systemd[1]: var-lib-kubelet-pods-47d3f462\x2dacfb\x2d483f\x2db53f\x2db918c2d86b4a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 09:58:16.936612 sshd[3883]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:16.939134 systemd[1]: Started sshd@23-10.0.0.79:22-10.0.0.1:49552.service. Feb 9 09:58:16.939681 systemd[1]: sshd@22-10.0.0.79:22-10.0.0.1:49546.service: Deactivated successfully. Feb 9 09:58:16.940835 systemd-logind[1197]: Session 23 logged out. Waiting for processes to exit. Feb 9 09:58:16.940903 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 09:58:16.941894 systemd-logind[1197]: Removed session 23. Feb 9 09:58:16.976995 sshd[4050]: Accepted publickey for core from 10.0.0.1 port 49552 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:58:16.978692 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:16.982096 systemd-logind[1197]: New session 24 of user core. Feb 9 09:58:16.982952 systemd[1]: Started session-24.scope. Feb 9 09:58:17.372877 kubelet[2125]: I0209 09:58:17.372842 2125 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=36fe4c5c-672b-4a1c-a61e-0f299f1d6041 path="/var/lib/kubelet/pods/36fe4c5c-672b-4a1c-a61e-0f299f1d6041/volumes" Feb 9 09:58:17.373323 kubelet[2125]: I0209 09:58:17.373294 2125 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=47d3f462-acfb-483f-b53f-b918c2d86b4a path="/var/lib/kubelet/pods/47d3f462-acfb-483f-b53f-b918c2d86b4a/volumes" Feb 9 09:58:17.696684 systemd[1]: Started sshd@24-10.0.0.79:22-10.0.0.1:49566.service. 
Feb 9 09:58:17.706267 sshd[4050]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:17.711161 kubelet[2125]: I0209 09:58:17.707101 2125 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:58:17.711161 kubelet[2125]: E0209 09:58:17.707150 2125 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47d3f462-acfb-483f-b53f-b918c2d86b4a" containerName="mount-bpf-fs" Feb 9 09:58:17.711161 kubelet[2125]: E0209 09:58:17.707159 2125 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47d3f462-acfb-483f-b53f-b918c2d86b4a" containerName="cilium-agent" Feb 9 09:58:17.711161 kubelet[2125]: E0209 09:58:17.707166 2125 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47d3f462-acfb-483f-b53f-b918c2d86b4a" containerName="mount-cgroup" Feb 9 09:58:17.711161 kubelet[2125]: E0209 09:58:17.707172 2125 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47d3f462-acfb-483f-b53f-b918c2d86b4a" containerName="apply-sysctl-overwrites" Feb 9 09:58:17.711161 kubelet[2125]: E0209 09:58:17.707178 2125 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="36fe4c5c-672b-4a1c-a61e-0f299f1d6041" containerName="cilium-operator" Feb 9 09:58:17.711161 kubelet[2125]: E0209 09:58:17.707185 2125 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47d3f462-acfb-483f-b53f-b918c2d86b4a" containerName="clean-cilium-state" Feb 9 09:58:17.711161 kubelet[2125]: I0209 09:58:17.708217 2125 memory_manager.go:346] "RemoveStaleState removing state" podUID="47d3f462-acfb-483f-b53f-b918c2d86b4a" containerName="cilium-agent" Feb 9 09:58:17.711161 kubelet[2125]: I0209 09:58:17.708242 2125 memory_manager.go:346] "RemoveStaleState removing state" podUID="36fe4c5c-672b-4a1c-a61e-0f299f1d6041" containerName="cilium-operator" Feb 9 09:58:17.733673 systemd[1]: sshd@23-10.0.0.79:22-10.0.0.1:49552.service: Deactivated successfully. Feb 9 09:58:17.735696 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 09:58:17.736314 systemd-logind[1197]: Session 24 logged out. Waiting for processes to exit. Feb 9 09:58:17.737424 systemd-logind[1197]: Removed session 24. Feb 9 09:58:17.755842 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 49566 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:58:17.757113 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:17.761354 systemd[1]: Started session-25.scope. Feb 9 09:58:17.761635 systemd-logind[1197]: New session 25 of user core. 
Feb 9 09:58:17.878111 kubelet[2125]: I0209 09:58:17.878012 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-cilium-run\") pod \"cilium-5qk7v\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " pod="kube-system/cilium-5qk7v" Feb 9 09:58:17.878111 kubelet[2125]: I0209 09:58:17.878072 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-bpf-maps\") pod \"cilium-5qk7v\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " pod="kube-system/cilium-5qk7v" Feb 9 09:58:17.878304 kubelet[2125]: I0209 09:58:17.878133 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq4th\" (UniqueName: \"kubernetes.io/projected/7fe61996-f77d-4a14-9f33-f18bd2641718-kube-api-access-wq4th\") pod \"cilium-5qk7v\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " pod="kube-system/cilium-5qk7v" Feb 9 09:58:17.878304 kubelet[2125]: I0209 09:58:17.878192 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-cilium-cgroup\") pod \"cilium-5qk7v\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " pod="kube-system/cilium-5qk7v" Feb 9 09:58:17.878304 kubelet[2125]: I0209 09:58:17.878225 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-cni-path\") pod \"cilium-5qk7v\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " pod="kube-system/cilium-5qk7v" Feb 9 09:58:17.878304 kubelet[2125]: I0209 09:58:17.878264 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7fe61996-f77d-4a14-9f33-f18bd2641718-clustermesh-secrets\") pod \"cilium-5qk7v\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " pod="kube-system/cilium-5qk7v" Feb 9 09:58:17.878304 kubelet[2125]: I0209 09:58:17.878303 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-host-proc-sys-net\") pod \"cilium-5qk7v\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " pod="kube-system/cilium-5qk7v" Feb 9 09:58:17.878492 kubelet[2125]: I0209 09:58:17.878335 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-etc-cni-netd\") pod \"cilium-5qk7v\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " pod="kube-system/cilium-5qk7v" Feb 9 09:58:17.878492 kubelet[2125]: I0209 09:58:17.878360 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-lib-modules\") pod \"cilium-5qk7v\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " pod="kube-system/cilium-5qk7v" Feb 9 09:58:17.878492 kubelet[2125]: I0209 09:58:17.878390 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-host-proc-sys-kernel\") pod \"cilium-5qk7v\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " pod="kube-system/cilium-5qk7v" Feb 9 09:58:17.878492 kubelet[2125]: I0209 09:58:17.878414 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-hostproc\") pod \"cilium-5qk7v\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " pod="kube-system/cilium-5qk7v" Feb 9 09:58:17.878492 kubelet[2125]: I0209 09:58:17.878435 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fe61996-f77d-4a14-9f33-f18bd2641718-cilium-config-path\") pod \"cilium-5qk7v\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " pod="kube-system/cilium-5qk7v" Feb 9 09:58:17.878492 kubelet[2125]: I0209 09:58:17.878455 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7fe61996-f77d-4a14-9f33-f18bd2641718-cilium-ipsec-secrets\") pod \"cilium-5qk7v\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " pod="kube-system/cilium-5qk7v" Feb 9 09:58:17.878622 kubelet[2125]: I0209 09:58:17.878474 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7fe61996-f77d-4a14-9f33-f18bd2641718-hubble-tls\") pod \"cilium-5qk7v\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " pod="kube-system/cilium-5qk7v" Feb 9 09:58:17.878622 kubelet[2125]: I0209 09:58:17.878498 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-xtables-lock\") pod \"cilium-5qk7v\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " pod="kube-system/cilium-5qk7v" Feb 9 09:58:17.884196 sshd[4064]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:17.885245 systemd[1]: Started sshd@25-10.0.0.79:22-10.0.0.1:49576.service. Feb 9 09:58:17.893725 systemd-logind[1197]: Session 25 logged out. Waiting for processes to exit. Feb 9 09:58:17.894393 systemd[1]: sshd@24-10.0.0.79:22-10.0.0.1:49566.service: Deactivated successfully. Feb 9 09:58:17.896176 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 09:58:17.897227 systemd-logind[1197]: Removed session 25. Feb 9 09:58:17.923793 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 49576 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:58:17.927805 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:17.932730 systemd[1]: Started session-26.scope. Feb 9 09:58:17.933052 systemd-logind[1197]: New session 26 of user core. 
Feb 9 09:58:18.035708 kubelet[2125]: E0209 09:58:18.035668 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:58:18.036626 env[1209]: time="2024-02-09T09:58:18.036206415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5qk7v,Uid:7fe61996-f77d-4a14-9f33-f18bd2641718,Namespace:kube-system,Attempt:0,}" Feb 9 09:58:18.051474 env[1209]: time="2024-02-09T09:58:18.051397142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:58:18.051597 env[1209]: time="2024-02-09T09:58:18.051483382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:58:18.051597 env[1209]: time="2024-02-09T09:58:18.051510222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:58:18.051759 env[1209]: time="2024-02-09T09:58:18.051716743Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/452eb23cfa6dcd88a55c250c9577b8dabbff7e9f98588250bb902c8b48c17488 pid=4103 runtime=io.containerd.runc.v2 Feb 9 09:58:18.108901 env[1209]: time="2024-02-09T09:58:18.108859597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5qk7v,Uid:7fe61996-f77d-4a14-9f33-f18bd2641718,Namespace:kube-system,Attempt:0,} returns sandbox id \"452eb23cfa6dcd88a55c250c9577b8dabbff7e9f98588250bb902c8b48c17488\"" Feb 9 09:58:18.109527 kubelet[2125]: E0209 09:58:18.109507 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:58:18.112844 env[1209]: time="2024-02-09T09:58:18.112798769Z" level=info msg="CreateContainer within sandbox \"452eb23cfa6dcd88a55c250c9577b8dabbff7e9f98588250bb902c8b48c17488\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:58:18.123111 env[1209]: time="2024-02-09T09:58:18.122962401Z" level=info msg="CreateContainer within sandbox \"452eb23cfa6dcd88a55c250c9577b8dabbff7e9f98588250bb902c8b48c17488\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f80bc413008432494a7b61890c3065ef2f013b386b743183c2e40d2c03a9a21e\"" Feb 9 09:58:18.123920 env[1209]: time="2024-02-09T09:58:18.123869323Z" level=info msg="StartContainer for \"f80bc413008432494a7b61890c3065ef2f013b386b743183c2e40d2c03a9a21e\"" Feb 9 09:58:18.178511 env[1209]: time="2024-02-09T09:58:18.178463730Z" level=info msg="StartContainer for \"f80bc413008432494a7b61890c3065ef2f013b386b743183c2e40d2c03a9a21e\" returns successfully" Feb 9 09:58:18.211661 env[1209]: time="2024-02-09T09:58:18.211616752Z" level=info msg="shim disconnected" id=f80bc413008432494a7b61890c3065ef2f013b386b743183c2e40d2c03a9a21e Feb 9 09:58:18.211906 env[1209]: time="2024-02-09T09:58:18.211885073Z" level=warning msg="cleaning up after shim disconnected" id=f80bc413008432494a7b61890c3065ef2f013b386b743183c2e40d2c03a9a21e namespace=k8s.io Feb 9 09:58:18.211997 env[1209]: time="2024-02-09T09:58:18.211974033Z" level=info msg="cleaning up dead shim" Feb 9 09:58:18.219106 env[1209]: time="2024-02-09T09:58:18.219070735Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:58:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4185 
runtime=io.containerd.runc.v2\n" Feb 9 09:58:18.561390 env[1209]: time="2024-02-09T09:58:18.561344062Z" level=info msg="StopPodSandbox for \"452eb23cfa6dcd88a55c250c9577b8dabbff7e9f98588250bb902c8b48c17488\"" Feb 9 09:58:18.561539 env[1209]: time="2024-02-09T09:58:18.561402942Z" level=info msg="Container to stop \"f80bc413008432494a7b61890c3065ef2f013b386b743183c2e40d2c03a9a21e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:58:18.589315 env[1209]: time="2024-02-09T09:58:18.589005186Z" level=info msg="shim disconnected" id=452eb23cfa6dcd88a55c250c9577b8dabbff7e9f98588250bb902c8b48c17488 Feb 9 09:58:18.589315 env[1209]: time="2024-02-09T09:58:18.589059026Z" level=warning msg="cleaning up after shim disconnected" id=452eb23cfa6dcd88a55c250c9577b8dabbff7e9f98588250bb902c8b48c17488 namespace=k8s.io Feb 9 09:58:18.589315 env[1209]: time="2024-02-09T09:58:18.589070426Z" level=info msg="cleaning up dead shim" Feb 9 09:58:18.596591 env[1209]: time="2024-02-09T09:58:18.596552729Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:58:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4217 runtime=io.containerd.runc.v2\n" Feb 9 09:58:18.596882 env[1209]: time="2024-02-09T09:58:18.596858210Z" level=info msg="TearDown network for sandbox \"452eb23cfa6dcd88a55c250c9577b8dabbff7e9f98588250bb902c8b48c17488\" successfully" Feb 9 09:58:18.596924 env[1209]: time="2024-02-09T09:58:18.596884810Z" level=info msg="StopPodSandbox for \"452eb23cfa6dcd88a55c250c9577b8dabbff7e9f98588250bb902c8b48c17488\" returns successfully" Feb 9 09:58:18.786665 kubelet[2125]: I0209 09:58:18.786596 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-lib-modules\") pod \"7fe61996-f77d-4a14-9f33-f18bd2641718\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " Feb 9 09:58:18.787081 kubelet[2125]: I0209 09:58:18.786676 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-cni-path\") pod \"7fe61996-f77d-4a14-9f33-f18bd2641718\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " Feb 9 09:58:18.787081 kubelet[2125]: I0209 09:58:18.786714 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7fe61996-f77d-4a14-9f33-f18bd2641718" (UID: "7fe61996-f77d-4a14-9f33-f18bd2641718"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:58:18.787081 kubelet[2125]: I0209 09:58:18.786725 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-cni-path" (OuterVolumeSpecName: "cni-path") pod "7fe61996-f77d-4a14-9f33-f18bd2641718" (UID: "7fe61996-f77d-4a14-9f33-f18bd2641718"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:58:18.787081 kubelet[2125]: I0209 09:58:18.786762 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq4th\" (UniqueName: \"kubernetes.io/projected/7fe61996-f77d-4a14-9f33-f18bd2641718-kube-api-access-wq4th\") pod \"7fe61996-f77d-4a14-9f33-f18bd2641718\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " Feb 9 09:58:18.787196 kubelet[2125]: I0209 09:58:18.787132 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-xtables-lock\") pod \"7fe61996-f77d-4a14-9f33-f18bd2641718\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " Feb 9 09:58:18.787196 kubelet[2125]: I0209 09:58:18.787154 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-bpf-maps\") pod \"7fe61996-f77d-4a14-9f33-f18bd2641718\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " Feb 9 09:58:18.787246 kubelet[2125]: I0209 09:58:18.787203 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7fe61996-f77d-4a14-9f33-f18bd2641718" (UID: "7fe61996-f77d-4a14-9f33-f18bd2641718"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:58:18.787246 kubelet[2125]: I0209 09:58:18.787215 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fe61996-f77d-4a14-9f33-f18bd2641718-cilium-config-path\") pod \"7fe61996-f77d-4a14-9f33-f18bd2641718\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " Feb 9 09:58:18.787246 kubelet[2125]: I0209 09:58:18.787231 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7fe61996-f77d-4a14-9f33-f18bd2641718" (UID: "7fe61996-f77d-4a14-9f33-f18bd2641718"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:58:18.787342 kubelet[2125]: I0209 09:58:18.787248 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-cilium-run\") pod \"7fe61996-f77d-4a14-9f33-f18bd2641718\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " Feb 9 09:58:18.787342 kubelet[2125]: I0209 09:58:18.787272 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-cilium-cgroup\") pod \"7fe61996-f77d-4a14-9f33-f18bd2641718\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " Feb 9 09:58:18.787342 kubelet[2125]: I0209 09:58:18.787290 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-hostproc\") pod \"7fe61996-f77d-4a14-9f33-f18bd2641718\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " Feb 9 09:58:18.787342 kubelet[2125]: I0209 09:58:18.787315 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7fe61996-f77d-4a14-9f33-f18bd2641718-cilium-ipsec-secrets\") pod \"7fe61996-f77d-4a14-9f33-f18bd2641718\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " Feb 9 09:58:18.787342 kubelet[2125]: I0209 09:58:18.787316 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7fe61996-f77d-4a14-9f33-f18bd2641718" (UID: "7fe61996-f77d-4a14-9f33-f18bd2641718"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:58:18.787342 kubelet[2125]: I0209 09:58:18.787337 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7fe61996-f77d-4a14-9f33-f18bd2641718" (UID: "7fe61996-f77d-4a14-9f33-f18bd2641718"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:58:18.787485 kubelet[2125]: I0209 09:58:18.787340 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7fe61996-f77d-4a14-9f33-f18bd2641718-clustermesh-secrets\") pod \"7fe61996-f77d-4a14-9f33-f18bd2641718\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " Feb 9 09:58:18.787485 kubelet[2125]: I0209 09:58:18.787374 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-host-proc-sys-net\") pod \"7fe61996-f77d-4a14-9f33-f18bd2641718\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " Feb 9 09:58:18.787485 kubelet[2125]: I0209 09:58:18.787394 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-etc-cni-netd\") pod \"7fe61996-f77d-4a14-9f33-f18bd2641718\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " Feb 9 09:58:18.787485 kubelet[2125]: I0209 09:58:18.787415 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-host-proc-sys-kernel\") pod \"7fe61996-f77d-4a14-9f33-f18bd2641718\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " Feb 9 09:58:18.787485 kubelet[2125]: I0209 09:58:18.787435 2125 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7fe61996-f77d-4a14-9f33-f18bd2641718-hubble-tls\") pod \"7fe61996-f77d-4a14-9f33-f18bd2641718\" (UID: \"7fe61996-f77d-4a14-9f33-f18bd2641718\") " Feb 9 09:58:18.787485 kubelet[2125]: I0209 09:58:18.787468 2125 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:18.787485 kubelet[2125]: I0209 09:58:18.787479 2125 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:18.787638 kubelet[2125]: I0209 09:58:18.787488 2125 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:18.787638 kubelet[2125]: I0209 09:58:18.787499 2125 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:18.787638 kubelet[2125]: I0209 09:58:18.787509 2125 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:18.787638 kubelet[2125]: I0209 09:58:18.787520 2125 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:18.787638 kubelet[2125]: I0209 09:58:18.787554 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7fe61996-f77d-4a14-9f33-f18bd2641718" (UID: "7fe61996-f77d-4a14-9f33-f18bd2641718"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:58:18.787638 kubelet[2125]: W0209 09:58:18.787332 2125 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/7fe61996-f77d-4a14-9f33-f18bd2641718/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:58:18.788133 kubelet[2125]: I0209 09:58:18.788041 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-hostproc" (OuterVolumeSpecName: "hostproc") pod "7fe61996-f77d-4a14-9f33-f18bd2641718" (UID: "7fe61996-f77d-4a14-9f33-f18bd2641718"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:58:18.788133 kubelet[2125]: I0209 09:58:18.788092 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7fe61996-f77d-4a14-9f33-f18bd2641718" (UID: "7fe61996-f77d-4a14-9f33-f18bd2641718"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:58:18.788133 kubelet[2125]: I0209 09:58:18.788110 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7fe61996-f77d-4a14-9f33-f18bd2641718" (UID: "7fe61996-f77d-4a14-9f33-f18bd2641718"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:58:18.789565 kubelet[2125]: I0209 09:58:18.789292 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fe61996-f77d-4a14-9f33-f18bd2641718-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7fe61996-f77d-4a14-9f33-f18bd2641718" (UID: "7fe61996-f77d-4a14-9f33-f18bd2641718"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:58:18.790058 kubelet[2125]: I0209 09:58:18.790020 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fe61996-f77d-4a14-9f33-f18bd2641718-kube-api-access-wq4th" (OuterVolumeSpecName: "kube-api-access-wq4th") pod "7fe61996-f77d-4a14-9f33-f18bd2641718" (UID: "7fe61996-f77d-4a14-9f33-f18bd2641718"). InnerVolumeSpecName "kube-api-access-wq4th". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:58:18.790328 kubelet[2125]: I0209 09:58:18.790304 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fe61996-f77d-4a14-9f33-f18bd2641718-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7fe61996-f77d-4a14-9f33-f18bd2641718" (UID: "7fe61996-f77d-4a14-9f33-f18bd2641718"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:58:18.790631 kubelet[2125]: I0209 09:58:18.790585 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fe61996-f77d-4a14-9f33-f18bd2641718-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7fe61996-f77d-4a14-9f33-f18bd2641718" (UID: "7fe61996-f77d-4a14-9f33-f18bd2641718"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:58:18.792051 kubelet[2125]: I0209 09:58:18.792022 2125 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fe61996-f77d-4a14-9f33-f18bd2641718-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "7fe61996-f77d-4a14-9f33-f18bd2641718" (UID: "7fe61996-f77d-4a14-9f33-f18bd2641718"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:58:18.888928 kubelet[2125]: I0209 09:58:18.888069 2125 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7fe61996-f77d-4a14-9f33-f18bd2641718-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:18.888928 kubelet[2125]: I0209 09:58:18.888677 2125 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:18.888928 kubelet[2125]: I0209 09:58:18.888698 2125 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:18.888928 kubelet[2125]: I0209 09:58:18.888711 2125 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:18.888928 kubelet[2125]: I0209 09:58:18.888720 2125 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7fe61996-f77d-4a14-9f33-f18bd2641718-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:18.888928 kubelet[2125]: I0209 09:58:18.888731 2125 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-wq4th\" (UniqueName: \"kubernetes.io/projected/7fe61996-f77d-4a14-9f33-f18bd2641718-kube-api-access-wq4th\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:18.888928 kubelet[2125]: I0209 09:58:18.888746 2125 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fe61996-f77d-4a14-9f33-f18bd2641718-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:18.888928 kubelet[2125]: I0209 09:58:18.888758 2125 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7fe61996-f77d-4a14-9f33-f18bd2641718-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:18.889256 kubelet[2125]: I0209 09:58:18.888768 2125 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7fe61996-f77d-4a14-9f33-f18bd2641718-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 09:58:18.984209 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-452eb23cfa6dcd88a55c250c9577b8dabbff7e9f98588250bb902c8b48c17488-rootfs.mount: Deactivated successfully. Feb 9 09:58:18.984370 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-452eb23cfa6dcd88a55c250c9577b8dabbff7e9f98588250bb902c8b48c17488-shm.mount: Deactivated successfully. Feb 9 09:58:18.984463 systemd[1]: var-lib-kubelet-pods-7fe61996\x2df77d\x2d4a14\x2d9f33\x2df18bd2641718-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwq4th.mount: Deactivated successfully. Feb 9 09:58:18.984562 systemd[1]: var-lib-kubelet-pods-7fe61996\x2df77d\x2d4a14\x2d9f33\x2df18bd2641718-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 09:58:18.984653 systemd[1]: var-lib-kubelet-pods-7fe61996\x2df77d\x2d4a14\x2d9f33\x2df18bd2641718-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 09:58:18.984741 systemd[1]: var-lib-kubelet-pods-7fe61996\x2df77d\x2d4a14\x2d9f33\x2df18bd2641718-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 09:58:19.371830 kubelet[2125]: E0209 09:58:19.371783 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:58:19.413287 kubelet[2125]: E0209 09:58:19.413242 2125 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 09:58:19.565294 kubelet[2125]: I0209 09:58:19.565262 2125 scope.go:115] "RemoveContainer" containerID="f80bc413008432494a7b61890c3065ef2f013b386b743183c2e40d2c03a9a21e" Feb 9 09:58:19.567731 env[1209]: time="2024-02-09T09:58:19.567548099Z" level=info msg="RemoveContainer for \"f80bc413008432494a7b61890c3065ef2f013b386b743183c2e40d2c03a9a21e\"" Feb 9 09:58:19.574597 env[1209]: time="2024-02-09T09:58:19.574460120Z" level=info msg="RemoveContainer for \"f80bc413008432494a7b61890c3065ef2f013b386b743183c2e40d2c03a9a21e\" returns successfully" Feb 9 09:58:19.590535 kubelet[2125]: I0209 09:58:19.590383 2125 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:58:19.590535 kubelet[2125]: E0209 09:58:19.590448 2125 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7fe61996-f77d-4a14-9f33-f18bd2641718" containerName="mount-cgroup" Feb 9 09:58:19.590535 kubelet[2125]: I0209 09:58:19.590473 2125 memory_manager.go:346] "RemoveStaleState removing state" podUID="7fe61996-f77d-4a14-9f33-f18bd2641718" containerName="mount-cgroup" Feb 9 09:58:19.692414 kubelet[2125]: I0209 09:58:19.692307 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1f25799-9f99-4585-96a3-9e2e1ebda0ce-bpf-maps\") pod \"cilium-5prtc\" (UID: \"e1f25799-9f99-4585-96a3-9e2e1ebda0ce\") " pod="kube-system/cilium-5prtc" Feb 9 09:58:19.692414 kubelet[2125]: I0209 09:58:19.692355 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1f25799-9f99-4585-96a3-9e2e1ebda0ce-xtables-lock\") pod \"cilium-5prtc\" (UID: \"e1f25799-9f99-4585-96a3-9e2e1ebda0ce\") " pod="kube-system/cilium-5prtc" Feb 9 09:58:19.692414 kubelet[2125]: I0209 09:58:19.692378 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1f25799-9f99-4585-96a3-9e2e1ebda0ce-cilium-run\") pod \"cilium-5prtc\" (UID: \"e1f25799-9f99-4585-96a3-9e2e1ebda0ce\") " pod="kube-system/cilium-5prtc" Feb 9 09:58:19.692578 kubelet[2125]: I0209 09:58:19.692447 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1f25799-9f99-4585-96a3-9e2e1ebda0ce-hostproc\") pod \"cilium-5prtc\" (UID: \"e1f25799-9f99-4585-96a3-9e2e1ebda0ce\") " pod="kube-system/cilium-5prtc" Feb 9 09:58:19.692578 kubelet[2125]: I0209 09:58:19.692485 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1f25799-9f99-4585-96a3-9e2e1ebda0ce-cilium-cgroup\") pod \"cilium-5prtc\" (UID: \"e1f25799-9f99-4585-96a3-9e2e1ebda0ce\") " pod="kube-system/cilium-5prtc" Feb 9 09:58:19.692578 kubelet[2125]: I0209 09:58:19.692507 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e1f25799-9f99-4585-96a3-9e2e1ebda0ce-cilium-ipsec-secrets\") pod \"cilium-5prtc\" (UID: \"e1f25799-9f99-4585-96a3-9e2e1ebda0ce\") " pod="kube-system/cilium-5prtc" Feb 9 09:58:19.692578 kubelet[2125]: I0209 09:58:19.692528 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1f25799-9f99-4585-96a3-9e2e1ebda0ce-host-proc-sys-net\") pod \"cilium-5prtc\" (UID: \"e1f25799-9f99-4585-96a3-9e2e1ebda0ce\") " pod="kube-system/cilium-5prtc" Feb 9 09:58:19.692680 kubelet[2125]: I0209 09:58:19.692591 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gl7m\" (UniqueName: \"kubernetes.io/projected/e1f25799-9f99-4585-96a3-9e2e1ebda0ce-kube-api-access-6gl7m\") pod \"cilium-5prtc\" (UID: \"e1f25799-9f99-4585-96a3-9e2e1ebda0ce\") " pod="kube-system/cilium-5prtc" Feb 9 09:58:19.692680 kubelet[2125]: I0209 09:58:19.692620 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1f25799-9f99-4585-96a3-9e2e1ebda0ce-etc-cni-netd\") pod \"cilium-5prtc\" (UID: \"e1f25799-9f99-4585-96a3-9e2e1ebda0ce\") " pod="kube-system/cilium-5prtc" Feb 9 09:58:19.692680 kubelet[2125]: I0209 09:58:19.692641 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1f25799-9f99-4585-96a3-9e2e1ebda0ce-cni-path\") pod \"cilium-5prtc\" (UID: \"e1f25799-9f99-4585-96a3-9e2e1ebda0ce\") " pod="kube-system/cilium-5prtc" Feb 9 09:58:19.692680 kubelet[2125]: I0209 09:58:19.692660 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1f25799-9f99-4585-96a3-9e2e1ebda0ce-clustermesh-secrets\") pod \"cilium-5prtc\" (UID: \"e1f25799-9f99-4585-96a3-9e2e1ebda0ce\") " pod="kube-system/cilium-5prtc" Feb 9 09:58:19.692680 kubelet[2125]: I0209 09:58:19.692680 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1f25799-9f99-4585-96a3-9e2e1ebda0ce-host-proc-sys-kernel\") pod \"cilium-5prtc\" (UID: 
\"e1f25799-9f99-4585-96a3-9e2e1ebda0ce\") " pod="kube-system/cilium-5prtc" Feb 9 09:58:19.692680 kubelet[2125]: I0209 09:58:19.692700 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1f25799-9f99-4585-96a3-9e2e1ebda0ce-hubble-tls\") pod \"cilium-5prtc\" (UID: \"e1f25799-9f99-4585-96a3-9e2e1ebda0ce\") " pod="kube-system/cilium-5prtc" Feb 9 09:58:19.693104 kubelet[2125]: I0209 09:58:19.692720 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1f25799-9f99-4585-96a3-9e2e1ebda0ce-lib-modules\") pod \"cilium-5prtc\" (UID: \"e1f25799-9f99-4585-96a3-9e2e1ebda0ce\") " pod="kube-system/cilium-5prtc" Feb 9 09:58:19.693104 kubelet[2125]: I0209 09:58:19.692739 2125 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1f25799-9f99-4585-96a3-9e2e1ebda0ce-cilium-config-path\") pod \"cilium-5prtc\" (UID: \"e1f25799-9f99-4585-96a3-9e2e1ebda0ce\") " pod="kube-system/cilium-5prtc" Feb 9 09:58:19.894521 kubelet[2125]: E0209 09:58:19.894487 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:58:19.895011 env[1209]: time="2024-02-09T09:58:19.894953598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5prtc,Uid:e1f25799-9f99-4585-96a3-9e2e1ebda0ce,Namespace:kube-system,Attempt:0,}" Feb 9 09:58:19.907912 env[1209]: time="2024-02-09T09:58:19.907842596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:58:19.907912 env[1209]: time="2024-02-09T09:58:19.907887996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:58:19.907912 env[1209]: time="2024-02-09T09:58:19.907898396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
Feb 9 09:58:19.908097 env[1209]: time="2024-02-09T09:58:19.908068037Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3efed6ee1ac27cf1c7697a2beaf4e6e7b2260d44aa4e0cb3a4c432c235351c5 pid=4245 runtime=io.containerd.runc.v2
Feb 9 09:58:19.944625 env[1209]: time="2024-02-09T09:58:19.944528386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5prtc,Uid:e1f25799-9f99-4585-96a3-9e2e1ebda0ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3efed6ee1ac27cf1c7697a2beaf4e6e7b2260d44aa4e0cb3a4c432c235351c5\""
Feb 9 09:58:19.945632 kubelet[2125]: E0209 09:58:19.945603 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:58:19.947365 env[1209]: time="2024-02-09T09:58:19.947318634Z" level=info msg="CreateContainer within sandbox \"d3efed6ee1ac27cf1c7697a2beaf4e6e7b2260d44aa4e0cb3a4c432c235351c5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 09:58:19.956385 env[1209]: time="2024-02-09T09:58:19.956343301Z" level=info msg="CreateContainer within sandbox \"d3efed6ee1ac27cf1c7697a2beaf4e6e7b2260d44aa4e0cb3a4c432c235351c5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d25a831313a93cff43d0c7608153de88742590fd71300e3687c98410bf9f678e\""
Feb 9 09:58:19.956903 env[1209]: time="2024-02-09T09:58:19.956863543Z" level=info msg="StartContainer for \"d25a831313a93cff43d0c7608153de88742590fd71300e3687c98410bf9f678e\""
Feb 9 09:58:20.001427 env[1209]: time="2024-02-09T09:58:20.001383716Z" level=info msg="StartContainer for \"d25a831313a93cff43d0c7608153de88742590fd71300e3687c98410bf9f678e\" returns successfully"
Feb 9 09:58:20.024574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d25a831313a93cff43d0c7608153de88742590fd71300e3687c98410bf9f678e-rootfs.mount: Deactivated successfully.
Feb 9 09:58:20.032269 env[1209]: time="2024-02-09T09:58:20.032222646Z" level=info msg="shim disconnected" id=d25a831313a93cff43d0c7608153de88742590fd71300e3687c98410bf9f678e
Feb 9 09:58:20.032269 env[1209]: time="2024-02-09T09:58:20.032270246Z" level=warning msg="cleaning up after shim disconnected" id=d25a831313a93cff43d0c7608153de88742590fd71300e3687c98410bf9f678e namespace=k8s.io
Feb 9 09:58:20.032450 env[1209]: time="2024-02-09T09:58:20.032282326Z" level=info msg="cleaning up dead shim"
Feb 9 09:58:20.039080 env[1209]: time="2024-02-09T09:58:20.039040426Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:58:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4327 runtime=io.containerd.runc.v2\n"
Feb 9 09:58:20.569233 kubelet[2125]: E0209 09:58:20.569203 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:58:20.571280 env[1209]: time="2024-02-09T09:58:20.571235259Z" level=info msg="CreateContainer within sandbox \"d3efed6ee1ac27cf1c7697a2beaf4e6e7b2260d44aa4e0cb3a4c432c235351c5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 09:58:20.580903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1559309553.mount: Deactivated successfully.
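Each init step in this pod follows the same containerd pattern visible above for mount-cgroup: CreateContainer, StartContainer, the container runs to completion, the shim disconnects, and its rootfs mount is torn down; the "shim disconnected" warnings are therefore expected, not failures. When tracing such a sequence it can help to pull the key=value fields out of the env[1209] entries. A rough sketch follows; it is regex-based rather than a real logfmt parser, the names are ours, and it does not handle escaped quotes inside msg values:

    package main

    import (
        "fmt"
        "regexp"
    )

    // field matches key="quoted value" or key=bare-value pairs.
    var field = regexp.MustCompile(`(\w+)=("([^"]*)"|\S+)`)

    // parse extracts the fields of one containerd log entry into a map.
    func parse(line string) map[string]string {
        out := map[string]string{}
        for _, m := range field.FindAllStringSubmatch(line, -1) {
            if m[3] != "" {
                out[m[1]] = m[3] // unquoted inner value
            } else {
                out[m[1]] = m[2] // bare value
            }
        }
        return out
    }

    func main() {
        line := `time="2024-02-09T09:58:20.032222646Z" level=info msg="shim disconnected" id=d25a831313a93cff43d0c7608153de88742590fd71300e3687c98410bf9f678e`
        f := parse(line)
        fmt.Println(f["level"], "|", f["msg"], "|", f["id"])
        // -> info | shim disconnected | d25a8313...
    }

Grouping entries by the id field is enough to pair each StartContainer with its later shim-disconnected cleanup.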
Feb 9 09:58:20.586279 env[1209]: time="2024-02-09T09:58:20.586232303Z" level=info msg="CreateContainer within sandbox \"d3efed6ee1ac27cf1c7697a2beaf4e6e7b2260d44aa4e0cb3a4c432c235351c5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fbacb72431032dad387191c6ed1a071403cdcd92f6fd76d630f811e298c8f867\""
Feb 9 09:58:20.587054 env[1209]: time="2024-02-09T09:58:20.586844785Z" level=info msg="StartContainer for \"fbacb72431032dad387191c6ed1a071403cdcd92f6fd76d630f811e298c8f867\""
Feb 9 09:58:20.634492 env[1209]: time="2024-02-09T09:58:20.634447164Z" level=info msg="StartContainer for \"fbacb72431032dad387191c6ed1a071403cdcd92f6fd76d630f811e298c8f867\" returns successfully"
Feb 9 09:58:20.657580 env[1209]: time="2024-02-09T09:58:20.657532791Z" level=info msg="shim disconnected" id=fbacb72431032dad387191c6ed1a071403cdcd92f6fd76d630f811e298c8f867
Feb 9 09:58:20.657809 env[1209]: time="2024-02-09T09:58:20.657789472Z" level=warning msg="cleaning up after shim disconnected" id=fbacb72431032dad387191c6ed1a071403cdcd92f6fd76d630f811e298c8f867 namespace=k8s.io
Feb 9 09:58:20.657886 env[1209]: time="2024-02-09T09:58:20.657871432Z" level=info msg="cleaning up dead shim"
Feb 9 09:58:20.665475 env[1209]: time="2024-02-09T09:58:20.665440254Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:58:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4389 runtime=io.containerd.runc.v2\n"
Feb 9 09:58:21.225186 kubelet[2125]: I0209 09:58:21.225152 2125 setters.go:548] "Node became not ready" node="localhost" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 09:58:21.225056033 +0000 UTC m=+101.991337753 LastTransitionTime:2024-02-09 09:58:21.225056033 +0000 UTC m=+101.991337753 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 9 09:58:21.373202 kubelet[2125]: I0209 09:58:21.373163 2125 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=7fe61996-f77d-4a14-9f33-f18bd2641718 path="/var/lib/kubelet/pods/7fe61996-f77d-4a14-9f33-f18bd2641718/volumes"
Feb 9 09:58:21.571684 kubelet[2125]: E0209 09:58:21.571634 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:58:21.574818 env[1209]: time="2024-02-09T09:58:21.574674910Z" level=info msg="CreateContainer within sandbox \"d3efed6ee1ac27cf1c7697a2beaf4e6e7b2260d44aa4e0cb3a4c432c235351c5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 09:58:21.587906 env[1209]: time="2024-02-09T09:58:21.587860548Z" level=info msg="CreateContainer within sandbox \"d3efed6ee1ac27cf1c7697a2beaf4e6e7b2260d44aa4e0cb3a4c432c235351c5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cfdca4506806e1b7b806217999439cf295d603fc34b69af7350e6792c3a07839\""
Feb 9 09:58:21.588481 env[1209]: time="2024-02-09T09:58:21.588453510Z" level=info msg="StartContainer for \"cfdca4506806e1b7b806217999439cf295d603fc34b69af7350e6792c3a07839\""
Feb 9 09:58:21.639353 env[1209]: time="2024-02-09T09:58:21.639308695Z" level=info msg="StartContainer for \"cfdca4506806e1b7b806217999439cf295d603fc34b69af7350e6792c3a07839\" returns successfully"
Feb 9 09:58:21.658669 env[1209]: time="2024-02-09T09:58:21.658625030Z" level=info msg="shim disconnected" id=cfdca4506806e1b7b806217999439cf295d603fc34b69af7350e6792c3a07839
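The mount-bpf-fs step that just ran is the Cilium init container whose job is to ensure a BPF filesystem is mounted at /sys/fs/bpf so the agent can pin its maps there. In Go the core operation is a single mount syscall; the sketch below is only an illustration of that operation (Linux-only, requires CAP_SYS_ADMIN), not Cilium's actual implementation, which also checks whether the filesystem is already mounted:

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        // Equivalent of: mount -t bpf bpffs /sys/fs/bpf
        if err := syscall.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
            fmt.Println("mount failed (needs root and an existing /sys/fs/bpf):", err)
        }
    }

This is also why the bpf-maps hostPath volume was attached earlier: the maps pinned under /sys/fs/bpf survive agent restarts because they live on the host, not in the container.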
Feb 9 09:58:21.658888 env[1209]: time="2024-02-09T09:58:21.658868590Z" level=warning msg="cleaning up after shim disconnected" id=cfdca4506806e1b7b806217999439cf295d603fc34b69af7350e6792c3a07839 namespace=k8s.io
Feb 9 09:58:21.658948 env[1209]: time="2024-02-09T09:58:21.658935351Z" level=info msg="cleaning up dead shim"
Feb 9 09:58:21.665948 env[1209]: time="2024-02-09T09:58:21.665907051Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:58:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4446 runtime=io.containerd.runc.v2\n"
Feb 9 09:58:21.984516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfdca4506806e1b7b806217999439cf295d603fc34b69af7350e6792c3a07839-rootfs.mount: Deactivated successfully.
Feb 9 09:58:22.575308 kubelet[2125]: E0209 09:58:22.575282 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:58:22.578338 env[1209]: time="2024-02-09T09:58:22.578293896Z" level=info msg="CreateContainer within sandbox \"d3efed6ee1ac27cf1c7697a2beaf4e6e7b2260d44aa4e0cb3a4c432c235351c5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 09:58:22.587514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3502848383.mount: Deactivated successfully.
Feb 9 09:58:22.593891 env[1209]: time="2024-02-09T09:58:22.593834059Z" level=info msg="CreateContainer within sandbox \"d3efed6ee1ac27cf1c7697a2beaf4e6e7b2260d44aa4e0cb3a4c432c235351c5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"36e9f7fcc7dbc7e8f2d2348bb754c34e89fa547ec0e5ad1ed5e7665c54ff6a8f\""
Feb 9 09:58:22.594552 env[1209]: time="2024-02-09T09:58:22.594523221Z" level=info msg="StartContainer for \"36e9f7fcc7dbc7e8f2d2348bb754c34e89fa547ec0e5ad1ed5e7665c54ff6a8f\""
Feb 9 09:58:22.637410 env[1209]: time="2024-02-09T09:58:22.637352541Z" level=info msg="StartContainer for \"36e9f7fcc7dbc7e8f2d2348bb754c34e89fa547ec0e5ad1ed5e7665c54ff6a8f\" returns successfully"
Feb 9 09:58:22.653397 env[1209]: time="2024-02-09T09:58:22.653352545Z" level=info msg="shim disconnected" id=36e9f7fcc7dbc7e8f2d2348bb754c34e89fa547ec0e5ad1ed5e7665c54ff6a8f
Feb 9 09:58:22.653635 env[1209]: time="2024-02-09T09:58:22.653615066Z" level=warning msg="cleaning up after shim disconnected" id=36e9f7fcc7dbc7e8f2d2348bb754c34e89fa547ec0e5ad1ed5e7665c54ff6a8f namespace=k8s.io
Feb 9 09:58:22.653730 env[1209]: time="2024-02-09T09:58:22.653717026Z" level=info msg="cleaning up dead shim"
Feb 9 09:58:22.661280 env[1209]: time="2024-02-09T09:58:22.661240607Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:58:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4501 runtime=io.containerd.runc.v2\n"
Feb 9 09:58:22.984559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36e9f7fcc7dbc7e8f2d2348bb754c34e89fa547ec0e5ad1ed5e7665c54ff6a8f-rootfs.mount: Deactivated successfully.
Feb 9 09:58:23.578729 kubelet[2125]: E0209 09:58:23.578691 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:58:23.582405 env[1209]: time="2024-02-09T09:58:23.582224018Z" level=info msg="CreateContainer within sandbox \"d3efed6ee1ac27cf1c7697a2beaf4e6e7b2260d44aa4e0cb3a4c432c235351c5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 09:58:23.595929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1107522658.mount: Deactivated successfully.
Feb 9 09:58:23.599345 env[1209]: time="2024-02-09T09:58:23.598959144Z" level=info msg="CreateContainer within sandbox \"d3efed6ee1ac27cf1c7697a2beaf4e6e7b2260d44aa4e0cb3a4c432c235351c5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"533c7b7aa6eeac3a7149a472a6cfdc3b909e553fd8bad0f450e545428a6236fa\""
Feb 9 09:58:23.600150 env[1209]: time="2024-02-09T09:58:23.600105627Z" level=info msg="StartContainer for \"533c7b7aa6eeac3a7149a472a6cfdc3b909e553fd8bad0f450e545428a6236fa\""
Feb 9 09:58:23.652754 env[1209]: time="2024-02-09T09:58:23.652701210Z" level=info msg="StartContainer for \"533c7b7aa6eeac3a7149a472a6cfdc3b909e553fd8bad0f450e545428a6236fa\" returns successfully"
Feb 9 09:58:23.881020 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Feb 9 09:58:23.984645 systemd[1]: run-containerd-runc-k8s.io-533c7b7aa6eeac3a7149a472a6cfdc3b909e553fd8bad0f450e545428a6236fa-runc.b9vDE7.mount: Deactivated successfully.
Feb 9 09:58:24.583872 kubelet[2125]: E0209 09:58:24.583831 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:58:25.585338 kubelet[2125]: E0209 09:58:25.585292 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:58:26.555740 systemd-networkd[1095]: lxc_health: Link UP
Feb 9 09:58:26.564690 systemd-networkd[1095]: lxc_health: Gained carrier
Feb 9 09:58:26.565001 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 09:58:26.587178 kubelet[2125]: E0209 09:58:26.587150 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:58:27.896919 kubelet[2125]: E0209 09:58:27.896878 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:58:27.910905 kubelet[2125]: I0209 09:58:27.910857 2125 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-5prtc" podStartSLOduration=8.910823568 pod.CreationTimestamp="2024-02-09 09:58:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:58:24.597401509 +0000 UTC m=+105.363683229" watchObservedRunningTime="2024-02-09 09:58:27.910823568 +0000 UTC m=+108.677105328"
Feb 9 09:58:28.461114 systemd-networkd[1095]: lxc_health: Gained IPv6LL
Feb 9 09:58:28.591531 kubelet[2125]: E0209 09:58:28.591469 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
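The pod_startup_latency_tracker entry above can be sanity-checked by hand: podStartSLOduration=8.910823568 is exactly watchObservedRunningTime (09:58:27.910823568) minus pod.CreationTimestamp (09:58:19), and the two pull timestamps sit at Go's zero time, presumably because the images were already present on the node so no pull occurred. A quick Go check of that arithmetic, using only values taken from the log line:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339, "2024-02-09T09:58:19Z")
        watched, _ := time.Parse(time.RFC3339Nano, "2024-02-09T09:58:27.910823568Z")
        fmt.Println(watched.Sub(created).Seconds()) // 8.910823568
    }

The lxc_health link coming up just before this is consistent with the cilium-agent container running: the agent creates that interface for endpoint health checking once it has initialized.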
Feb 9 09:58:29.371036 kubelet[2125]: E0209 09:58:29.370966 2125 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:58:32.601215 systemd[1]: run-containerd-runc-k8s.io-533c7b7aa6eeac3a7149a472a6cfdc3b909e553fd8bad0f450e545428a6236fa-runc.ZK6yNg.mount: Deactivated successfully.
Feb 9 09:58:32.656439 sshd[4078]: pam_unix(sshd:session): session closed for user core
Feb 9 09:58:32.658878 systemd[1]: sshd@25-10.0.0.79:22-10.0.0.1:49576.service: Deactivated successfully.
Feb 9 09:58:32.659698 systemd[1]: session-26.scope: Deactivated successfully.
Feb 9 09:58:32.660358 systemd-logind[1197]: Session 26 logged out. Waiting for processes to exit.
Feb 9 09:58:32.661052 systemd-logind[1197]: Removed session 26.