Feb 9 09:42:18.768041 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 09:42:18.768060 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024
Feb 9 09:42:18.768068 kernel: efi: EFI v2.70 by EDK II
Feb 9 09:42:18.768073 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Feb 9 09:42:18.768078 kernel: random: crng init done
Feb 9 09:42:18.768083 kernel: ACPI: Early table checksum verification disabled
Feb 9 09:42:18.768090 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Feb 9 09:42:18.768096 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 9 09:42:18.768102 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:42:18.768108 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:42:18.768113 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:42:18.768118 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:42:18.768123 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:42:18.768129 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:42:18.768136 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:42:18.768142 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:42:18.768149 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:42:18.768155 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 9 09:42:18.768160 kernel: NUMA: Failed to initialise from firmware
Feb 9 09:42:18.768166 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 09:42:18.768172 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Feb 9 09:42:18.768178 kernel: Zone ranges:
Feb 9 09:42:18.768183 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 09:42:18.768190 kernel: DMA32 empty
Feb 9 09:42:18.768195 kernel: Normal empty
Feb 9 09:42:18.768201 kernel: Movable zone start for each node
Feb 9 09:42:18.768207 kernel: Early memory node ranges
Feb 9 09:42:18.768212 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Feb 9 09:42:18.768218 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Feb 9 09:42:18.768224 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Feb 9 09:42:18.768230 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Feb 9 09:42:18.768236 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Feb 9 09:42:18.768241 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Feb 9 09:42:18.768247 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Feb 9 09:42:18.768253 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 09:42:18.768259 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 9 09:42:18.768266 kernel: psci: probing for conduit method from ACPI.
Feb 9 09:42:18.768271 kernel: psci: PSCIv1.1 detected in firmware.
Feb 9 09:42:18.768277 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 09:42:18.768301 kernel: psci: Trusted OS migration not required
Feb 9 09:42:18.768309 kernel: psci: SMC Calling Convention v1.1
Feb 9 09:42:18.768316 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 9 09:42:18.768323 kernel: ACPI: SRAT not present
Feb 9 09:42:18.768330 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 09:42:18.768336 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 09:42:18.768342 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 9 09:42:18.768348 kernel: Detected PIPT I-cache on CPU0
Feb 9 09:42:18.768354 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 09:42:18.768360 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 09:42:18.768367 kernel: CPU features: detected: Spectre-v4
Feb 9 09:42:18.768372 kernel: CPU features: detected: Spectre-BHB
Feb 9 09:42:18.768379 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 09:42:18.768386 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 09:42:18.768392 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 09:42:18.768397 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 9 09:42:18.768403 kernel: Policy zone: DMA
Feb 9 09:42:18.768410 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:42:18.768417 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 09:42:18.768423 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 09:42:18.768429 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 09:42:18.768435 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 09:42:18.768441 kernel: Memory: 2459152K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113136K reserved, 0K cma-reserved)
Feb 9 09:42:18.768448 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 9 09:42:18.768454 kernel: trace event string verifier disabled
Feb 9 09:42:18.768460 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 09:42:18.768466 kernel: rcu: RCU event tracing is enabled.
Feb 9 09:42:18.768472 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 9 09:42:18.768479 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 09:42:18.768484 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 09:42:18.768490 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 09:42:18.768496 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 9 09:42:18.768502 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 09:42:18.768508 kernel: GICv3: 256 SPIs implemented
Feb 9 09:42:18.768515 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 09:42:18.768521 kernel: GICv3: Distributor has no Range Selector support
Feb 9 09:42:18.768527 kernel: Root IRQ handler: gic_handle_irq
Feb 9 09:42:18.768533 kernel: GICv3: 16 PPIs implemented
Feb 9 09:42:18.768539 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 9 09:42:18.768544 kernel: ACPI: SRAT not present
Feb 9 09:42:18.768550 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 9 09:42:18.768556 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 09:42:18.768563 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 09:42:18.768569 kernel: GICv3: using LPI property table @0x00000000400d0000
Feb 9 09:42:18.768575 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Feb 9 09:42:18.768581 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:42:18.768588 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 09:42:18.768594 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 09:42:18.768600 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 09:42:18.768606 kernel: arm-pv: using stolen time PV
Feb 9 09:42:18.768613 kernel: Console: colour dummy device 80x25
Feb 9 09:42:18.768619 kernel: ACPI: Core revision 20210730
Feb 9 09:42:18.768625 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 09:42:18.768631 kernel: pid_max: default: 32768 minimum: 301
Feb 9 09:42:18.768637 kernel: LSM: Security Framework initializing
Feb 9 09:42:18.768643 kernel: SELinux: Initializing.
Feb 9 09:42:18.768650 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:42:18.768657 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:42:18.768663 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 09:42:18.768669 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 9 09:42:18.768675 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 9 09:42:18.768681 kernel: Remapping and enabling EFI services.
Feb 9 09:42:18.768687 kernel: smp: Bringing up secondary CPUs ...
Feb 9 09:42:18.768693 kernel: Detected PIPT I-cache on CPU1
Feb 9 09:42:18.768699 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 9 09:42:18.768706 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Feb 9 09:42:18.768713 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:42:18.768719 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 09:42:18.768725 kernel: Detected PIPT I-cache on CPU2
Feb 9 09:42:18.768731 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 9 09:42:18.768737 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Feb 9 09:42:18.768744 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:42:18.768750 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 9 09:42:18.768756 kernel: Detected PIPT I-cache on CPU3
Feb 9 09:42:18.768762 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 9 09:42:18.768769 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Feb 9 09:42:18.768775 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:42:18.768786 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 9 09:42:18.768793 kernel: smp: Brought up 1 node, 4 CPUs
Feb 9 09:42:18.768803 kernel: SMP: Total of 4 processors activated.
Feb 9 09:42:18.768811 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 09:42:18.768818 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 09:42:18.768824 kernel: CPU features: detected: Common not Private translations
Feb 9 09:42:18.768831 kernel: CPU features: detected: CRC32 instructions
Feb 9 09:42:18.768837 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 09:42:18.768844 kernel: CPU features: detected: LSE atomic instructions
Feb 9 09:42:18.768850 kernel: CPU features: detected: Privileged Access Never
Feb 9 09:42:18.768858 kernel: CPU features: detected: RAS Extension Support
Feb 9 09:42:18.768865 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 9 09:42:18.768871 kernel: CPU: All CPU(s) started at EL1
Feb 9 09:42:18.768877 kernel: alternatives: patching kernel code
Feb 9 09:42:18.768885 kernel: devtmpfs: initialized
Feb 9 09:42:18.768891 kernel: KASLR enabled
Feb 9 09:42:18.768898 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 09:42:18.768904 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 9 09:42:18.768911 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 09:42:18.768917 kernel: SMBIOS 3.0.0 present.
Feb 9 09:42:18.768924 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Feb 9 09:42:18.768930 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 09:42:18.768937 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 09:42:18.768945 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 09:42:18.768953 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 09:42:18.768960 kernel: audit: initializing netlink subsys (disabled)
Feb 9 09:42:18.768966 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1
Feb 9 09:42:18.768973 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 09:42:18.768979 kernel: cpuidle: using governor menu
Feb 9 09:42:18.768986 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 09:42:18.768993 kernel: ASID allocator initialised with 32768 entries
Feb 9 09:42:18.769000 kernel: ACPI: bus type PCI registered
Feb 9 09:42:18.769006 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 09:42:18.769014 kernel: Serial: AMBA PL011 UART driver
Feb 9 09:42:18.769020 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 09:42:18.769027 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 09:42:18.769033 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 09:42:18.769040 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 09:42:18.769046 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 09:42:18.769053 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 09:42:18.769059 kernel: ACPI: Added _OSI(Module Device)
Feb 9 09:42:18.769065 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 09:42:18.769073 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 09:42:18.769079 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 09:42:18.769086 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 09:42:18.769093 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 09:42:18.769099 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 09:42:18.769106 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 09:42:18.769112 kernel: ACPI: Interpreter enabled
Feb 9 09:42:18.769121 kernel: ACPI: Using GIC for interrupt routing
Feb 9 09:42:18.769127 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 09:42:18.769135 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 09:42:18.769141 kernel: printk: console [ttyAMA0] enabled
Feb 9 09:42:18.769148 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 09:42:18.769273 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 09:42:18.769363 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 09:42:18.769426 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 09:42:18.769485 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 9 09:42:18.769547 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 9 09:42:18.769556 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 9 09:42:18.769563 kernel: PCI host bridge to bus 0000:00
Feb 9 09:42:18.769629 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 9 09:42:18.769684 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 09:42:18.769739 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 9 09:42:18.769802 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 09:42:18.770519 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 9 09:42:18.770604 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 09:42:18.770669 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 9 09:42:18.770730 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 9 09:42:18.770804 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 09:42:18.770871 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 09:42:18.770931 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 9 09:42:18.770997 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 9 09:42:18.771054 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 9 09:42:18.771108 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 09:42:18.771163 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 9 09:42:18.771171 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 09:42:18.771178 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 09:42:18.771185 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 09:42:18.771193 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 09:42:18.771206 kernel: iommu: Default domain type: Translated
Feb 9 09:42:18.771218 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 09:42:18.771224 kernel: vgaarb: loaded
Feb 9 09:42:18.771231 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 09:42:18.771239 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 9 09:42:18.771245 kernel: PTP clock support registered
Feb 9 09:42:18.771252 kernel: Registered efivars operations
Feb 9 09:42:18.771259 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 09:42:18.771265 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 09:42:18.771273 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 09:42:18.771293 kernel: pnp: PnP ACPI init
Feb 9 09:42:18.771378 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 9 09:42:18.771388 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 09:42:18.771395 kernel: NET: Registered PF_INET protocol family
Feb 9 09:42:18.771402 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 09:42:18.771409 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 09:42:18.771415 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 09:42:18.771424 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 09:42:18.771430 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 09:42:18.771437 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 09:42:18.771444 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:42:18.771451 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:42:18.771457 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 09:42:18.771464 kernel: PCI: CLS 0 bytes, default 64
Feb 9 09:42:18.771470 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 9 09:42:18.771479 kernel: kvm [1]: HYP mode not available
Feb 9 09:42:18.771486 kernel: Initialise system trusted keyrings
Feb 9 09:42:18.771493 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 09:42:18.771500 kernel: Key type asymmetric registered
Feb 9 09:42:18.771506 kernel: Asymmetric key parser 'x509' registered
Feb 9 09:42:18.771513 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 09:42:18.771519 kernel: io scheduler mq-deadline registered
Feb 9 09:42:18.771526 kernel: io scheduler kyber registered
Feb 9 09:42:18.771532 kernel: io scheduler bfq registered
Feb 9 09:42:18.771539 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 09:42:18.771547 kernel: ACPI: button: Power Button [PWRB]
Feb 9 09:42:18.771553 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 9 09:42:18.772176 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 9 09:42:18.772198 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 09:42:18.772205 kernel: thunder_xcv, ver 1.0
Feb 9 09:42:18.772212 kernel: thunder_bgx, ver 1.0
Feb 9 09:42:18.772219 kernel: nicpf, ver 1.0
Feb 9 09:42:18.772228 kernel: nicvf, ver 1.0
Feb 9 09:42:18.772335 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 09:42:18.772411 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T09:42:18 UTC (1707471738)
Feb 9 09:42:18.772421 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 09:42:18.772427 kernel: NET: Registered PF_INET6 protocol family
Feb 9 09:42:18.772434 kernel: Segment Routing with IPv6
Feb 9 09:42:18.772443 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 09:42:18.772450 kernel: NET: Registered PF_PACKET protocol family
Feb 9 09:42:18.772457 kernel: Key type dns_resolver registered
Feb 9 09:42:18.772464 kernel: registered taskstats version 1
Feb 9 09:42:18.772472 kernel: Loading compiled-in X.509 certificates
Feb 9 09:42:18.772480 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d'
Feb 9 09:42:18.772486 kernel: Key type .fscrypt registered
Feb 9 09:42:18.772493 kernel: Key type fscrypt-provisioning registered
Feb 9 09:42:18.772500 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 09:42:18.772506 kernel: ima: Allocated hash algorithm: sha1
Feb 9 09:42:18.772515 kernel: ima: No architecture policies found
Feb 9 09:42:18.772522 kernel: Freeing unused kernel memory: 34688K
Feb 9 09:42:18.772528 kernel: Run /init as init process
Feb 9 09:42:18.772536 kernel: with arguments:
Feb 9 09:42:18.772543 kernel: /init
Feb 9 09:42:18.772549 kernel: with environment:
Feb 9 09:42:18.772555 kernel: HOME=/
Feb 9 09:42:18.772562 kernel: TERM=linux
Feb 9 09:42:18.772570 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 09:42:18.772578 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 09:42:18.772587 systemd[1]: Detected virtualization kvm.
Feb 9 09:42:18.772596 systemd[1]: Detected architecture arm64.
Feb 9 09:42:18.772603 systemd[1]: Running in initrd.
Feb 9 09:42:18.772611 systemd[1]: No hostname configured, using default hostname.
Feb 9 09:42:18.772628 systemd[1]: Hostname set to <localhost>.
Feb 9 09:42:18.772636 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 09:42:18.772643 systemd[1]: Queued start job for default target initrd.target.
Feb 9 09:42:18.772650 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 09:42:18.772657 systemd[1]: Reached target cryptsetup.target.
Feb 9 09:42:18.772667 systemd[1]: Reached target paths.target.
Feb 9 09:42:18.772674 systemd[1]: Reached target slices.target.
Feb 9 09:42:18.772680 systemd[1]: Reached target swap.target.
Feb 9 09:42:18.772687 systemd[1]: Reached target timers.target.
Feb 9 09:42:18.772694 systemd[1]: Listening on iscsid.socket.
Feb 9 09:42:18.772701 systemd[1]: Listening on iscsiuio.socket.
Feb 9 09:42:18.772709 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 09:42:18.772717 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 09:42:18.772724 systemd[1]: Listening on systemd-journald.socket.
Feb 9 09:42:18.772731 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 09:42:18.772738 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 09:42:18.772746 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 09:42:18.772753 systemd[1]: Reached target sockets.target.
Feb 9 09:42:18.772764 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 09:42:18.772771 systemd[1]: Finished network-cleanup.service.
Feb 9 09:42:18.772785 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 09:42:18.772795 systemd[1]: Starting systemd-journald.service...
Feb 9 09:42:18.772803 systemd[1]: Starting systemd-modules-load.service...
Feb 9 09:42:18.772810 systemd[1]: Starting systemd-resolved.service...
Feb 9 09:42:18.772817 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 09:42:18.772825 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 09:42:18.772832 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 09:42:18.772839 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 09:42:18.772846 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 09:42:18.772853 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 09:42:18.772861 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 09:42:18.772869 kernel: audit: type=1130 audit(1707471738.769:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:18.772876 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 09:42:18.772887 systemd-journald[290]: Journal started
Feb 9 09:42:18.772933 systemd-journald[290]: Runtime Journal (/run/log/journal/04309ce876074e0d9ca2c8a0e44a27ad) is 6.0M, max 48.7M, 42.6M free.
Feb 9 09:42:18.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:18.755039 systemd-modules-load[291]: Inserted module 'overlay'
Feb 9 09:42:18.778259 kernel: Bridge firewalling registered
Feb 9 09:42:18.778304 systemd[1]: Started systemd-journald.service.
Feb 9 09:42:18.778317 kernel: audit: type=1130 audit(1707471738.777:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:18.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:18.776454 systemd-modules-load[291]: Inserted module 'br_netfilter'
Feb 9 09:42:18.781119 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 09:42:18.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:18.782787 systemd[1]: Starting dracut-cmdline.service...
Feb 9 09:42:18.784975 kernel: audit: type=1130 audit(1707471738.781:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:18.789298 kernel: SCSI subsystem initialized
Feb 9 09:42:18.792791 systemd-resolved[292]: Positive Trust Anchors:
Feb 9 09:42:18.792808 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 09:42:18.792836 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 09:42:18.797842 systemd-resolved[292]: Defaulting to hostname 'linux'.
Feb 9 09:42:18.805381 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 09:42:18.805402 kernel: device-mapper: uevent: version 1.0.3
Feb 9 09:42:18.805412 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 09:42:18.805430 kernel: audit: type=1130 audit(1707471738.801:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:18.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:18.798687 systemd[1]: Started systemd-resolved.service.
Feb 9 09:42:18.801813 systemd[1]: Reached target nss-lookup.target.
Feb 9 09:42:18.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:18.805384 systemd-modules-load[291]: Inserted module 'dm_multipath'
Feb 9 09:42:18.810427 kernel: audit: type=1130 audit(1707471738.806:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:18.806081 systemd[1]: Finished systemd-modules-load.service.
Feb 9 09:42:18.807710 systemd[1]: Starting systemd-sysctl.service...
Feb 9 09:42:18.811670 dracut-cmdline[308]: dracut-dracut-053
Feb 9 09:42:18.813578 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:42:18.818229 systemd[1]: Finished systemd-sysctl.service.
Feb 9 09:42:18.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:18.821303 kernel: audit: type=1130 audit(1707471738.818:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:18.870305 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 09:42:18.878504 kernel: iscsi: registered transport (tcp)
Feb 9 09:42:18.893372 kernel: iscsi: registered transport (qla4xxx)
Feb 9 09:42:18.893400 kernel: QLogic iSCSI HBA Driver
Feb 9 09:42:18.927097 systemd[1]: Finished dracut-cmdline.service.
Feb 9 09:42:18.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:18.928643 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 09:42:18.931175 kernel: audit: type=1130 audit(1707471738.927:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:18.973311 kernel: raid6: neonx8 gen() 13025 MB/s
Feb 9 09:42:18.990306 kernel: raid6: neonx8 xor() 10793 MB/s
Feb 9 09:42:19.007302 kernel: raid6: neonx4 gen() 12738 MB/s
Feb 9 09:42:19.024297 kernel: raid6: neonx4 xor() 10489 MB/s
Feb 9 09:42:19.041290 kernel: raid6: neonx2 gen() 12947 MB/s
Feb 9 09:42:19.058303 kernel: raid6: neonx2 xor() 9573 MB/s
Feb 9 09:42:19.075297 kernel: raid6: neonx1 gen() 10480 MB/s
Feb 9 09:42:19.092298 kernel: raid6: neonx1 xor() 8787 MB/s
Feb 9 09:42:19.109300 kernel: raid6: int64x8 gen() 6295 MB/s
Feb 9 09:42:19.126308 kernel: raid6: int64x8 xor() 3550 MB/s
Feb 9 09:42:19.143301 kernel: raid6: int64x4 gen() 7249 MB/s
Feb 9 09:42:19.160297 kernel: raid6: int64x4 xor() 3855 MB/s
Feb 9 09:42:19.177299 kernel: raid6: int64x2 gen() 6155 MB/s
Feb 9 09:42:19.194323 kernel: raid6: int64x2 xor() 3314 MB/s
Feb 9 09:42:19.211323 kernel: raid6: int64x1 gen() 5022 MB/s
Feb 9 09:42:19.228651 kernel: raid6: int64x1 xor() 2616 MB/s
Feb 9 09:42:19.228708 kernel: raid6: using algorithm neonx8 gen() 13025 MB/s
Feb 9 09:42:19.228719 kernel: raid6: .... xor() 10793 MB/s, rmw enabled
Feb 9 09:42:19.228728 kernel: raid6: using neon recovery algorithm
Feb 9 09:42:19.242301 kernel: xor: measuring software checksum speed
Feb 9 09:42:19.243298 kernel: 8regs : 17286 MB/sec
Feb 9 09:42:19.244618 kernel: 32regs : 20749 MB/sec
Feb 9 09:42:19.244635 kernel: arm64_neon : 27939 MB/sec
Feb 9 09:42:19.244644 kernel: xor: using function: arm64_neon (27939 MB/sec)
Feb 9 09:42:19.308326 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 09:42:19.321703 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 09:42:19.325471 kernel: audit: type=1130 audit(1707471739.322:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:19.325494 kernel: audit: type=1334 audit(1707471739.324:10): prog-id=7 op=LOAD
Feb 9 09:42:19.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:19.324000 audit: BPF prog-id=7 op=LOAD
Feb 9 09:42:19.325000 audit: BPF prog-id=8 op=LOAD
Feb 9 09:42:19.325791 systemd[1]: Starting systemd-udevd.service...
Feb 9 09:42:19.342014 systemd-udevd[492]: Using default interface naming scheme 'v252'.
Feb 9 09:42:19.345373 systemd[1]: Started systemd-udevd.service.
Feb 9 09:42:19.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:19.347167 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 09:42:19.359672 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation
Feb 9 09:42:19.387303 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 09:42:19.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:19.388749 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 09:42:19.424657 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 09:42:19.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:19.453400 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 9 09:42:19.456563 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 09:42:19.456593 kernel: GPT:9289727 != 19775487
Feb 9 09:42:19.456603 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 09:42:19.456612 kernel: GPT:9289727 != 19775487
Feb 9 09:42:19.457341 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 09:42:19.457363 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 09:42:19.477897 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 09:42:19.480437 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (551)
Feb 9 09:42:19.481697 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 09:42:19.482602 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 09:42:19.486606 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 09:42:19.494293 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 09:42:19.495968 systemd[1]: Starting disk-uuid.service...
Feb 9 09:42:19.502006 disk-uuid[565]: Primary Header is updated.
Feb 9 09:42:19.502006 disk-uuid[565]: Secondary Entries is updated.
Feb 9 09:42:19.502006 disk-uuid[565]: Secondary Header is updated.
Feb 9 09:42:19.505294 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 09:42:20.515763 disk-uuid[566]: The operation has completed successfully.
Feb 9 09:42:20.516860 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 09:42:20.542880 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 09:42:20.542973 systemd[1]: Finished disk-uuid.service.
Feb 9 09:42:20.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:20.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:20.544474 systemd[1]: Starting verity-setup.service...
Feb 9 09:42:20.559862 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 09:42:20.582053 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 09:42:20.583540 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 09:42:20.584266 systemd[1]: Finished verity-setup.service.
Feb 9 09:42:20.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:20.630296 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 09:42:20.630436 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 09:42:20.631185 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 09:42:20.631898 systemd[1]: Starting ignition-setup.service...
Feb 9 09:42:20.634453 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 09:42:20.640627 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:42:20.640666 kernel: BTRFS info (device vda6): using free space tree
Feb 9 09:42:20.640680 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 09:42:20.649841 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 09:42:20.656070 systemd[1]: Finished ignition-setup.service.
Feb 9 09:42:20.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:20.657857 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 09:42:20.714407 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 09:42:20.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:20.715000 audit: BPF prog-id=9 op=LOAD
Feb 9 09:42:20.716220 systemd[1]: Starting systemd-networkd.service...
Feb 9 09:42:20.737800 ignition[656]: Ignition 2.14.0
Feb 9 09:42:20.737810 ignition[656]: Stage: fetch-offline
Feb 9 09:42:20.737859 ignition[656]: no configs at "/usr/lib/ignition/base.d"
Feb 9 09:42:20.737871 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 09:42:20.738001 ignition[656]: parsed url from cmdline: ""
Feb 9 09:42:20.738005 ignition[656]: no config URL provided
Feb 9 09:42:20.738009 ignition[656]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 09:42:20.738017 ignition[656]: no config at "/usr/lib/ignition/user.ign"
Feb 9 09:42:20.741251 systemd-networkd[742]: lo: Link UP
Feb 9 09:42:20.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:20.738035 ignition[656]: op(1): [started] loading QEMU firmware config module
Feb 9 09:42:20.741255 systemd-networkd[742]: lo: Gained carrier
Feb 9 09:42:20.738040 ignition[656]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 9 09:42:20.741621 systemd-networkd[742]: Enumeration completed
Feb 9 09:42:20.747211 ignition[656]: op(1): [finished] loading QEMU firmware config module
Feb 9 09:42:20.741733 systemd[1]: Started systemd-networkd.service.
Feb 9 09:42:20.741817 systemd-networkd[742]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 09:42:20.742870 systemd-networkd[742]: eth0: Link UP
Feb 9 09:42:20.742873 systemd-networkd[742]: eth0: Gained carrier
Feb 9 09:42:20.744336 systemd[1]: Reached target network.target.
Feb 9 09:42:20.746353 systemd[1]: Starting iscsiuio.service...
Feb 9 09:42:20.755087 systemd[1]: Started iscsiuio.service.
Feb 9 09:42:20.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:20.756955 systemd[1]: Starting iscsid.service...
Feb 9 09:42:20.760221 iscsid[748]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:42:20.760221 iscsid[748]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 09:42:20.760221 iscsid[748]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 09:42:20.760221 iscsid[748]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 09:42:20.760221 iscsid[748]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:42:20.760221 iscsid[748]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 09:42:20.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:20.762978 systemd[1]: Started iscsid.service.
Feb 9 09:42:20.763366 systemd-networkd[742]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 9 09:42:20.767408 systemd[1]: Starting dracut-initqueue.service...
Feb 9 09:42:20.778156 systemd[1]: Finished dracut-initqueue.service.
Feb 9 09:42:20.779160 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 09:42:20.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:20.780448 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 09:42:20.781952 systemd[1]: Reached target remote-fs.target.
Feb 9 09:42:20.784025 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 09:42:20.791445 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 09:42:20.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:20.833324 ignition[656]: parsing config with SHA512: 8fa738459b5f3c7a6e125cbb0fd71868bd646cfd508182ea35531a31fb754fdc95562a99638adb779c5d2024dfc9ed691ebccfe6387c3882390fb3f15ddf011c
Feb 9 09:42:20.875101 unknown[656]: fetched base config from "system"
Feb 9 09:42:20.875112 unknown[656]: fetched user config from "qemu"
Feb 9 09:42:20.875712 ignition[656]: fetch-offline: fetch-offline passed
Feb 9 09:42:20.875767 ignition[656]: Ignition finished successfully
Feb 9 09:42:20.877338 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 09:42:20.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:20.878231 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 9 09:42:20.879011 systemd[1]: Starting ignition-kargs.service...
Feb 9 09:42:20.887312 ignition[763]: Ignition 2.14.0
Feb 9 09:42:20.887322 ignition[763]: Stage: kargs
Feb 9 09:42:20.887414 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Feb 9 09:42:20.887424 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 09:42:20.888506 ignition[763]: kargs: kargs passed
Feb 9 09:42:20.890486 systemd[1]: Finished ignition-kargs.service.
Feb 9 09:42:20.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:20.888546 ignition[763]: Ignition finished successfully
Feb 9 09:42:20.892200 systemd[1]: Starting ignition-disks.service...
Feb 9 09:42:20.898610 ignition[769]: Ignition 2.14.0
Feb 9 09:42:20.898619 ignition[769]: Stage: disks
Feb 9 09:42:20.898707 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Feb 9 09:42:20.898717 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 09:42:20.901159 systemd[1]: Finished ignition-disks.service.
Feb 9 09:42:20.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:20.899722 ignition[769]: disks: disks passed
Feb 9 09:42:20.902171 systemd[1]: Reached target initrd-root-device.target.
Feb 9 09:42:20.899766 ignition[769]: Ignition finished successfully
Feb 9 09:42:20.903064 systemd[1]: Reached target local-fs-pre.target.
Feb 9 09:42:20.903940 systemd[1]: Reached target local-fs.target.
Feb 9 09:42:20.907348 systemd[1]: Reached target sysinit.target.
Feb 9 09:42:20.907894 systemd[1]: Reached target basic.target.
Feb 9 09:42:20.909671 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 09:42:20.920030 systemd-fsck[777]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 9 09:42:20.923623 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 09:42:20.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:20.926718 systemd[1]: Mounting sysroot.mount...
Feb 9 09:42:20.932029 systemd[1]: Mounted sysroot.mount.
Feb 9 09:42:20.933222 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 09:42:20.932813 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 09:42:20.935143 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 09:42:20.936022 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 09:42:20.936061 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 09:42:20.936083 systemd[1]: Reached target ignition-diskful.target.
Feb 9 09:42:20.937763 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 09:42:20.939216 systemd[1]: Starting initrd-setup-root.service...
Feb 9 09:42:20.943534 initrd-setup-root[787]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 09:42:20.947566 initrd-setup-root[795]: cut: /sysroot/etc/group: No such file or directory
Feb 9 09:42:20.951603 initrd-setup-root[803]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 09:42:20.955351 initrd-setup-root[811]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 09:42:20.980856 systemd[1]: Finished initrd-setup-root.service.
Feb 9 09:42:20.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:20.982387 systemd[1]: Starting ignition-mount.service...
Feb 9 09:42:20.983621 systemd[1]: Starting sysroot-boot.service...
Feb 9 09:42:20.987802 bash[828]: umount: /sysroot/usr/share/oem: not mounted.
Feb 9 09:42:20.995231 ignition[830]: INFO : Ignition 2.14.0
Feb 9 09:42:20.995958 ignition[830]: INFO : Stage: mount
Feb 9 09:42:20.996582 ignition[830]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 09:42:20.997308 ignition[830]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 09:42:20.999300 ignition[830]: INFO : mount: mount passed
Feb 9 09:42:20.999931 ignition[830]: INFO : Ignition finished successfully
Feb 9 09:42:21.001336 systemd[1]: Finished ignition-mount.service.
Feb 9 09:42:21.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:21.005871 systemd[1]: Finished sysroot-boot.service.
Feb 9 09:42:21.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:21.592033 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 09:42:21.598478 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (839)
Feb 9 09:42:21.598508 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:42:21.599486 kernel: BTRFS info (device vda6): using free space tree
Feb 9 09:42:21.599531 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 09:42:21.602190 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 09:42:21.603743 systemd[1]: Starting ignition-files.service...
Feb 9 09:42:21.617987 ignition[859]: INFO : Ignition 2.14.0
Feb 9 09:42:21.617987 ignition[859]: INFO : Stage: files
Feb 9 09:42:21.619213 ignition[859]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 09:42:21.619213 ignition[859]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 09:42:21.620966 ignition[859]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 09:42:21.620966 ignition[859]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 09:42:21.620966 ignition[859]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 09:42:21.624142 ignition[859]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 09:42:21.624142 ignition[859]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 09:42:21.624142 ignition[859]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 09:42:21.624002 unknown[859]: wrote ssh authorized keys file for user: core
Feb 9 09:42:21.628877 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 09:42:21.628877 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 9 09:42:21.660457 systemd-resolved[292]: Detected conflict on linux IN A 10.0.0.12
Feb 9 09:42:21.660474 systemd-resolved[292]: Hostname conflict, changing published hostname from 'linux' to 'linux7'.
Feb 9 09:42:21.883369 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 09:42:21.924786 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 09:42:21.924786 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 9 09:42:21.927568 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1
Feb 9 09:42:22.292559 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 09:42:22.412887 ignition[859]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c
Feb 9 09:42:22.414851 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 9 09:42:22.414851 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 9 09:42:22.414851 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1
Feb 9 09:42:22.636498 systemd-networkd[742]: eth0: Gained IPv6LL
Feb 9 09:42:22.680678 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 09:42:22.976860 ignition[859]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742
Feb 9 09:42:22.979002 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 9 09:42:22.979002 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 09:42:22.979002 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 09:42:22.979002 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 09:42:22.979002 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1
Feb 9 09:42:23.081852 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 09:42:26.221927 ignition[859]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a
Feb 9 09:42:26.221927 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 09:42:26.221927 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 09:42:26.221927 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1
Feb 9 09:42:26.308792 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 9 09:42:33.128513 ignition[859]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d
Feb 9 09:42:33.130800 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 09:42:33.130800 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 09:42:33.130800 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1
Feb 9 09:42:33.153347 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 9 09:42:33.413998 ignition[859]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db
Feb 9 09:42:33.413998 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 09:42:33.417581 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 09:42:33.417581 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 9 09:42:33.687921 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 9 09:42:33.781717 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 09:42:33.783114 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 09:42:33.783114 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 09:42:33.783114 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 09:42:33.783114 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 09:42:33.783114 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 09:42:33.783114 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 09:42:33.783114 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 09:42:33.783114 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 09:42:33.783114 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 09:42:33.783114 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 09:42:33.783114 ignition[859]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
09:42:33.783114 ignition[859]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Feb 9 09:42:33.783114 ignition[859]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 09:42:33.783114 ignition[859]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 09:42:33.783114 ignition[859]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Feb 9 09:42:33.783114 ignition[859]: INFO : files: op(12): [started] processing unit "prepare-cni-plugins.service" Feb 9 09:42:33.783114 ignition[859]: INFO : files: op(12): op(13): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:42:33.783114 ignition[859]: INFO : files: op(12): op(13): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:42:33.806758 ignition[859]: INFO : files: op(12): [finished] processing unit "prepare-cni-plugins.service" Feb 9 09:42:33.806758 ignition[859]: INFO : files: op(14): [started] processing unit "prepare-critools.service" Feb 9 09:42:33.806758 ignition[859]: INFO : files: op(14): op(15): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:42:33.806758 ignition[859]: INFO : files: op(14): op(15): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:42:33.806758 ignition[859]: INFO : files: op(14): [finished] processing unit "prepare-critools.service" Feb 9 09:42:33.806758 ignition[859]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Feb 9 09:42:33.806758 ignition[859]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 09:42:33.806758 ignition[859]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 09:42:33.806758 ignition[859]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" Feb 9 09:42:33.806758 ignition[859]: INFO : files: op(18): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:42:33.806758 ignition[859]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:42:33.806758 ignition[859]: INFO : files: op(19): [started] setting preset to enabled for "prepare-critools.service" Feb 9 09:42:33.806758 ignition[859]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 09:42:33.806758 ignition[859]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service" Feb 9 09:42:33.806758 ignition[859]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 09:42:33.806758 ignition[859]: INFO : files: op(1b): [started] setting preset to disabled for "coreos-metadata.service" Feb 9 09:42:33.806758 ignition[859]: INFO : files: op(1b): op(1c): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 09:42:33.830018 kernel: kauditd_printk_skb: 22 callbacks suppressed Feb 9 09:42:33.830040 kernel: audit: type=1130 audit(1707471753.824:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.830114 ignition[859]: INFO : files: op(1b): op(1c): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 09:42:33.830114 ignition[859]: INFO : files: op(1b): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 09:42:33.830114 ignition[859]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:42:33.830114 ignition[859]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:42:33.830114 ignition[859]: INFO : files: files passed Feb 9 09:42:33.830114 ignition[859]: INFO : Ignition finished successfully Feb 9 09:42:33.843238 kernel: audit: type=1130 audit(1707471753.833:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.843258 kernel: audit: type=1131 audit(1707471753.833:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.843268 kernel: audit: type=1130 audit(1707471753.838:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.823551 systemd[1]: Finished ignition-files.service. Feb 9 09:42:33.825977 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 09:42:33.845296 initrd-setup-root-after-ignition[884]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 9 09:42:33.829008 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 09:42:33.847755 initrd-setup-root-after-ignition[887]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 09:42:33.829736 systemd[1]: Starting ignition-quench.service... Feb 9 09:42:33.832596 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 09:42:33.832686 systemd[1]: Finished ignition-quench.service. Feb 9 09:42:33.836501 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 09:42:33.838738 systemd[1]: Reached target ignition-complete.target. Feb 9 09:42:33.843093 systemd[1]: Starting initrd-parse-etc.service... Feb 9 09:42:33.856723 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Feb 9 09:42:33.856850 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 09:42:33.862194 kernel: audit: type=1130 audit(1707471753.857:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.862217 kernel: audit: type=1131 audit(1707471753.857:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.858240 systemd[1]: Reached target initrd-fs.target.
Feb 9 09:42:33.862879 systemd[1]: Reached target initrd.target.
Feb 9 09:42:33.863931 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 09:42:33.864837 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 09:42:33.875591 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 09:42:33.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.877129 systemd[1]: Starting initrd-cleanup.service...
Feb 9 09:42:33.879743 kernel: audit: type=1130 audit(1707471753.876:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.885504 systemd[1]: Stopped target nss-lookup.target.
Feb 9 09:42:33.886393 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 09:42:33.887674 systemd[1]: Stopped target timers.target.
Feb 9 09:42:33.888819 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 09:42:33.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.888951 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 09:42:33.893295 kernel: audit: type=1131 audit(1707471753.889:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.890028 systemd[1]: Stopped target initrd.target.
Feb 9 09:42:33.892907 systemd[1]: Stopped target basic.target.
Feb 9 09:42:33.893910 systemd[1]: Stopped target ignition-complete.target.
Feb 9 09:42:33.895052 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 09:42:33.896187 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 09:42:33.897498 systemd[1]: Stopped target remote-fs.target.
Feb 9 09:42:33.899994 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 09:42:33.901039 systemd[1]: Stopped target sysinit.target.
Feb 9 09:42:33.902015 systemd[1]: Stopped target local-fs.target.
Feb 9 09:42:33.903132 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 09:42:33.904224 systemd[1]: Stopped target swap.target.
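The block above is systemd walking the initrd dependency graph backwards: targets are stopped before the services they pulled in. A small sketch for inspecting unit state on a live machine with systemctl on PATH (not part of the boot flow itself):

    import subprocess

    def unit_state(unit):
        # `systemctl is-active` prints one of: active, inactive, failed, ...
        out = subprocess.run(["systemctl", "is-active", unit],
                             capture_output=True, text=True)
        return out.stdout.strip()

    for target in ["timers.target", "basic.target", "swap.target"]:
        print(target, unit_state(target))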
Feb 9 09:42:33.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.905346 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 09:42:33.910163 kernel: audit: type=1131 audit(1707471753.906:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.905469 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 09:42:33.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.906820 systemd[1]: Stopped target cryptsetup.target.
Feb 9 09:42:33.914424 kernel: audit: type=1131 audit(1707471753.910:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.909638 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 09:42:33.909752 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 09:42:33.910976 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 09:42:33.911072 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 09:42:33.914069 systemd[1]: Stopped target paths.target.
Feb 9 09:42:33.915175 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 09:42:33.920334 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 09:42:33.921221 systemd[1]: Stopped target slices.target.
Feb 9 09:42:33.922477 systemd[1]: Stopped target sockets.target.
Feb 9 09:42:33.923582 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 09:42:33.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.923699 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 09:42:33.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.925057 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 09:42:33.925152 systemd[1]: Stopped ignition-files.service.
Feb 9 09:42:33.930155 iscsid[748]: iscsid shutting down.
Feb 9 09:42:33.927149 systemd[1]: Stopping ignition-mount.service...
Feb 9 09:42:33.928306 systemd[1]: Stopping iscsid.service...
Feb 9 09:42:33.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.930566 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
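Each SERVICE_START/SERVICE_STOP audit record above carries the unit name and outcome inside the msg='...' field. A small parser written against exactly that field layout:

    import re

    AUDIT_RE = re.compile(
        r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?unit=([\w@\\.-]+) .*?res=(\w+)")

    def parse_audit(line):
        m = AUDIT_RE.search(line)
        return m.groups() if m else None

    line = ("Feb 9 09:42:33.910000 audit[1]: SERVICE_STOP pid=1 uid=0 "
            "auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue "
            "comm=\"systemd\" exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? "
            "terminal=? res=success'")
    print(parse_audit(line))  # ('SERVICE_STOP', 'dracut-initqueue', 'success')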
Feb 9 09:42:33.934192 ignition[900]: INFO : Ignition 2.14.0
Feb 9 09:42:33.934192 ignition[900]: INFO : Stage: umount
Feb 9 09:42:33.934192 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 09:42:33.934192 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 09:42:33.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.930691 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 09:42:33.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.939138 ignition[900]: INFO : umount: umount passed
Feb 9 09:42:33.939138 ignition[900]: INFO : Ignition finished successfully
Feb 9 09:42:33.932558 systemd[1]: Stopping sysroot-boot.service...
Feb 9 09:42:33.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.935895 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 09:42:33.936046 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 09:42:33.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.937150 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 09:42:33.937251 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 09:42:33.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.939971 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 09:42:33.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.940060 systemd[1]: Stopped iscsid.service.
Feb 9 09:42:33.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.942171 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 09:42:33.942694 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 09:42:33.942786 systemd[1]: Stopped ignition-mount.service.
Feb 9 09:42:33.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.944745 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 09:42:33.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.944822 systemd[1]: Closed iscsid.socket.
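The umount-stage output uses the same op(N) bracketing as the files stage: every [started] should eventually be matched by a [finished]. A sketch that checks that invariant over a captured log (for nested records like "op(10): op(11): [started]" the regex deliberately binds to the innermost op, the one the phase applies to):

    import re

    def unbalanced_ops(lines):
        open_ops = set()
        for line in lines:
            m = re.search(r"(op\([0-9a-f]+\)): \[(started|finished)\]", line)
            if not m:
                continue
            op, phase = m.groups()
            if phase == "started":
                open_ops.add(op)
            else:
                open_ops.discard(op)
        return open_ops  # empty set means every op completed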
Feb 9 09:42:33.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.945367 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 09:42:33.945406 systemd[1]: Stopped ignition-disks.service.
Feb 9 09:42:33.946663 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 09:42:33.946705 systemd[1]: Stopped ignition-kargs.service.
Feb 9 09:42:33.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.947710 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 09:42:33.947759 systemd[1]: Stopped ignition-setup.service.
Feb 9 09:42:33.949419 systemd[1]: Stopping iscsiuio.service...
Feb 9 09:42:33.950630 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 09:42:33.950719 systemd[1]: Finished initrd-cleanup.service.
Feb 9 09:42:33.951856 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 09:42:33.951936 systemd[1]: Stopped iscsiuio.service.
Feb 9 09:42:33.952945 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 09:42:33.953022 systemd[1]: Stopped sysroot-boot.service.
Feb 9 09:42:33.954610 systemd[1]: Stopped target network.target.
Feb 9 09:42:33.955691 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 09:42:33.955724 systemd[1]: Closed iscsiuio.socket.
Feb 9 09:42:33.956697 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 09:42:33.956747 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 09:42:33.957979 systemd[1]: Stopping systemd-networkd.service...
Feb 9 09:42:33.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.958964 systemd[1]: Stopping systemd-resolved.service...
Feb 9 09:42:33.966331 systemd-networkd[742]: eth0: DHCPv6 lease lost
Feb 9 09:42:33.967493 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 09:42:33.967590 systemd[1]: Stopped systemd-networkd.service.
Feb 9 09:42:33.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.968993 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 09:42:33.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.969026 systemd[1]: Closed systemd-networkd.socket.
Feb 9 09:42:33.971622 systemd[1]: Stopping network-cleanup.service...
Feb 9 09:42:33.977000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 09:42:33.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.972829 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 09:42:33.972881 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 09:42:33.974351 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 09:42:33.974388 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 09:42:33.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.976228 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 09:42:33.976267 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 09:42:33.977711 systemd[1]: Stopping systemd-udevd.service...
Feb 9 09:42:33.979622 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 9 09:42:33.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.980156 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 09:42:33.980247 systemd[1]: Stopped systemd-resolved.service.
Feb 9 09:42:33.983659 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 09:42:33.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.983793 systemd[1]: Stopped systemd-udevd.service.
Feb 9 09:42:33.989000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 09:42:33.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.984910 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 09:42:33.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.984948 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 09:42:33.986232 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 09:42:33.986262 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 09:42:33.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.987407 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 09:42:33.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.987446 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 09:42:33.988578 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 09:42:33.988621 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 09:42:33.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:33.989924 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 09:42:33.989966 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 09:42:33.991839 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
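The "audit: BPF prog-id=N op=LOAD/UNLOAD" records scattered through this teardown come in matched pairs as systemd swaps out its BPF programs. A tally that reports which program IDs remain live at the end of a capture (sketch over the log text):

    import re

    def live_bpf_progs(lines):
        live = set()
        for line in lines:
            m = re.search(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)", line)
            if not m:
                continue
            prog, op = int(m.group(1)), m.group(2)
            if op == "LOAD":
                live.add(prog)
            else:
                live.discard(prog)
        return live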
Feb 9 09:42:33.993023 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 09:42:33.993081 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 09:42:33.994868 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 09:42:33.994978 systemd[1]: Stopped network-cleanup.service.
Feb 9 09:42:33.997375 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 09:42:33.997470 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 09:42:33.998761 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 09:42:34.000632 systemd[1]: Starting initrd-switch-root.service...
Feb 9 09:42:34.008000 systemd[1]: Switching root.
Feb 9 09:42:34.027601 systemd-journald[290]: Journal stopped
Feb 9 09:42:35.994542 systemd-journald[290]: Received SIGTERM from PID 1 (systemd).
Feb 9 09:42:35.994603 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 09:42:35.994620 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 09:42:35.994634 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 09:42:35.994645 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 09:42:35.994654 kernel: SELinux: policy capability open_perms=1
Feb 9 09:42:35.994666 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 09:42:35.994675 kernel: SELinux: policy capability always_check_network=0
Feb 9 09:42:35.994685 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 09:42:35.994694 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 09:42:35.994703 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 09:42:35.994712 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 09:42:35.994732 systemd[1]: Successfully loaded SELinux policy in 34.650ms.
Feb 9 09:42:35.994758 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.067ms.
Feb 9 09:42:35.994770 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 09:42:35.994781 systemd[1]: Detected virtualization kvm.
Feb 9 09:42:35.994792 systemd[1]: Detected architecture arm64.
Feb 9 09:42:35.994802 systemd[1]: Detected first boot.
Feb 9 09:42:35.994813 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 09:42:35.994825 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 09:42:35.994836 systemd[1]: Populated /etc with preset unit settings.
Feb 9 09:42:35.994850 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 09:42:35.994863 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 09:42:35.994875 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 09:42:35.994886 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 9 09:42:35.994898 systemd[1]: Stopped initrd-switch-root.service.
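The systemd 252 feature string above records build-time options: a plus sign means the option was compiled in, a minus means it was not. Splitting it mechanically (the string is copied verbatim from the log):

    feats = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
             "-GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN "
             "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
             "-QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
             "-XKBCOMMON +UTMP +SYSVINIT")
    enabled = {f[1:] for f in feats.split() if f.startswith("+")}
    disabled = {f[1:] for f in feats.split() if f.startswith("-")}
    print(sorted(disabled))  # ACL, APPARMOR, BPF_FRAMEWORK, ELFUTILS, ...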
Feb 9 09:42:35.994911 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 9 09:42:35.994922 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 09:42:35.994933 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 09:42:35.994943 systemd[1]: Created slice system-getty.slice.
Feb 9 09:42:35.994953 systemd[1]: Created slice system-modprobe.slice.
Feb 9 09:42:35.994964 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 09:42:35.994974 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 09:42:35.994985 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 09:42:35.994995 systemd[1]: Created slice user.slice.
Feb 9 09:42:35.995007 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 09:42:35.995018 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 09:42:35.995029 systemd[1]: Set up automount boot.automount.
Feb 9 09:42:35.995040 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 09:42:35.995050 systemd[1]: Stopped target initrd-switch-root.target.
Feb 9 09:42:35.995060 systemd[1]: Stopped target initrd-fs.target.
Feb 9 09:42:35.995071 systemd[1]: Stopped target initrd-root-fs.target.
Feb 9 09:42:35.995083 systemd[1]: Reached target integritysetup.target.
Feb 9 09:42:35.995095 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 09:42:35.995106 systemd[1]: Reached target remote-fs.target.
Feb 9 09:42:35.995117 systemd[1]: Reached target slices.target.
Feb 9 09:42:35.995128 systemd[1]: Reached target swap.target.
Feb 9 09:42:35.995138 systemd[1]: Reached target torcx.target.
Feb 9 09:42:35.995149 systemd[1]: Reached target veritysetup.target.
Feb 9 09:42:35.995159 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 09:42:35.995170 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 09:42:35.995191 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 09:42:35.995203 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 09:42:35.995213 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 09:42:35.995226 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 09:42:35.995239 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 09:42:35.995250 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 09:42:35.995260 systemd[1]: Mounting media.mount...
Feb 9 09:42:35.995271 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 09:42:35.995296 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 09:42:35.995308 systemd[1]: Mounting tmp.mount...
Feb 9 09:42:35.995320 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 09:42:35.995331 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 09:42:35.995342 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 09:42:35.995353 systemd[1]: Starting modprobe@configfs.service...
Feb 9 09:42:35.995364 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 09:42:35.995374 systemd[1]: Starting modprobe@drm.service...
Feb 9 09:42:35.995385 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 09:42:35.995396 systemd[1]: Starting modprobe@fuse.service...
Feb 9 09:42:35.995406 systemd[1]: Starting modprobe@loop.service...
Feb 9 09:42:35.995419 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 09:42:35.995429 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 9 09:42:35.995441 systemd[1]: Stopped systemd-fsck-root.service.
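Slice names such as system-addon\x2dconfig.slice use systemd's unit-name escaping: a literal "-" inside a name component is written \x2d, because "-" itself acts as the hierarchy separator (systemd-escape -u performs this for real). A minimal decoder:

    import re

    def systemd_unescape(name):
        # Expand any \xNN escape; systemd writes a literal "-" as \x2d.
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), name)

    print(systemd_unescape(r"system-addon\x2dconfig.slice"))
    # -> system-addon-config.slice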
Feb 9 09:42:35.995451 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 9 09:42:35.995461 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 9 09:42:35.995471 systemd[1]: Stopped systemd-journald.service.
Feb 9 09:42:35.995483 systemd[1]: Starting systemd-journald.service...
Feb 9 09:42:35.995494 kernel: fuse: init (API version 7.34)
Feb 9 09:42:35.995504 systemd[1]: Starting systemd-modules-load.service...
Feb 9 09:42:35.995516 kernel: loop: module loaded
Feb 9 09:42:35.995527 systemd[1]: Starting systemd-network-generator.service...
Feb 9 09:42:35.995540 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 09:42:35.995551 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 09:42:35.995561 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 9 09:42:35.995571 systemd[1]: Stopped verity-setup.service.
Feb 9 09:42:35.995585 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 09:42:35.995595 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 09:42:35.995606 systemd[1]: Mounted media.mount.
Feb 9 09:42:35.995617 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 09:42:35.995628 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 09:42:35.995638 systemd[1]: Mounted tmp.mount.
Feb 9 09:42:35.995650 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 09:42:35.995660 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 09:42:35.995671 systemd[1]: Finished modprobe@configfs.service.
Feb 9 09:42:35.995681 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 09:42:35.995691 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 09:42:35.995702 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 09:42:35.995714 systemd[1]: Finished modprobe@drm.service.
Feb 9 09:42:35.995743 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 09:42:35.995757 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 09:42:35.995770 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 09:42:35.995781 systemd[1]: Finished modprobe@fuse.service.
Feb 9 09:42:35.995791 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 09:42:35.995801 systemd[1]: Finished modprobe@loop.service.
Feb 9 09:42:35.995815 systemd-journald[995]: Journal started
Feb 9 09:42:35.995863 systemd-journald[995]: Runtime Journal (/run/log/journal/04309ce876074e0d9ca2c8a0e44a27ad) is 6.0M, max 48.7M, 42.6M free.
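The journald size line reports current usage, the cap, and the headroom for the runtime journal. Converting the human-readable figures back to bytes (assuming journald's "M" means MiB here) shows they are self-consistent: cap minus free is roughly the 6.0M in use.

    def to_bytes(size):
        units = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}
        return int(float(size[:-1]) * units[size[-1]])

    used, cap, free = to_bytes("6.0M"), to_bytes("48.7M"), to_bytes("42.6M")
    print((cap - free) / (1 << 20))  # ~6.1 MiB accounted as in use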
Feb 9 09:42:34.095000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 09:42:34.131000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 09:42:34.131000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 09:42:34.131000 audit: BPF prog-id=10 op=LOAD
Feb 9 09:42:34.131000 audit: BPF prog-id=10 op=UNLOAD
Feb 9 09:42:34.131000 audit: BPF prog-id=11 op=LOAD
Feb 9 09:42:34.131000 audit: BPF prog-id=11 op=UNLOAD
Feb 9 09:42:34.180000 audit[933]: AVC avc: denied { associate } for pid=933 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 09:42:34.180000 audit[933]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001cd8b2 a1=40000d0de0 a2=40000d70c0 a3=32 items=0 ppid=916 pid=933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:42:34.180000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 09:42:34.181000 audit[933]: AVC avc: denied { associate } for pid=933 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 09:42:34.181000 audit[933]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001cd989 a2=1ed a3=0 items=2 ppid=916 pid=933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:42:34.181000 audit: CWD cwd="/"
Feb 9 09:42:34.181000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 09:42:34.181000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 09:42:34.181000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 09:42:35.855000 audit: BPF prog-id=12 op=LOAD
Feb 9 09:42:35.855000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 09:42:35.855000 audit: BPF prog-id=13 op=LOAD
Feb 9 09:42:35.855000 audit: BPF prog-id=14 op=LOAD
Feb 9 09:42:35.855000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 09:42:35.855000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 09:42:35.856000 audit: BPF prog-id=15 op=LOAD
Feb 9 09:42:35.856000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 09:42:35.856000 audit: BPF prog-id=16 op=LOAD
Feb 9 09:42:35.856000 audit: BPF prog-id=17 op=LOAD
Feb 9 09:42:35.856000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 09:42:35.856000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 09:42:35.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:35.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:35.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:35.870000 audit: BPF prog-id=15 op=UNLOAD
Feb 9 09:42:35.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:35.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:35.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:35.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:35.951000 audit: BPF prog-id=18 op=LOAD
Feb 9 09:42:35.952000 audit: BPF prog-id=19 op=LOAD
Feb 9 09:42:35.953000 audit: BPF prog-id=20 op=LOAD
Feb 9 09:42:35.953000 audit: BPF prog-id=16 op=UNLOAD
Feb 9 09:42:35.953000 audit: BPF prog-id=17 op=UNLOAD
Feb 9 09:42:35.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:35.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:35.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:35.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:35.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:35.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:35.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:35.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:35.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:35.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:35.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:35.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:35.993000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 09:42:35.993000 audit[995]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc60ed2c0 a2=4000 a3=1 items=0 ppid=1 pid=995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:42:35.993000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 09:42:35.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:35.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:34.178625 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:42:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 09:42:35.848832 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 09:42:34.179121 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:42:34Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 09:42:35.848845 systemd[1]: Unnecessary job was removed for dev-vda6.device.
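The PROCTITLE records above hex-encode the generator's full command line, with NUL bytes separating arguments (the kernel truncates the field, which is why the logged value ends mid-path at ...6C61). Decoding just the first two arguments of the torcx-generator record:

    # Hex copied from the PROCTITLE record, truncated after the second argument.
    hexdata = (
        "2F7573722F6C69622F73797374656D642F73797374656D2D67656E65"
        "7261746F72732F746F7263782D67656E657261746F72002F72756E2F"
        "73797374656D642F67656E657261746F72"
    )
    argv = bytes.fromhex(hexdata).split(b"\x00")
    print([a.decode() for a in argv])
    # ['/usr/lib/systemd/system-generators/torcx-generator',
    #  '/run/systemd/generator']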
Feb 9 09:42:34.179139 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:42:34Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 09:42:35.857201 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 9 09:42:34.179170 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:42:34Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 09:42:34.179180 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:42:34Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 09:42:34.179217 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:42:34Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 09:42:34.179228 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:42:34Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 09:42:34.179439 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:42:34Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 09:42:34.179475 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:42:34Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 09:42:34.179488 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:42:34Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 09:42:34.179985 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:42:34Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 09:42:34.180018 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:42:34Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 09:42:34.180036 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:42:34Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 09:42:34.180050 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:42:34Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 09:42:34.180066 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:42:34Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 09:42:34.180079 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:42:34Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 09:42:35.603045 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:42:35Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 09:42:35.603310 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:42:35Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 09:42:35.998641 systemd[1]: Finished systemd-modules-load.service.
Feb 9 09:42:35.603414 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:42:35Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 09:42:35.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:35.603565 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:42:35Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 09:42:35.603614 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:42:35Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 09:42:35.603669 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:42:35Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 9 09:42:36.000328 systemd[1]: Started systemd-journald.service.
Feb 9 09:42:35.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:36.000911 systemd[1]: Finished systemd-network-generator.service.
Feb 9 09:42:36.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:36.002264 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 09:42:36.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:36.003574 systemd[1]: Reached target network-pre.target.
Feb 9 09:42:36.005643 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 09:42:36.007567 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 09:42:36.008333 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 09:42:36.011703 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 09:42:36.013627 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 09:42:36.014499 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 09:42:36.015601 systemd[1]: Starting systemd-random-seed.service...
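The store_paths list the generator parsed at the top of its run is exactly the order of the "store skipped" probes above: each candidate directory is checked in turn and missing ones are skipped. A sketch of that probe loop, with the paths copied verbatim from the log:

    import os

    STORE_PATHS = [
        "/usr/share/torcx/store",
        "/usr/share/oem/torcx/store/3510.3.2",
        "/usr/share/oem/torcx/store",
        "/var/lib/torcx/store/3510.3.2",
        "/var/lib/torcx/store",
    ]

    for p in STORE_PATHS:
        print(p, "ok" if os.path.isdir(p) else "skipped (no such directory)")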
Feb 9 09:42:36.016423 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 09:42:36.017519 systemd[1]: Starting systemd-sysctl.service...
Feb 9 09:42:36.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:36.020777 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 09:42:36.021733 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 09:42:36.022679 systemd-journald[995]: Time spent on flushing to /var/log/journal/04309ce876074e0d9ca2c8a0e44a27ad is 21.202ms for 1029 entries.
Feb 9 09:42:36.022679 systemd-journald[995]: System Journal (/var/log/journal/04309ce876074e0d9ca2c8a0e44a27ad) is 8.0M, max 195.6M, 187.6M free.
Feb 9 09:42:36.105547 systemd-journald[995]: Received client request to flush runtime journal.
Feb 9 09:42:36.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:36.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:36.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:36.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:36.022689 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 09:42:36.025909 systemd[1]: Starting systemd-sysusers.service...
Feb 9 09:42:36.028509 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 09:42:36.107166 udevadm[1034]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 9 09:42:36.029931 systemd[1]: Finished systemd-random-seed.service.
Feb 9 09:42:36.030786 systemd[1]: Reached target first-boot-complete.target.
Feb 9 09:42:36.032716 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 09:42:36.033596 systemd[1]: Finished systemd-sysctl.service.
Feb 9 09:42:36.049998 systemd[1]: Finished systemd-sysusers.service.
Feb 9 09:42:36.108402 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 09:42:36.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:36.383936 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 09:42:36.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:36.386087 systemd[1]: Starting systemd-udevd.service...
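The flush report above (21.202ms for 1029 entries) works out to roughly 20.6 microseconds per journal entry:

    ms_total, entries = 21.202, 1029
    print(ms_total / entries * 1000)  # ~20.6 microseconds per entry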
Feb 9 09:42:36.385000 audit: BPF prog-id=21 op=LOAD
Feb 9 09:42:36.385000 audit: BPF prog-id=22 op=LOAD
Feb 9 09:42:36.385000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 09:42:36.385000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 09:42:36.406954 systemd-udevd[1036]: Using default interface naming scheme 'v252'.
Feb 9 09:42:36.418435 systemd[1]: Started systemd-udevd.service.
Feb 9 09:42:36.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:36.420000 audit: BPF prog-id=23 op=LOAD
Feb 9 09:42:36.422493 systemd[1]: Starting systemd-networkd.service...
Feb 9 09:42:36.442000 audit: BPF prog-id=24 op=LOAD
Feb 9 09:42:36.446000 audit: BPF prog-id=25 op=LOAD
Feb 9 09:42:36.446000 audit: BPF prog-id=26 op=LOAD
Feb 9 09:42:36.447483 systemd[1]: Starting systemd-userdbd.service...
Feb 9 09:42:36.467654 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
Feb 9 09:42:36.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:36.482990 systemd[1]: Started systemd-userdbd.service.
Feb 9 09:42:36.518789 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 09:42:36.538705 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 09:42:36.541116 systemd-networkd[1046]: lo: Link UP
Feb 9 09:42:36.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:36.541128 systemd-networkd[1046]: lo: Gained carrier
Feb 9 09:42:36.541527 systemd-networkd[1046]: Enumeration completed
Feb 9 09:42:36.541633 systemd-networkd[1046]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 09:42:36.542593 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 09:42:36.543496 systemd[1]: Started systemd-networkd.service.
Feb 9 09:42:36.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:36.548544 systemd-networkd[1046]: eth0: Link UP
Feb 9 09:42:36.548553 systemd-networkd[1046]: eth0: Gained carrier
Feb 9 09:42:36.553431 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 09:42:36.561395 systemd-networkd[1046]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 9 09:42:36.587245 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 09:42:36.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:36.588081 systemd[1]: Reached target cryptsetup.target.
Feb 9 09:42:36.589986 systemd[1]: Starting lvm2-activation.service...
Feb 9 09:42:36.594350 lvm[1070]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 09:42:36.622259 systemd[1]: Finished lvm2-activation.service.
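The DHCPv4 lease above puts eth0 at 10.0.0.12 inside a /16. The standard library can expand what that prefix implies:

    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.12/16")
    print(iface.network)                      # 10.0.0.0/16
    print(iface.network.netmask)              # 255.255.0.0
    print(iface.network.broadcast_address)    # 10.0.255.255
    # The gateway handed out by the DHCP server sits inside the same network:
    print(ipaddress.ip_address("10.0.0.1") in iface.network)  # True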
Feb 9 09:42:36.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:36.623041 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:42:36.623706 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 09:42:36.623741 systemd[1]: Reached target local-fs.target. Feb 9 09:42:36.624558 systemd[1]: Reached target machines.target. Feb 9 09:42:36.626618 systemd[1]: Starting ldconfig.service... Feb 9 09:42:36.627494 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 09:42:36.627560 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:42:36.628640 systemd[1]: Starting systemd-boot-update.service... Feb 9 09:42:36.630781 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 09:42:36.633047 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 09:42:36.634674 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:42:36.634772 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:42:36.636230 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 09:42:36.640831 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1072 (bootctl) Feb 9 09:42:36.641834 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 09:42:36.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:36.645664 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 09:42:36.652830 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 09:42:36.664111 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 09:42:36.673394 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 09:42:36.695603 systemd-fsck[1080]: fsck.fat 4.2 (2021-01-31) Feb 9 09:42:36.695603 systemd-fsck[1080]: /dev/vda1: 236 files, 113719/258078 clusters Feb 9 09:42:36.697932 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 09:42:36.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:36.700214 systemd[1]: Mounting boot.mount... Feb 9 09:42:36.849657 systemd[1]: Mounted boot.mount. Feb 9 09:42:36.921311 systemd[1]: Finished systemd-boot-update.service. Feb 9 09:42:36.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:37.029015 systemd[1]: Finished systemd-tmpfiles-setup.service. 
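The "Duplicate line for path ..., ignoring" warnings above come from multiple tmpfiles.d fragments declaring the same path; systemd-tmpfiles keeps the first entry it sees and ignores the rest. A contrived illustration of the condition (hypothetical file, not the shipped Flatcar configuration):

  # hypothetical /etc/tmpfiles.d/demo.conf reproducing the warning (sketch)
  cat <<'EOF' | sudo tee /etc/tmpfiles.d/demo.conf
  d /run/demo 0755 root root -
  d /run/demo 1777 root root -
  EOF
  sudo systemd-tmpfiles --create /etc/tmpfiles.d/demo.conf   # warns: Duplicate line for path "/run/demo"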
Feb 9 09:42:37.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:37.031346 systemd[1]: Starting audit-rules.service... Feb 9 09:42:37.032884 systemd[1]: Starting clean-ca-certificates.service... Feb 9 09:42:37.036000 audit: BPF prog-id=27 op=LOAD Feb 9 09:42:37.034839 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 09:42:37.037186 systemd[1]: Starting systemd-resolved.service... Feb 9 09:42:37.039000 audit: BPF prog-id=28 op=LOAD Feb 9 09:42:37.040896 systemd[1]: Starting systemd-timesyncd.service... Feb 9 09:42:37.044230 systemd[1]: Starting systemd-update-utmp.service... Feb 9 09:42:37.045957 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 09:42:37.046736 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 09:42:37.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:37.048155 systemd[1]: Finished clean-ca-certificates.service. Feb 9 09:42:37.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:37.049497 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 09:42:37.052000 audit[1095]: SYSTEM_BOOT pid=1095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 09:42:37.055940 systemd[1]: Finished systemd-update-utmp.service. Feb 9 09:42:37.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:37.057750 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 09:42:37.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:37.090753 ldconfig[1071]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 09:42:37.091000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:42:37.092008 augenrules[1105]: No rules Feb 9 09:42:37.091000 audit[1105]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff4bf1ba0 a2=420 a3=0 items=0 ppid=1084 pid=1105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:37.091000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 09:42:37.096691 systemd[1]: Finished ldconfig.service. Feb 9 09:42:37.097593 systemd[1]: Finished audit-rules.service. 
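The audit PROCTITLE record above stores the audited command line hex-encoded, with NUL bytes separating the arguments. Decoding it with standard tools recovers the auditctl invocation that loaded the (empty, per augenrules) rule set:

  echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
    | xxd -r -p | tr '\0' ' '; echo
  # -> /sbin/auditctl -R /etc/audit/audit.rules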
Feb 9 09:42:37.099520 systemd[1]: Starting systemd-update-done.service... Feb 9 09:42:37.105544 systemd[1]: Finished systemd-update-done.service. Feb 9 09:42:37.106497 systemd[1]: Started systemd-timesyncd.service. Feb 9 09:42:37.106864 systemd-timesyncd[1089]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 9 09:42:37.106917 systemd-timesyncd[1089]: Initial clock synchronization to Fri 2024-02-09 09:42:37.279291 UTC. Feb 9 09:42:37.107500 systemd[1]: Reached target time-set.target. Feb 9 09:42:37.111910 systemd-resolved[1088]: Positive Trust Anchors: Feb 9 09:42:37.111926 systemd-resolved[1088]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:42:37.111952 systemd-resolved[1088]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:42:37.129524 systemd-resolved[1088]: Defaulting to hostname 'linux'. Feb 9 09:42:37.130953 systemd[1]: Started systemd-resolved.service. Feb 9 09:42:37.132603 systemd[1]: Reached target network.target. Feb 9 09:42:37.133935 systemd[1]: Reached target nss-lookup.target. Feb 9 09:42:37.134555 systemd[1]: Reached target sysinit.target. Feb 9 09:42:37.135178 systemd[1]: Started motdgen.path. Feb 9 09:42:37.135735 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 09:42:37.136652 systemd[1]: Started logrotate.timer. Feb 9 09:42:37.137260 systemd[1]: Started mdadm.timer. Feb 9 09:42:37.137890 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 09:42:37.138640 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 09:42:37.138675 systemd[1]: Reached target paths.target. Feb 9 09:42:37.139324 systemd[1]: Reached target timers.target. Feb 9 09:42:37.140350 systemd[1]: Listening on dbus.socket. Feb 9 09:42:37.141985 systemd[1]: Starting docker.socket... Feb 9 09:42:37.144762 systemd[1]: Listening on sshd.socket. Feb 9 09:42:37.145398 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:42:37.145809 systemd[1]: Listening on docker.socket. Feb 9 09:42:37.146624 systemd[1]: Reached target sockets.target. Feb 9 09:42:37.147325 systemd[1]: Reached target basic.target. Feb 9 09:42:37.148056 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:42:37.148086 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:42:37.149033 systemd[1]: Starting containerd.service... Feb 9 09:42:37.150654 systemd[1]: Starting dbus.service... Feb 9 09:42:37.152165 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 09:42:37.153874 systemd[1]: Starting extend-filesystems.service... Feb 9 09:42:37.154685 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 09:42:37.155832 systemd[1]: Starting motdgen.service... 
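At this point systemd-resolved has loaded the root DNSSEC trust anchor (the ". IN DS 20326 8 2 ..." record above is the well-known KSK-2017 DS) plus the default negative trust anchors, and systemd-timesyncd has synced the clock against 10.0.0.1:123. Both states can be inspected with their standard CLIs; a quick sketch:

  resolvectl status              # per-link DNS servers and DNSSEC state
  timedatectl timesync-status    # contacted NTP server, offset, poll interval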
Feb 9 09:42:37.160726 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 09:42:37.162463 systemd[1]: Starting prepare-critools.service... Feb 9 09:42:37.164383 systemd[1]: Starting prepare-helm.service... Feb 9 09:42:37.166156 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 09:42:37.168336 systemd[1]: Starting sshd-keygen.service... Feb 9 09:42:37.171202 systemd[1]: Starting systemd-logind.service... Feb 9 09:42:37.171949 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:42:37.172011 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 09:42:37.172471 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 09:42:37.174171 systemd[1]: Starting update-engine.service... Feb 9 09:42:37.176569 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 09:42:37.183764 jq[1116]: false Feb 9 09:42:37.179019 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 09:42:37.183886 jq[1136]: true Feb 9 09:42:37.179180 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 09:42:37.181428 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 09:42:37.181573 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 09:42:37.191311 extend-filesystems[1117]: Found vda Feb 9 09:42:37.191311 extend-filesystems[1117]: Found vda1 Feb 9 09:42:37.191311 extend-filesystems[1117]: Found vda2 Feb 9 09:42:37.191311 extend-filesystems[1117]: Found vda3 Feb 9 09:42:37.191311 extend-filesystems[1117]: Found usr Feb 9 09:42:37.191311 extend-filesystems[1117]: Found vda4 Feb 9 09:42:37.191311 extend-filesystems[1117]: Found vda6 Feb 9 09:42:37.191311 extend-filesystems[1117]: Found vda7 Feb 9 09:42:37.191311 extend-filesystems[1117]: Found vda9 Feb 9 09:42:37.191311 extend-filesystems[1117]: Checking size of /dev/vda9 Feb 9 09:42:37.211619 tar[1138]: ./ Feb 9 09:42:37.211619 tar[1138]: ./macvlan Feb 9 09:42:37.212001 tar[1140]: crictl Feb 9 09:42:37.212120 jq[1143]: true Feb 9 09:42:37.212217 tar[1141]: linux-arm64/helm Feb 9 09:42:37.213750 dbus-daemon[1115]: [system] SELinux support is enabled Feb 9 09:42:37.213888 systemd[1]: Started dbus.service. Feb 9 09:42:37.216267 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 09:42:37.216315 systemd[1]: Reached target system-config.target. Feb 9 09:42:37.217085 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 09:42:37.217100 systemd[1]: Reached target user-config.target. Feb 9 09:42:37.219414 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 09:42:37.219557 systemd[1]: Finished motdgen.service. 
Feb 9 09:42:37.246768 extend-filesystems[1117]: Resized partition /dev/vda9 Feb 9 09:42:37.255583 extend-filesystems[1168]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 09:42:37.270106 tar[1138]: ./static Feb 9 09:42:37.277387 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 9 09:42:37.284414 systemd-logind[1128]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 09:42:37.287523 systemd-logind[1128]: New seat seat0. Feb 9 09:42:37.295143 systemd[1]: Started systemd-logind.service. Feb 9 09:42:37.302707 tar[1138]: ./vlan Feb 9 09:42:37.307312 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 9 09:42:37.322947 update_engine[1131]: I0209 09:42:37.320474 1131 main.cc:92] Flatcar Update Engine starting Feb 9 09:42:37.326703 extend-filesystems[1168]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 09:42:37.326703 extend-filesystems[1168]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 09:42:37.326703 extend-filesystems[1168]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 9 09:42:37.330136 extend-filesystems[1117]: Resized filesystem in /dev/vda9 Feb 9 09:42:37.330033 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 09:42:37.330193 systemd[1]: Finished extend-filesystems.service. Feb 9 09:42:37.338105 bash[1171]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:42:37.338845 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 09:42:37.351424 systemd[1]: Started update-engine.service. Feb 9 09:42:37.351609 update_engine[1131]: I0209 09:42:37.351583 1131 update_check_scheduler.cc:74] Next update check in 8m39s Feb 9 09:42:37.354023 systemd[1]: Started locksmithd.service. Feb 9 09:42:37.363179 tar[1138]: ./portmap Feb 9 09:42:37.403604 tar[1138]: ./host-local Feb 9 09:42:37.408610 env[1142]: time="2024-02-09T09:42:37.408555320Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 09:42:37.451676 env[1142]: time="2024-02-09T09:42:37.451574520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 09:42:37.451839 env[1142]: time="2024-02-09T09:42:37.451811040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:42:37.453466 env[1142]: time="2024-02-09T09:42:37.453413880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:42:37.453466 env[1142]: time="2024-02-09T09:42:37.453450040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:42:37.453689 env[1142]: time="2024-02-09T09:42:37.453656840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:42:37.453689 env[1142]: time="2024-02-09T09:42:37.453683440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
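extend-filesystems grows the root ext4 filesystem on-line from 553472 to 1864699 4k blocks (1864699 × 4096 bytes ≈ 7.1 GiB) while it is mounted on /. The equivalent manual operation is a single resize2fs call; a sketch assuming the same /dev/vda9 layout as this host:

  sudo resize2fs /dev/vda9   # on-line grow of the mounted ext4 fs to fill its partition
  df -h /                    # confirm the new size (≈ 7.1 GiB)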
type=io.containerd.snapshotter.v1 Feb 9 09:42:37.453765 env[1142]: time="2024-02-09T09:42:37.453698040Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 09:42:37.453765 env[1142]: time="2024-02-09T09:42:37.453707800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 09:42:37.453805 env[1142]: time="2024-02-09T09:42:37.453795120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:42:37.454078 env[1142]: time="2024-02-09T09:42:37.454051760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:42:37.454212 env[1142]: time="2024-02-09T09:42:37.454187840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:42:37.454212 env[1142]: time="2024-02-09T09:42:37.454209680Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 09:42:37.454292 env[1142]: time="2024-02-09T09:42:37.454265040Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 09:42:37.454292 env[1142]: time="2024-02-09T09:42:37.454296400Z" level=info msg="metadata content store policy set" policy=shared Feb 9 09:42:37.456800 tar[1138]: ./vrf Feb 9 09:42:37.457387 env[1142]: time="2024-02-09T09:42:37.457358840Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 09:42:37.457436 env[1142]: time="2024-02-09T09:42:37.457391760Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 09:42:37.457436 env[1142]: time="2024-02-09T09:42:37.457404760Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 09:42:37.457436 env[1142]: time="2024-02-09T09:42:37.457429000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 09:42:37.457492 env[1142]: time="2024-02-09T09:42:37.457443600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 09:42:37.457492 env[1142]: time="2024-02-09T09:42:37.457456880Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 09:42:37.457492 env[1142]: time="2024-02-09T09:42:37.457469840Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 09:42:37.457913 env[1142]: time="2024-02-09T09:42:37.457886160Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 09:42:37.457951 env[1142]: time="2024-02-09T09:42:37.457914880Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 09:42:37.457951 env[1142]: time="2024-02-09T09:42:37.457928640Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 09:42:37.457951 env[1142]: time="2024-02-09T09:42:37.457941040Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 9 09:42:37.458012 env[1142]: time="2024-02-09T09:42:37.457953480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 09:42:37.458081 env[1142]: time="2024-02-09T09:42:37.458059080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 09:42:37.458158 env[1142]: time="2024-02-09T09:42:37.458138400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 09:42:37.458400 env[1142]: time="2024-02-09T09:42:37.458382160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 09:42:37.458433 env[1142]: time="2024-02-09T09:42:37.458410520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.458433 env[1142]: time="2024-02-09T09:42:37.458424720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 09:42:37.458543 env[1142]: time="2024-02-09T09:42:37.458526560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.458575 env[1142]: time="2024-02-09T09:42:37.458542240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.458575 env[1142]: time="2024-02-09T09:42:37.458555680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.458575 env[1142]: time="2024-02-09T09:42:37.458567360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.458636 env[1142]: time="2024-02-09T09:42:37.458579040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.458636 env[1142]: time="2024-02-09T09:42:37.458609400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.458636 env[1142]: time="2024-02-09T09:42:37.458624960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.458694 env[1142]: time="2024-02-09T09:42:37.458636760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.458694 env[1142]: time="2024-02-09T09:42:37.458652320Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 09:42:37.458814 env[1142]: time="2024-02-09T09:42:37.458789480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.458847 env[1142]: time="2024-02-09T09:42:37.458813680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.458847 env[1142]: time="2024-02-09T09:42:37.458826080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.458847 env[1142]: time="2024-02-09T09:42:37.458837560Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 09:42:37.458911 env[1142]: time="2024-02-09T09:42:37.458851760Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 09:42:37.458911 env[1142]: time="2024-02-09T09:42:37.458862480Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 09:42:37.458911 env[1142]: time="2024-02-09T09:42:37.458878680Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 09:42:37.458971 env[1142]: time="2024-02-09T09:42:37.458909720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.459151 env[1142]: time="2024-02-09T09:42:37.459099280Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 09:42:37.461602 env[1142]: time="2024-02-09T09:42:37.459157600Z" level=info msg="Connect containerd service" Feb 9 09:42:37.461602 env[1142]: time="2024-02-09T09:42:37.459187240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 09:42:37.461602 env[1142]: time="2024-02-09T09:42:37.460029800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:42:37.461602 env[1142]: time="2024-02-09T09:42:37.460357920Z" level=info msg="Start subscribing containerd event" Feb 9 09:42:37.461602 env[1142]: time="2024-02-09T09:42:37.460468920Z" level=info msg="Start recovering state" 
Feb 9 09:42:37.461602 env[1142]: time="2024-02-09T09:42:37.460531320Z" level=info msg="Start event monitor" Feb 9 09:42:37.461602 env[1142]: time="2024-02-09T09:42:37.460550640Z" level=info msg="Start snapshots syncer" Feb 9 09:42:37.461602 env[1142]: time="2024-02-09T09:42:37.460559720Z" level=info msg="Start cni network conf syncer for default" Feb 9 09:42:37.461602 env[1142]: time="2024-02-09T09:42:37.460567640Z" level=info msg="Start streaming server" Feb 9 09:42:37.461602 env[1142]: time="2024-02-09T09:42:37.461389560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 09:42:37.461602 env[1142]: time="2024-02-09T09:42:37.461511520Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 09:42:37.461702 systemd[1]: Started containerd.service. Feb 9 09:42:37.464940 env[1142]: time="2024-02-09T09:42:37.464841040Z" level=info msg="containerd successfully booted in 0.062917s" Feb 9 09:42:37.512063 tar[1138]: ./bridge Feb 9 09:42:37.575071 tar[1138]: ./tuning Feb 9 09:42:37.627539 tar[1138]: ./firewall Feb 9 09:42:37.667151 tar[1138]: ./host-device Feb 9 09:42:37.698938 tar[1138]: ./sbr Feb 9 09:42:37.709398 systemd[1]: Finished prepare-critools.service. Feb 9 09:42:37.716173 locksmithd[1174]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 09:42:37.727543 tar[1138]: ./loopback Feb 9 09:42:37.755073 tar[1138]: ./dhcp Feb 9 09:42:37.768350 tar[1141]: linux-arm64/LICENSE Feb 9 09:42:37.768419 tar[1141]: linux-arm64/README.md Feb 9 09:42:37.772336 systemd[1]: Finished prepare-helm.service. Feb 9 09:42:37.826856 tar[1138]: ./ptp Feb 9 09:42:37.854893 tar[1138]: ./ipvlan Feb 9 09:42:37.882168 tar[1138]: ./bandwidth Feb 9 09:42:37.918181 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 09:42:38.318746 systemd-networkd[1046]: eth0: Gained IPv6LL Feb 9 09:42:40.126524 sshd_keygen[1139]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 09:42:40.143648 systemd[1]: Finished sshd-keygen.service. Feb 9 09:42:40.145819 systemd[1]: Starting issuegen.service... Feb 9 09:42:40.150155 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 09:42:40.150426 systemd[1]: Finished issuegen.service. Feb 9 09:42:40.152405 systemd[1]: Starting systemd-user-sessions.service... Feb 9 09:42:40.158226 systemd[1]: Finished systemd-user-sessions.service. Feb 9 09:42:40.160270 systemd[1]: Started getty@tty1.service. Feb 9 09:42:40.162093 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 09:42:40.163047 systemd[1]: Reached target getty.target. Feb 9 09:42:40.163767 systemd[1]: Reached target multi-user.target. Feb 9 09:42:40.165742 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 09:42:40.171625 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 09:42:40.171764 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 09:42:40.172736 systemd[1]: Startup finished in 607ms (kernel) + 15.450s (initrd) + 6.117s (userspace) = 22.175s. Feb 9 09:42:41.775844 systemd[1]: Created slice system-sshd.slice. Feb 9 09:42:41.776967 systemd[1]: Started sshd@0-10.0.0.12:22-10.0.0.1:37086.service. Feb 9 09:42:41.831887 sshd[1204]: Accepted publickey for core from 10.0.0.1 port 37086 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:42:41.834326 sshd[1204]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:42:41.843085 systemd-logind[1128]: New session 1 of user core. 
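The long "Start cri plugin with config" dump above is containerd 1.6 logging its effective CRI configuration: the overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup=true, and registry.k8s.io/pause:3.6 as the sandbox image. A minimal config.toml sketch that would yield those same settings (illustrative only, not Flatcar's shipped file):

  cat <<'EOF' | sudo tee /etc/containerd/config.toml
  version = 2
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.6"
    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true
  EOF
  sudo systemctl restart containerd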
Feb 9 09:42:41.844017 systemd[1]: Created slice user-500.slice. Feb 9 09:42:41.845088 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 09:42:41.852835 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 09:42:41.854124 systemd[1]: Starting user@500.service... Feb 9 09:42:41.856754 (systemd)[1207]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:42:41.920735 systemd[1207]: Queued start job for default target default.target. Feb 9 09:42:41.921251 systemd[1207]: Reached target paths.target. Feb 9 09:42:41.921271 systemd[1207]: Reached target sockets.target. Feb 9 09:42:41.921282 systemd[1207]: Reached target timers.target. Feb 9 09:42:41.921315 systemd[1207]: Reached target basic.target. Feb 9 09:42:41.921373 systemd[1207]: Reached target default.target. Feb 9 09:42:41.921399 systemd[1207]: Startup finished in 59ms. Feb 9 09:42:41.921456 systemd[1]: Started user@500.service. Feb 9 09:42:41.922762 systemd[1]: Started session-1.scope. Feb 9 09:42:41.974486 systemd[1]: Started sshd@1-10.0.0.12:22-10.0.0.1:37096.service. Feb 9 09:42:42.026071 sshd[1216]: Accepted publickey for core from 10.0.0.1 port 37096 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:42:42.027550 sshd[1216]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:42:42.031704 systemd[1]: Started session-2.scope. Feb 9 09:42:42.032026 systemd-logind[1128]: New session 2 of user core. Feb 9 09:42:42.089336 sshd[1216]: pam_unix(sshd:session): session closed for user core Feb 9 09:42:42.091749 systemd[1]: sshd@1-10.0.0.12:22-10.0.0.1:37096.service: Deactivated successfully. Feb 9 09:42:42.092337 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 09:42:42.092824 systemd-logind[1128]: Session 2 logged out. Waiting for processes to exit. Feb 9 09:42:42.094108 systemd[1]: Started sshd@2-10.0.0.12:22-10.0.0.1:37106.service. Feb 9 09:42:42.094694 systemd-logind[1128]: Removed session 2. Feb 9 09:42:42.134977 sshd[1222]: Accepted publickey for core from 10.0.0.1 port 37106 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:42:42.136092 sshd[1222]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:42:42.139248 systemd-logind[1128]: New session 3 of user core. Feb 9 09:42:42.140054 systemd[1]: Started session-3.scope. Feb 9 09:42:42.188607 sshd[1222]: pam_unix(sshd:session): session closed for user core Feb 9 09:42:42.191125 systemd[1]: sshd@2-10.0.0.12:22-10.0.0.1:37106.service: Deactivated successfully. Feb 9 09:42:42.191786 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 09:42:42.192382 systemd-logind[1128]: Session 3 logged out. Waiting for processes to exit. Feb 9 09:42:42.193650 systemd[1]: Started sshd@3-10.0.0.12:22-10.0.0.1:37120.service. Feb 9 09:42:42.194342 systemd-logind[1128]: Removed session 3. Feb 9 09:42:42.233785 sshd[1228]: Accepted publickey for core from 10.0.0.1 port 37120 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:42:42.234951 sshd[1228]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:42:42.237960 systemd-logind[1128]: New session 4 of user core. Feb 9 09:42:42.238871 systemd[1]: Started session-4.scope. Feb 9 09:42:42.292240 sshd[1228]: pam_unix(sshd:session): session closed for user core Feb 9 09:42:42.295985 systemd[1]: Started sshd@4-10.0.0.12:22-10.0.0.1:37128.service. Feb 9 09:42:42.296592 systemd[1]: sshd@3-10.0.0.12:22-10.0.0.1:37120.service: Deactivated successfully. 
Feb 9 09:42:42.297275 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 09:42:42.297901 systemd-logind[1128]: Session 4 logged out. Waiting for processes to exit. Feb 9 09:42:42.298689 systemd-logind[1128]: Removed session 4. Feb 9 09:42:42.336367 sshd[1233]: Accepted publickey for core from 10.0.0.1 port 37128 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:42:42.337474 sshd[1233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:42:42.340558 systemd-logind[1128]: New session 5 of user core. Feb 9 09:42:42.341468 systemd[1]: Started session-5.scope. Feb 9 09:42:42.398879 sudo[1237]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 09:42:42.399391 sudo[1237]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:42:43.152288 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:42:43.158924 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:42:43.159206 systemd[1]: Reached target network-online.target. Feb 9 09:42:43.160532 systemd[1]: Starting docker.service... Feb 9 09:42:43.279048 env[1256]: time="2024-02-09T09:42:43.278988361Z" level=info msg="Starting up" Feb 9 09:42:43.280741 env[1256]: time="2024-02-09T09:42:43.280706476Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:42:43.280741 env[1256]: time="2024-02-09T09:42:43.280732116Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:42:43.280830 env[1256]: time="2024-02-09T09:42:43.280751933Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:42:43.280830 env[1256]: time="2024-02-09T09:42:43.280821655Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:42:43.283047 env[1256]: time="2024-02-09T09:42:43.283020387Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:42:43.283047 env[1256]: time="2024-02-09T09:42:43.283045825Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:42:43.283144 env[1256]: time="2024-02-09T09:42:43.283065804Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:42:43.283144 env[1256]: time="2024-02-09T09:42:43.283075712Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:42:43.286767 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1374704958-merged.mount: Deactivated successfully. Feb 9 09:42:43.503885 env[1256]: time="2024-02-09T09:42:43.503423336Z" level=info msg="Loading containers: start." Feb 9 09:42:43.603349 kernel: Initializing XFRM netlink socket Feb 9 09:42:43.627823 env[1256]: time="2024-02-09T09:42:43.627767273Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 09:42:43.683101 systemd-networkd[1046]: docker0: Link UP Feb 9 09:42:43.693551 env[1256]: time="2024-02-09T09:42:43.693511962Z" level=info msg="Loading containers: done." Feb 9 09:42:43.712753 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2015430168-merged.mount: Deactivated successfully. 
Feb 9 09:42:43.715511 env[1256]: time="2024-02-09T09:42:43.715456045Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 09:42:43.715655 env[1256]: time="2024-02-09T09:42:43.715636054Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 09:42:43.715772 env[1256]: time="2024-02-09T09:42:43.715738495Z" level=info msg="Daemon has completed initialization" Feb 9 09:42:43.733722 systemd[1]: Started docker.service. Feb 9 09:42:43.741246 env[1256]: time="2024-02-09T09:42:43.741094095Z" level=info msg="API listen on /run/docker.sock" Feb 9 09:42:43.760234 systemd[1]: Reloading. Feb 9 09:42:43.805961 /usr/lib/systemd/system-generators/torcx-generator[1398]: time="2024-02-09T09:42:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:42:43.805989 /usr/lib/systemd/system-generators/torcx-generator[1398]: time="2024-02-09T09:42:43Z" level=info msg="torcx already run" Feb 9 09:42:43.859093 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:42:43.859113 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:42:43.874389 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:42:43.935708 systemd[1]: Started kubelet.service. Feb 9 09:42:44.149416 kubelet[1435]: E0209 09:42:44.149260 1435 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:42:44.151556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:42:44.151687 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:42:44.426925 env[1142]: time="2024-02-09T09:42:44.426803096Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 09:42:45.221536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1708468486.mount: Deactivated successfully. 
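The kubelet failure above is a flag-validation error: since the dockershim removal, kubelet refuses to start unless --container-runtime-endpoint points at a CRI socket, and the unit as started here never sets it. A hypothetical drop-in that would satisfy the check (sketch only; the drop-in name and the assumption that the unit expands $KUBELET_EXTRA_ARGS are mine, not the actual Flatcar fix):

  # hypothetical kubelet.service drop-in (assumes the unit reads KUBELET_EXTRA_ARGS)
  sudo mkdir -p /etc/systemd/system/kubelet.service.d
  cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/10-cri.conf
  [Service]
  Environment="KUBELET_EXTRA_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
  EOF
  sudo systemctl daemon-reload && sudo systemctl restart kubelet.service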
Feb 9 09:42:46.780615 env[1142]: time="2024-02-09T09:42:46.780548917Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:46.781951 env[1142]: time="2024-02-09T09:42:46.781915528Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:46.783568 env[1142]: time="2024-02-09T09:42:46.783535804Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:46.785155 env[1142]: time="2024-02-09T09:42:46.785120822Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:46.786064 env[1142]: time="2024-02-09T09:42:46.786025677Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 9 09:42:46.796030 env[1142]: time="2024-02-09T09:42:46.795996973Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 09:42:48.875099 env[1142]: time="2024-02-09T09:42:48.875048356Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:48.876957 env[1142]: time="2024-02-09T09:42:48.876923694Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:48.878784 env[1142]: time="2024-02-09T09:42:48.878755104Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:48.880557 env[1142]: time="2024-02-09T09:42:48.880533576Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:48.882120 env[1142]: time="2024-02-09T09:42:48.882081709Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 9 09:42:48.891291 env[1142]: time="2024-02-09T09:42:48.891259638Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 09:42:50.269808 env[1142]: time="2024-02-09T09:42:50.269757559Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:50.271297 env[1142]: time="2024-02-09T09:42:50.271258449Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:50.273992 env[1142]: 
time="2024-02-09T09:42:50.273962784Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:50.277185 env[1142]: time="2024-02-09T09:42:50.277143900Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:50.277924 env[1142]: time="2024-02-09T09:42:50.277886551Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 9 09:42:50.286825 env[1142]: time="2024-02-09T09:42:50.286799936Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 09:42:51.330365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount946205833.mount: Deactivated successfully. Feb 9 09:42:52.789644 env[1142]: time="2024-02-09T09:42:52.789597252Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:52.791674 env[1142]: time="2024-02-09T09:42:52.791644998Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:52.792961 env[1142]: time="2024-02-09T09:42:52.792924519Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:52.794492 env[1142]: time="2024-02-09T09:42:52.794469078Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:52.795001 env[1142]: time="2024-02-09T09:42:52.794972627Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 09:42:52.805486 env[1142]: time="2024-02-09T09:42:52.805419209Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 09:42:53.288014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2570958470.mount: Deactivated successfully. 
Feb 9 09:42:53.292450 env[1142]: time="2024-02-09T09:42:53.292408989Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:53.293980 env[1142]: time="2024-02-09T09:42:53.293944082Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:53.295359 env[1142]: time="2024-02-09T09:42:53.295317346Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:53.297023 env[1142]: time="2024-02-09T09:42:53.296994853Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:53.297534 env[1142]: time="2024-02-09T09:42:53.297504732Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 09:42:53.307208 env[1142]: time="2024-02-09T09:42:53.307174465Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 09:42:54.092021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1167425764.mount: Deactivated successfully. Feb 9 09:42:54.290951 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 09:42:54.291129 systemd[1]: Stopped kubelet.service. Feb 9 09:42:54.292674 systemd[1]: Started kubelet.service. Feb 9 09:42:54.332633 kubelet[1492]: E0209 09:42:54.332565 1492 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:42:54.336039 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:42:54.336168 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 9 09:42:57.845972 env[1142]: time="2024-02-09T09:42:57.845925377Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:57.847645 env[1142]: time="2024-02-09T09:42:57.847613932Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:57.850521 env[1142]: time="2024-02-09T09:42:57.850476927Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:57.853292 env[1142]: time="2024-02-09T09:42:57.853253654Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:57.853893 env[1142]: time="2024-02-09T09:42:57.853863012Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 9 09:42:57.862271 env[1142]: time="2024-02-09T09:42:57.862238593Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 09:42:58.628131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount933804760.mount: Deactivated successfully. Feb 9 09:43:00.008357 env[1142]: time="2024-02-09T09:43:00.008304813Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:00.010109 env[1142]: time="2024-02-09T09:43:00.010068946Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:00.012387 env[1142]: time="2024-02-09T09:43:00.012360241Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:00.013470 env[1142]: time="2024-02-09T09:43:00.013445239Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:00.014018 env[1142]: time="2024-02-09T09:43:00.013987778Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 9 09:43:04.245856 systemd[1]: Stopped kubelet.service. Feb 9 09:43:04.260530 systemd[1]: Reloading. 
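The PullImage/ImageCreate sequences above are containerd's CRI image service fetching the v1.26.13 control-plane images (apiserver, controller-manager, scheduler, proxy, pause, etcd, coredns), each pull resolving to a digest-pinned reference. The same pulls can be reproduced by hand over the CRI socket; a sketch with standard crictl:

  sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
    pull registry.k8s.io/coredns/coredns:v1.9.3
  sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images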
Feb 9 09:43:04.308245 /usr/lib/systemd/system-generators/torcx-generator[1602]: time="2024-02-09T09:43:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:43:04.308613 /usr/lib/systemd/system-generators/torcx-generator[1602]: time="2024-02-09T09:43:04Z" level=info msg="torcx already run" Feb 9 09:43:04.366249 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:43:04.366268 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:43:04.381727 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:43:04.449662 systemd[1]: Started kubelet.service. Feb 9 09:43:04.493594 kubelet[1638]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:43:04.493594 kubelet[1638]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:43:04.493925 kubelet[1638]: I0209 09:43:04.493684 1638 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:43:04.494850 kubelet[1638]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:43:04.494850 kubelet[1638]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:43:05.294133 kubelet[1638]: I0209 09:43:05.294092 1638 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:43:05.294133 kubelet[1638]: I0209 09:43:05.294120 1638 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:43:05.294378 kubelet[1638]: I0209 09:43:05.294363 1638 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:43:05.299127 kubelet[1638]: I0209 09:43:05.299106 1638 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:43:05.299421 kubelet[1638]: E0209 09:43:05.299342 1638 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.12:6443: connect: connection refused Feb 9 09:43:05.301277 kubelet[1638]: W0209 09:43:05.301252 1638 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:43:05.302054 kubelet[1638]: I0209 09:43:05.302033 1638 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 09:43:05.302579 kubelet[1638]: I0209 09:43:05.302561 1638 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:43:05.302642 kubelet[1638]: I0209 09:43:05.302629 1638 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:43:05.302716 kubelet[1638]: I0209 09:43:05.302706 1638 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:43:05.302743 kubelet[1638]: I0209 09:43:05.302717 1638 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:43:05.302904 kubelet[1638]: I0209 09:43:05.302878 1638 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:43:05.308331 kubelet[1638]: I0209 09:43:05.308203 1638 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:43:05.308331 kubelet[1638]: I0209 09:43:05.308224 1638 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:43:05.308439 kubelet[1638]: I0209 09:43:05.308417 1638 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:43:05.308439 kubelet[1638]: I0209 09:43:05.308429 1638 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:43:05.309431 kubelet[1638]: W0209 09:43:05.309328 1638 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 9 09:43:05.309431 kubelet[1638]: E0209 09:43:05.309379 1638 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 9 09:43:05.309551 kubelet[1638]: I0209 09:43:05.309535 1638 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:43:05.310469 kubelet[1638]: W0209 09:43:05.310453 1638 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
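The nodeConfig dump above shows the kubelet running with the systemd cgroup driver and the default hard-eviction thresholds: nodefs.available 10%, nodefs.inodesFree 5%, imagefs.available 15%, memory.available 100Mi. Expressed as a KubeletConfiguration file, those same values would read as follows (a sketch of the standard schema at a hypothetical path, not a file present on this host):

  cat <<'EOF' > /etc/kubernetes/kubelet-config.yaml   # hypothetical path
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd
  evictionHard:
    nodefs.available: "10%"
    nodefs.inodesFree: "5%"
    imagefs.available: "15%"
    memory.available: "100Mi"
  EOF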
Feb 9 09:43:05.312947 kubelet[1638]: I0209 09:43:05.312926 1638 server.go:1186] "Started kubelet" Feb 9 09:43:05.313194 kubelet[1638]: W0209 09:43:05.309242 1638 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 9 09:43:05.313194 kubelet[1638]: E0209 09:43:05.313191 1638 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 9 09:43:05.313269 kubelet[1638]: I0209 09:43:05.313216 1638 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:43:05.313839 kubelet[1638]: I0209 09:43:05.313812 1638 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:43:05.313954 kubelet[1638]: E0209 09:43:05.313923 1638 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:43:05.313954 kubelet[1638]: E0209 09:43:05.313952 1638 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:43:05.314210 kubelet[1638]: E0209 09:43:05.314113 1638 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2288acb93e1a8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 5, 312903592, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 5, 312903592, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.12:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.12:6443: connect: connection refused'(may retry after sleeping) Feb 9 09:43:05.316114 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
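Both kubelet flag deprecation notices printed when kubelet[1638] started point at the config file passed via --config. The declarative equivalents, as a sketch: the static pod path and plugin directory are copied from the log above, while the file location is the usual kubeadm default and otherwise an assumption:

    # /var/lib/kubelet/config.yaml (typical location, assumed)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    staticPodPath: /etc/kubernetes/manifests                                # "Adding static pod path" above
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/   # replaces --volume-plugin-dir

--pod-infra-container-image has no config-file equivalent; as its notice says, the sandbox image moves to the CRI runtime, which for this containerd 1.6.16 node is the sandbox_image setting:

    # /etc/containerd/config.toml (excerpt; matches the pause:3.6 images that appear later in the log)
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"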
Feb 9 09:43:05.316243 kubelet[1638]: I0209 09:43:05.316216 1638 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:43:05.316557 kubelet[1638]: E0209 09:43:05.316498 1638 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:43:05.317134 kubelet[1638]: I0209 09:43:05.317107 1638 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:43:05.317231 kubelet[1638]: I0209 09:43:05.317122 1638 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:43:05.317465 kubelet[1638]: W0209 09:43:05.317430 1638 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 9 09:43:05.317758 kubelet[1638]: E0209 09:43:05.317739 1638 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 9 09:43:05.317860 kubelet[1638]: E0209 09:43:05.317823 1638 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.12:6443: connect: connection refused Feb 9 09:43:05.334268 kubelet[1638]: I0209 09:43:05.334219 1638 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:43:05.334268 kubelet[1638]: I0209 09:43:05.334250 1638 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:43:05.334268 kubelet[1638]: I0209 09:43:05.334265 1638 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:43:05.336994 kubelet[1638]: I0209 09:43:05.336967 1638 policy_none.go:49] "None policy: Start" Feb 9 09:43:05.337789 kubelet[1638]: I0209 09:43:05.337763 1638 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:43:05.337856 kubelet[1638]: I0209 09:43:05.337798 1638 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:43:05.342318 systemd[1]: Created slice kubepods.slice. Feb 9 09:43:05.346140 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 09:43:05.348492 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 09:43:05.355958 kubelet[1638]: I0209 09:43:05.355932 1638 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:43:05.356257 kubelet[1638]: I0209 09:43:05.356240 1638 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:43:05.357457 kubelet[1638]: E0209 09:43:05.357301 1638 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 9 09:43:05.357457 kubelet[1638]: I0209 09:43:05.357354 1638 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:43:05.377801 kubelet[1638]: I0209 09:43:05.377770 1638 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 09:43:05.377801 kubelet[1638]: I0209 09:43:05.377802 1638 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:43:05.377933 kubelet[1638]: I0209 09:43:05.377820 1638 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:43:05.377933 kubelet[1638]: E0209 09:43:05.377867 1638 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 09:43:05.378785 kubelet[1638]: W0209 09:43:05.378740 1638 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 9 09:43:05.378878 kubelet[1638]: E0209 09:43:05.378794 1638 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 9 09:43:05.418658 kubelet[1638]: I0209 09:43:05.418630 1638 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:43:05.419088 kubelet[1638]: E0209 09:43:05.419070 1638 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Feb 9 09:43:05.478240 kubelet[1638]: I0209 09:43:05.478191 1638 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:05.479342 kubelet[1638]: I0209 09:43:05.479322 1638 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:05.480217 kubelet[1638]: I0209 09:43:05.480186 1638 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:05.480662 kubelet[1638]: I0209 09:43:05.480559 1638 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.12:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.12:6443: connect: connection refused" Feb 9 09:43:05.482195 kubelet[1638]: I0209 09:43:05.482169 1638 status_manager.go:698] "Failed to get status for pod" podUID=23075114f7af2852f5e6d80c32dc41c7 pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.12:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.12:6443: connect: connection refused" Feb 9 09:43:05.482813 kubelet[1638]: I0209 09:43:05.482523 1638 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.12:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.12:6443: connect: connection refused" Feb 9 09:43:05.484978 systemd[1]: Created slice kubepods-burstable-pod72ae17a74a2eae76daac6d298477aff0.slice. Feb 9 09:43:05.499085 systemd[1]: Created slice kubepods-burstable-pod23075114f7af2852f5e6d80c32dc41c7.slice. Feb 9 09:43:05.517204 systemd[1]: Created slice kubepods-burstable-pod550020dd9f101bcc23e1d3c651841c4d.slice. 
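The three Topology Admit Handler entries are the static pods read back from /etc/kubernetes/manifests, which is why every status_manager Post fails with connection refused: the API server they report to is itself one of the three. For orientation, a hypothetical sketch of such a manifest; the pod name, volume name, and version are taken from the surrounding log, everything else is assumed rather than read from disk:

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-scheduler
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-scheduler
        image: registry.k8s.io/kube-scheduler:v1.26.5   # assumed to match kubeletVersion above
        command:
        - kube-scheduler
        - --kubeconfig=/etc/kubernetes/scheduler.conf
        volumeMounts:
        - name: kubeconfig
          mountPath: /etc/kubernetes/scheduler.conf
          readOnly: true
      volumes:
      - name: kubeconfig          # the host-path volume the reconciler attaches just below
        hostPath:
          path: /etc/kubernetes/scheduler.conf
          type: FileOrCreate

The UID in the slice name (72ae17a7...) is derived by hashing this file's content, which is how kubepods-burstable-pod72ae....slice maps back to kube-scheduler-localhost.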
Feb 9 09:43:05.518669 kubelet[1638]: E0209 09:43:05.518636 1638 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.12:6443: connect: connection refused Feb 9 09:43:05.618238 kubelet[1638]: I0209 09:43:05.618136 1638 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:05.618238 kubelet[1638]: I0209 09:43:05.618180 1638 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 09:43:05.618238 kubelet[1638]: I0209 09:43:05.618204 1638 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23075114f7af2852f5e6d80c32dc41c7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"23075114f7af2852f5e6d80c32dc41c7\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:43:05.618238 kubelet[1638]: I0209 09:43:05.618226 1638 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:05.618471 kubelet[1638]: I0209 09:43:05.618249 1638 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:05.618471 kubelet[1638]: I0209 09:43:05.618271 1638 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:05.618471 kubelet[1638]: I0209 09:43:05.618308 1638 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23075114f7af2852f5e6d80c32dc41c7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"23075114f7af2852f5e6d80c32dc41c7\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:43:05.618471 kubelet[1638]: I0209 09:43:05.618336 1638 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23075114f7af2852f5e6d80c32dc41c7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"23075114f7af2852f5e6d80c32dc41c7\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:43:05.618471 kubelet[1638]: I0209 
09:43:05.618359 1638 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:05.621028 kubelet[1638]: I0209 09:43:05.621007 1638 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:43:05.621484 kubelet[1638]: E0209 09:43:05.621467 1638 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Feb 9 09:43:05.798161 kubelet[1638]: E0209 09:43:05.798122 1638 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:05.798821 env[1142]: time="2024-02-09T09:43:05.798727376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}" Feb 9 09:43:05.816093 kubelet[1638]: E0209 09:43:05.816072 1638 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:05.816536 env[1142]: time="2024-02-09T09:43:05.816501706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:23075114f7af2852f5e6d80c32dc41c7,Namespace:kube-system,Attempt:0,}" Feb 9 09:43:05.819357 kubelet[1638]: E0209 09:43:05.819331 1638 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:05.819714 env[1142]: time="2024-02-09T09:43:05.819683330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}" Feb 9 09:43:05.920316 kubelet[1638]: E0209 09:43:05.919967 1638 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.12:6443: connect: connection refused Feb 9 09:43:06.024013 kubelet[1638]: I0209 09:43:06.023947 1638 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:43:06.024329 kubelet[1638]: E0209 09:43:06.024311 1638 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Feb 9 09:43:06.235321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3167185121.mount: Deactivated successfully. 
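The lease controller's three failures above retry at 200ms, 400ms and 800ms: a plain doubling backoff while the API server is unreachable. Schematically, in Go (ensureLeaseExists is a stand-in name, not a kubelet symbol):

    package main

    import (
        "log"
        "math/rand"
        "time"
    )

    // ensureLeaseExists stands in for the kubelet's lease sync against
    // /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost.
    func ensureLeaseExists() bool { return rand.Intn(4) == 0 }

    func main() {
        backoff := 200 * time.Millisecond
        for !ensureLeaseExists() {
            log.Printf("failed to ensure lease exists, will retry in %v", backoff)
            time.Sleep(backoff)
            backoff *= 2 // 200ms -> 400ms -> 800ms, the sequence visible above
        }
    }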
Feb 9 09:43:06.241806 env[1142]: time="2024-02-09T09:43:06.241763656Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:06.242774 env[1142]: time="2024-02-09T09:43:06.242745799Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:06.243638 env[1142]: time="2024-02-09T09:43:06.243609802Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:06.245388 env[1142]: time="2024-02-09T09:43:06.245348213Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:06.246768 env[1142]: time="2024-02-09T09:43:06.246732443Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:06.249455 env[1142]: time="2024-02-09T09:43:06.249426583Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:06.250202 env[1142]: time="2024-02-09T09:43:06.250166603Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:06.253231 env[1142]: time="2024-02-09T09:43:06.253194274Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:06.256400 env[1142]: time="2024-02-09T09:43:06.256362218Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:06.257417 env[1142]: time="2024-02-09T09:43:06.257388024Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:06.258336 env[1142]: time="2024-02-09T09:43:06.258307295Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:06.259252 env[1142]: time="2024-02-09T09:43:06.259226846Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:06.297323 env[1142]: time="2024-02-09T09:43:06.297123789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:43:06.297323 env[1142]: time="2024-02-09T09:43:06.297167612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:43:06.297323 env[1142]: time="2024-02-09T09:43:06.297178057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:43:06.297519 env[1142]: time="2024-02-09T09:43:06.297374598Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/307017758854fc53063c273b8fd6b3c4a5f65744bb9ed1ab934345fdf0f2ba02 pid=1729 runtime=io.containerd.runc.v2 Feb 9 09:43:06.298128 env[1142]: time="2024-02-09T09:43:06.297964060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:43:06.298128 env[1142]: time="2024-02-09T09:43:06.298002079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:43:06.298128 env[1142]: time="2024-02-09T09:43:06.298017287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:43:06.298269 env[1142]: time="2024-02-09T09:43:06.298138669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:43:06.298269 env[1142]: time="2024-02-09T09:43:06.298167884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:43:06.298269 env[1142]: time="2024-02-09T09:43:06.298177809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:43:06.298416 env[1142]: time="2024-02-09T09:43:06.298330848Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/10be1797c66df25c0118867fa13c57a5680c6487ac9bb6f4f17c06b45dfe4318 pid=1730 runtime=io.containerd.runc.v2 Feb 9 09:43:06.298463 env[1142]: time="2024-02-09T09:43:06.298430659Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f74fc36e49c92e8f6a5e3b277c5025ecc6a32c1cce8ba4e3613b6c59a1ed3698 pid=1740 runtime=io.containerd.runc.v2 Feb 9 09:43:06.312759 systemd[1]: Started cri-containerd-10be1797c66df25c0118867fa13c57a5680c6487ac9bb6f4f17c06b45dfe4318.scope. Feb 9 09:43:06.317062 systemd[1]: Started cri-containerd-307017758854fc53063c273b8fd6b3c4a5f65744bb9ed1ab934345fdf0f2ba02.scope. Feb 9 09:43:06.333450 systemd[1]: Started cri-containerd-f74fc36e49c92e8f6a5e3b277c5025ecc6a32c1cce8ba4e3613b6c59a1ed3698.scope. 
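Each RunPodSandbox above spawns one containerd shim process (the three "starting signal loop" lines, pids 1729/1730/1740), and systemd wraps each in a cri-containerd-<sandbox-id>.scope. If crictl is installed on the node (an assumption; it does not appear in the log), the same sandboxes and their containers can be inspected directly:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps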
Feb 9 09:43:06.337160 kubelet[1638]: W0209 09:43:06.337058 1638 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 9 09:43:06.337160 kubelet[1638]: E0209 09:43:06.337144 1638 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 9 09:43:06.370057 env[1142]: time="2024-02-09T09:43:06.370009424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:23075114f7af2852f5e6d80c32dc41c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"10be1797c66df25c0118867fa13c57a5680c6487ac9bb6f4f17c06b45dfe4318\"" Feb 9 09:43:06.370840 kubelet[1638]: E0209 09:43:06.370814 1638 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:06.376127 env[1142]: time="2024-02-09T09:43:06.376038234Z" level=info msg="CreateContainer within sandbox \"10be1797c66df25c0118867fa13c57a5680c6487ac9bb6f4f17c06b45dfe4318\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 09:43:06.377672 env[1142]: time="2024-02-09T09:43:06.377572461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f74fc36e49c92e8f6a5e3b277c5025ecc6a32c1cce8ba4e3613b6c59a1ed3698\"" Feb 9 09:43:06.378291 kubelet[1638]: E0209 09:43:06.378266 1638 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:06.378583 env[1142]: time="2024-02-09T09:43:06.378555564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"307017758854fc53063c273b8fd6b3c4a5f65744bb9ed1ab934345fdf0f2ba02\"" Feb 9 09:43:06.379200 kubelet[1638]: E0209 09:43:06.379096 1638 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:06.380620 env[1142]: time="2024-02-09T09:43:06.380579562Z" level=info msg="CreateContainer within sandbox \"f74fc36e49c92e8f6a5e3b277c5025ecc6a32c1cce8ba4e3613b6c59a1ed3698\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 09:43:06.380843 env[1142]: time="2024-02-09T09:43:06.380580762Z" level=info msg="CreateContainer within sandbox \"307017758854fc53063c273b8fd6b3c4a5f65744bb9ed1ab934345fdf0f2ba02\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 09:43:06.396126 env[1142]: time="2024-02-09T09:43:06.396067620Z" level=info msg="CreateContainer within sandbox \"10be1797c66df25c0118867fa13c57a5680c6487ac9bb6f4f17c06b45dfe4318\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"12afe1ef9feac99d8bd4df959b795ef277287012652bdb83e3c05f97a3a51cb9\"" Feb 9 09:43:06.396797 env[1142]: time="2024-02-09T09:43:06.396753251Z" level=info msg="StartContainer for 
\"12afe1ef9feac99d8bd4df959b795ef277287012652bdb83e3c05f97a3a51cb9\"" Feb 9 09:43:06.401682 env[1142]: time="2024-02-09T09:43:06.401632312Z" level=info msg="CreateContainer within sandbox \"307017758854fc53063c273b8fd6b3c4a5f65744bb9ed1ab934345fdf0f2ba02\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a7ad6b1683ea6818d9df76ede85a28bf63c2b1fec62ebbc8e1f1a9b2da1fafb7\"" Feb 9 09:43:06.402251 env[1142]: time="2024-02-09T09:43:06.402158742Z" level=info msg="StartContainer for \"a7ad6b1683ea6818d9df76ede85a28bf63c2b1fec62ebbc8e1f1a9b2da1fafb7\"" Feb 9 09:43:06.403055 env[1142]: time="2024-02-09T09:43:06.403001614Z" level=info msg="CreateContainer within sandbox \"f74fc36e49c92e8f6a5e3b277c5025ecc6a32c1cce8ba4e3613b6c59a1ed3698\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5c7298b6f2edbe66c733d38196a34714a8960c29bb80e9f451206ca7a88aafad\"" Feb 9 09:43:06.403470 env[1142]: time="2024-02-09T09:43:06.403421949Z" level=info msg="StartContainer for \"5c7298b6f2edbe66c733d38196a34714a8960c29bb80e9f451206ca7a88aafad\"" Feb 9 09:43:06.413452 systemd[1]: Started cri-containerd-12afe1ef9feac99d8bd4df959b795ef277287012652bdb83e3c05f97a3a51cb9.scope. Feb 9 09:43:06.423241 systemd[1]: Started cri-containerd-5c7298b6f2edbe66c733d38196a34714a8960c29bb80e9f451206ca7a88aafad.scope. Feb 9 09:43:06.439783 systemd[1]: Started cri-containerd-a7ad6b1683ea6818d9df76ede85a28bf63c2b1fec62ebbc8e1f1a9b2da1fafb7.scope. Feb 9 09:43:06.491618 env[1142]: time="2024-02-09T09:43:06.491514778Z" level=info msg="StartContainer for \"12afe1ef9feac99d8bd4df959b795ef277287012652bdb83e3c05f97a3a51cb9\" returns successfully" Feb 9 09:43:06.505196 env[1142]: time="2024-02-09T09:43:06.505069725Z" level=info msg="StartContainer for \"a7ad6b1683ea6818d9df76ede85a28bf63c2b1fec62ebbc8e1f1a9b2da1fafb7\" returns successfully" Feb 9 09:43:06.513668 env[1142]: time="2024-02-09T09:43:06.512848352Z" level=info msg="StartContainer for \"5c7298b6f2edbe66c733d38196a34714a8960c29bb80e9f451206ca7a88aafad\" returns successfully" Feb 9 09:43:06.547110 kubelet[1638]: W0209 09:43:06.546777 1638 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 9 09:43:06.547110 kubelet[1638]: E0209 09:43:06.546834 1638 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 9 09:43:06.826476 kubelet[1638]: I0209 09:43:06.825944 1638 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:43:07.387176 kubelet[1638]: E0209 09:43:07.387143 1638 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:07.389941 kubelet[1638]: E0209 09:43:07.389918 1638 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:07.391947 kubelet[1638]: E0209 09:43:07.391925 1638 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:08.393872 kubelet[1638]: E0209 09:43:08.393846 1638 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:08.394617 kubelet[1638]: E0209 09:43:08.394599 1638 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:08.395144 kubelet[1638]: E0209 09:43:08.395128 1638 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:08.559443 kubelet[1638]: E0209 09:43:08.559411 1638 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 9 09:43:08.626558 kubelet[1638]: I0209 09:43:08.626521 1638 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 09:43:08.637397 kubelet[1638]: E0209 09:43:08.637372 1638 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:43:08.738495 kubelet[1638]: E0209 09:43:08.738375 1638 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:43:08.838572 kubelet[1638]: E0209 09:43:08.838524 1638 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:43:08.939554 kubelet[1638]: E0209 09:43:08.939508 1638 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:43:09.040481 kubelet[1638]: E0209 09:43:09.040415 1638 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:43:09.310726 kubelet[1638]: I0209 09:43:09.310629 1638 apiserver.go:52] "Watching apiserver" Feb 9 09:43:09.518117 kubelet[1638]: I0209 09:43:09.518084 1638 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:43:09.556681 kubelet[1638]: I0209 09:43:09.556649 1638 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:43:09.716834 kubelet[1638]: E0209 09:43:09.716701 1638 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:10.395683 kubelet[1638]: E0209 09:43:10.395650 1638 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:11.036623 systemd[1]: Reloading. Feb 9 09:43:11.088691 /usr/lib/systemd/system-generators/torcx-generator[1972]: time="2024-02-09T09:43:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:43:11.088721 /usr/lib/systemd/system-generators/torcx-generator[1972]: time="2024-02-09T09:43:11Z" level=info msg="torcx already run" Feb 9 09:43:11.155025 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
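"Nameserver limits exceeded", which recurs throughout this log, is the kubelet enforcing the three-nameserver cap: the node's resolv.conf lists more than three servers, and only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are applied. A resolv.conf that would produce exactly this message; the first three entries come from the log, the dropped fourth is an assumption:

    # /etc/resolv.conf
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4   # assumed; anything past the third entry is omitted with this warning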
Feb 9 09:43:11.155042 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:43:11.173551 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:43:11.266181 systemd[1]: Stopping kubelet.service... Feb 9 09:43:11.277762 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 09:43:11.278033 systemd[1]: Stopped kubelet.service. Feb 9 09:43:11.278086 systemd[1]: kubelet.service: Consumed 1.156s CPU time. Feb 9 09:43:11.279954 systemd[1]: Started kubelet.service. Feb 9 09:43:11.342970 kubelet[2010]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:43:11.342970 kubelet[2010]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:43:11.343336 kubelet[2010]: I0209 09:43:11.343004 2010 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:43:11.344368 kubelet[2010]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:43:11.344368 kubelet[2010]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:43:11.347415 kubelet[2010]: I0209 09:43:11.347390 2010 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:43:11.347514 kubelet[2010]: I0209 09:43:11.347502 2010 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:43:11.347855 kubelet[2010]: I0209 09:43:11.347835 2010 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:43:11.349757 kubelet[2010]: I0209 09:43:11.349731 2010 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 09:43:11.351255 kubelet[2010]: I0209 09:43:11.351231 2010 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:43:11.352960 kubelet[2010]: W0209 09:43:11.352946 2010 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:43:11.353837 kubelet[2010]: I0209 09:43:11.353822 2010 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 09:43:11.354012 kubelet[2010]: I0209 09:43:11.354002 2010 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:43:11.354087 kubelet[2010]: I0209 09:43:11.354076 2010 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:43:11.354161 kubelet[2010]: I0209 09:43:11.354098 2010 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:43:11.354161 kubelet[2010]: I0209 09:43:11.354110 2010 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:43:11.354161 kubelet[2010]: I0209 09:43:11.354138 2010 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:43:11.356963 kubelet[2010]: I0209 09:43:11.356942 2010 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:43:11.357077 kubelet[2010]: I0209 09:43:11.357064 2010 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:43:11.357178 kubelet[2010]: I0209 09:43:11.357164 2010 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:43:11.357246 kubelet[2010]: I0209 09:43:11.357236 2010 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:43:11.358009 kubelet[2010]: I0209 09:43:11.357987 2010 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:43:11.358542 kubelet[2010]: I0209 09:43:11.358517 2010 server.go:1186] "Started kubelet" Feb 9 09:43:11.360255 kubelet[2010]: I0209 09:43:11.360234 2010 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:43:11.362214 kubelet[2010]: E0209 09:43:11.362180 2010 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:43:11.362214 kubelet[2010]: E0209 09:43:11.362206 2010 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:43:11.363012 kubelet[2010]: I0209 09:43:11.362993 2010 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:43:11.363571 kubelet[2010]: I0209 09:43:11.363068 2010 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:43:11.364212 kubelet[2010]: I0209 09:43:11.364175 2010 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:43:11.365007 kubelet[2010]: I0209 09:43:11.364967 2010 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:43:11.377906 kubelet[2010]: I0209 09:43:11.377883 2010 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:43:11.406066 kubelet[2010]: I0209 09:43:11.406040 2010 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 09:43:11.406066 kubelet[2010]: I0209 09:43:11.406066 2010 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:43:11.406066 kubelet[2010]: I0209 09:43:11.406083 2010 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:43:11.406249 kubelet[2010]: E0209 09:43:11.406143 2010 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 09:43:11.420469 sudo[2063]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 09:43:11.420675 sudo[2063]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 09:43:11.439078 kubelet[2010]: I0209 09:43:11.439046 2010 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:43:11.439078 kubelet[2010]: I0209 09:43:11.439070 2010 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:43:11.439244 kubelet[2010]: I0209 09:43:11.439090 2010 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:43:11.439244 kubelet[2010]: I0209 09:43:11.439230 2010 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 09:43:11.439244 kubelet[2010]: I0209 09:43:11.439243 2010 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 09:43:11.439346 kubelet[2010]: I0209 09:43:11.439249 2010 policy_none.go:49] "None policy: Start" Feb 9 09:43:11.440074 kubelet[2010]: I0209 09:43:11.440047 2010 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:43:11.440140 kubelet[2010]: I0209 09:43:11.440079 2010 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:43:11.440214 kubelet[2010]: I0209 09:43:11.440196 2010 state_mem.go:75] "Updated machine memory state" Feb 9 09:43:11.443782 kubelet[2010]: I0209 09:43:11.443754 2010 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:43:11.444063 kubelet[2010]: I0209 09:43:11.444039 2010 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:43:11.466592 kubelet[2010]: I0209 09:43:11.466557 2010 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:43:11.475294 kubelet[2010]: I0209 09:43:11.475248 2010 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 9 09:43:11.475440 kubelet[2010]: I0209 09:43:11.475368 2010 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 09:43:11.507300 kubelet[2010]: I0209 09:43:11.507237 2010 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:11.507460 kubelet[2010]: I0209 
09:43:11.507355 2010 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:11.507610 kubelet[2010]: I0209 09:43:11.507587 2010 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:11.514548 kubelet[2010]: E0209 09:43:11.514512 2010 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 09:43:11.565564 kubelet[2010]: I0209 09:43:11.565525 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23075114f7af2852f5e6d80c32dc41c7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"23075114f7af2852f5e6d80c32dc41c7\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:43:11.565564 kubelet[2010]: I0209 09:43:11.565565 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23075114f7af2852f5e6d80c32dc41c7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"23075114f7af2852f5e6d80c32dc41c7\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:43:11.565744 kubelet[2010]: I0209 09:43:11.565587 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:11.565744 kubelet[2010]: I0209 09:43:11.565608 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:11.565744 kubelet[2010]: I0209 09:43:11.565628 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:11.565744 kubelet[2010]: I0209 09:43:11.565649 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:11.565744 kubelet[2010]: I0209 09:43:11.565671 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 09:43:11.565857 kubelet[2010]: I0209 09:43:11.565689 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23075114f7af2852f5e6d80c32dc41c7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"23075114f7af2852f5e6d80c32dc41c7\") " 
pod="kube-system/kube-apiserver-localhost" Feb 9 09:43:11.565857 kubelet[2010]: I0209 09:43:11.565709 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:11.763157 kubelet[2010]: E0209 09:43:11.763053 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:11.812306 kubelet[2010]: E0209 09:43:11.812260 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:11.815133 kubelet[2010]: E0209 09:43:11.815106 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:11.873244 sudo[2063]: pam_unix(sudo:session): session closed for user root Feb 9 09:43:12.357956 kubelet[2010]: I0209 09:43:12.357910 2010 apiserver.go:52] "Watching apiserver" Feb 9 09:43:12.368424 kubelet[2010]: I0209 09:43:12.368381 2010 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:43:12.371092 kubelet[2010]: I0209 09:43:12.371063 2010 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:43:12.762636 kubelet[2010]: E0209 09:43:12.762535 2010 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:12.763001 kubelet[2010]: E0209 09:43:12.762964 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:12.962469 kubelet[2010]: E0209 09:43:12.962434 2010 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 09:43:12.963050 kubelet[2010]: E0209 09:43:12.963025 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:13.161690 kubelet[2010]: E0209 09:43:13.161632 2010 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 9 09:43:13.161946 kubelet[2010]: E0209 09:43:13.161931 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:13.370469 kubelet[2010]: I0209 09:43:13.370433 2010 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.370396067 pod.CreationTimestamp="2024-02-09 09:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:43:13.370147079 +0000 UTC m=+2.087119030" watchObservedRunningTime="2024-02-09 09:43:13.370396067 +0000 UTC m=+2.087368018" Feb 9 09:43:13.418377 kubelet[2010]: 
E0209 09:43:13.417679 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:13.418377 kubelet[2010]: E0209 09:43:13.417830 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:13.418377 kubelet[2010]: E0209 09:43:13.418169 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:13.636683 sudo[1237]: pam_unix(sudo:session): session closed for user root Feb 9 09:43:13.638003 sshd[1233]: pam_unix(sshd:session): session closed for user core Feb 9 09:43:13.640305 systemd[1]: sshd@4-10.0.0.12:22-10.0.0.1:37128.service: Deactivated successfully. Feb 9 09:43:13.641009 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 09:43:13.641174 systemd[1]: session-5.scope: Consumed 6.535s CPU time. Feb 9 09:43:13.641579 systemd-logind[1128]: Session 5 logged out. Waiting for processes to exit. Feb 9 09:43:13.642223 systemd-logind[1128]: Removed session 5. Feb 9 09:43:14.162215 kubelet[2010]: I0209 09:43:14.162171 2010 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.162136237 pod.CreationTimestamp="2024-02-09 09:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:43:13.762308742 +0000 UTC m=+2.479280693" watchObservedRunningTime="2024-02-09 09:43:14.162136237 +0000 UTC m=+2.879108188" Feb 9 09:43:14.162360 kubelet[2010]: I0209 09:43:14.162303 2010 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.162277433 pod.CreationTimestamp="2024-02-09 09:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:43:14.162130635 +0000 UTC m=+2.879102586" watchObservedRunningTime="2024-02-09 09:43:14.162277433 +0000 UTC m=+2.879249384" Feb 9 09:43:17.269704 kubelet[2010]: E0209 09:43:17.269661 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:17.422982 kubelet[2010]: E0209 09:43:17.422940 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:18.424427 kubelet[2010]: E0209 09:43:18.424385 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:18.970756 kubelet[2010]: E0209 09:43:18.970720 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:19.425173 kubelet[2010]: E0209 09:43:19.425137 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:21.180565 kubelet[2010]: E0209 09:43:21.180532 2010 
dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:21.428412 kubelet[2010]: E0209 09:43:21.428379 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:23.001331 update_engine[1131]: I0209 09:43:23.001269 1131 update_attempter.cc:509] Updating boot flags... Feb 9 09:43:24.998728 kubelet[2010]: I0209 09:43:24.998684 2010 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:25.004045 systemd[1]: Created slice kubepods-besteffort-pod99b43033_3a61_4c85_a01e_4f12e3a78c40.slice. Feb 9 09:43:25.043522 kubelet[2010]: I0209 09:43:25.043483 2010 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 09:43:25.043841 env[1142]: time="2024-02-09T09:43:25.043791830Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 09:43:25.044091 kubelet[2010]: I0209 09:43:25.043961 2010 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 09:43:25.062363 kubelet[2010]: I0209 09:43:25.062326 2010 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:25.064261 kubelet[2010]: I0209 09:43:25.064228 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99b43033-3a61-4c85-a01e-4f12e3a78c40-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-8kcdb\" (UID: \"99b43033-3a61-4c85-a01e-4f12e3a78c40\") " pod="kube-system/cilium-operator-f59cbd8c6-8kcdb" Feb 9 09:43:25.064442 kubelet[2010]: I0209 09:43:25.064428 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwwv4\" (UniqueName: \"kubernetes.io/projected/99b43033-3a61-4c85-a01e-4f12e3a78c40-kube-api-access-jwwv4\") pod \"cilium-operator-f59cbd8c6-8kcdb\" (UID: \"99b43033-3a61-4c85-a01e-4f12e3a78c40\") " pod="kube-system/cilium-operator-f59cbd8c6-8kcdb" Feb 9 09:43:25.067270 systemd[1]: Created slice kubepods-besteffort-pod4c701cd7_a918_4c9c_81c4_2298642f2e9b.slice. Feb 9 09:43:25.074853 kubelet[2010]: I0209 09:43:25.074819 2010 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:25.084764 systemd[1]: Created slice kubepods-burstable-pod537b2d31_c4ca_4f3f_ace3_1c3bf6e38078.slice. 
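With the CNI pods admitted, the kubelet pushes the node's pod CIDR (192.168.0.0/24) to containerd through the runtime config update above; "No cni config template is specified" just means containerd waits for the CNI plugin (cilium here) to drop its own config. The CIDR originates on the Node object and can be read back once the API server answers (node name taken from the log):

    kubectl get node localhost -o jsonpath='{.spec.podCIDR}'   # expected: 192.168.0.0/24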
Feb 9 09:43:25.165099 kubelet[2010]: I0209 09:43:25.165071 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-cilium-config-path\") pod \"cilium-crz4q\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " pod="kube-system/cilium-crz4q" Feb 9 09:43:25.165325 kubelet[2010]: I0209 09:43:25.165312 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkhnz\" (UniqueName: \"kubernetes.io/projected/4c701cd7-a918-4c9c-81c4-2298642f2e9b-kube-api-access-zkhnz\") pod \"kube-proxy-s5pqm\" (UID: \"4c701cd7-a918-4c9c-81c4-2298642f2e9b\") " pod="kube-system/kube-proxy-s5pqm" Feb 9 09:43:25.165420 kubelet[2010]: I0209 09:43:25.165409 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-lib-modules\") pod \"cilium-crz4q\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " pod="kube-system/cilium-crz4q" Feb 9 09:43:25.165537 kubelet[2010]: I0209 09:43:25.165525 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-bpf-maps\") pod \"cilium-crz4q\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " pod="kube-system/cilium-crz4q" Feb 9 09:43:25.165646 kubelet[2010]: I0209 09:43:25.165635 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-cni-path\") pod \"cilium-crz4q\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " pod="kube-system/cilium-crz4q" Feb 9 09:43:25.165745 kubelet[2010]: I0209 09:43:25.165735 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-clustermesh-secrets\") pod \"cilium-crz4q\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " pod="kube-system/cilium-crz4q" Feb 9 09:43:25.165862 kubelet[2010]: I0209 09:43:25.165837 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-hubble-tls\") pod \"cilium-crz4q\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " pod="kube-system/cilium-crz4q" Feb 9 09:43:25.165910 kubelet[2010]: I0209 09:43:25.165899 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-cilium-run\") pod \"cilium-crz4q\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " pod="kube-system/cilium-crz4q" Feb 9 09:43:25.165940 kubelet[2010]: I0209 09:43:25.165924 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-host-proc-sys-kernel\") pod \"cilium-crz4q\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " pod="kube-system/cilium-crz4q" Feb 9 09:43:25.165975 kubelet[2010]: I0209 09:43:25.165948 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-etc-cni-netd\") pod \"cilium-crz4q\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " pod="kube-system/cilium-crz4q" Feb 9 09:43:25.166004 kubelet[2010]: I0209 09:43:25.165986 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4c701cd7-a918-4c9c-81c4-2298642f2e9b-kube-proxy\") pod \"kube-proxy-s5pqm\" (UID: \"4c701cd7-a918-4c9c-81c4-2298642f2e9b\") " pod="kube-system/kube-proxy-s5pqm" Feb 9 09:43:25.166032 kubelet[2010]: I0209 09:43:25.166010 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-cilium-cgroup\") pod \"cilium-crz4q\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " pod="kube-system/cilium-crz4q" Feb 9 09:43:25.166032 kubelet[2010]: I0209 09:43:25.166031 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c701cd7-a918-4c9c-81c4-2298642f2e9b-xtables-lock\") pod \"kube-proxy-s5pqm\" (UID: \"4c701cd7-a918-4c9c-81c4-2298642f2e9b\") " pod="kube-system/kube-proxy-s5pqm" Feb 9 09:43:25.166078 kubelet[2010]: I0209 09:43:25.166053 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-host-proc-sys-net\") pod \"cilium-crz4q\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " pod="kube-system/cilium-crz4q" Feb 9 09:43:25.166078 kubelet[2010]: I0209 09:43:25.166076 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-xtables-lock\") pod \"cilium-crz4q\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " pod="kube-system/cilium-crz4q" Feb 9 09:43:25.166125 kubelet[2010]: I0209 09:43:25.166097 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jlq2\" (UniqueName: \"kubernetes.io/projected/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-kube-api-access-5jlq2\") pod \"cilium-crz4q\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " pod="kube-system/cilium-crz4q" Feb 9 09:43:25.166148 kubelet[2010]: I0209 09:43:25.166128 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c701cd7-a918-4c9c-81c4-2298642f2e9b-lib-modules\") pod \"kube-proxy-s5pqm\" (UID: \"4c701cd7-a918-4c9c-81c4-2298642f2e9b\") " pod="kube-system/kube-proxy-s5pqm" Feb 9 09:43:25.166172 kubelet[2010]: I0209 09:43:25.166151 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-hostproc\") pod \"cilium-crz4q\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " pod="kube-system/cilium-crz4q" Feb 9 09:43:25.613874 kubelet[2010]: E0209 09:43:25.613851 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:25.614479 env[1142]: time="2024-02-09T09:43:25.614427101Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-8kcdb,Uid:99b43033-3a61-4c85-a01e-4f12e3a78c40,Namespace:kube-system,Attempt:0,}" Feb 9 09:43:25.644438 env[1142]: time="2024-02-09T09:43:25.644371391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:43:25.644438 env[1142]: time="2024-02-09T09:43:25.644410516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:43:25.644438 env[1142]: time="2024-02-09T09:43:25.644420838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:43:25.644606 env[1142]: time="2024-02-09T09:43:25.644540855Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/baa8d46bc7ff0856fdff5990342343d51b451209fd22a744c8ce854247832714 pid=2140 runtime=io.containerd.runc.v2 Feb 9 09:43:25.654687 systemd[1]: Started cri-containerd-baa8d46bc7ff0856fdff5990342343d51b451209fd22a744c8ce854247832714.scope. Feb 9 09:43:25.686818 kubelet[2010]: E0209 09:43:25.686701 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:25.688671 env[1142]: time="2024-02-09T09:43:25.687466262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-crz4q,Uid:537b2d31-c4ca-4f3f-ace3-1c3bf6e38078,Namespace:kube-system,Attempt:0,}" Feb 9 09:43:25.695846 env[1142]: time="2024-02-09T09:43:25.695809388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-8kcdb,Uid:99b43033-3a61-4c85-a01e-4f12e3a78c40,Namespace:kube-system,Attempt:0,} returns sandbox id \"baa8d46bc7ff0856fdff5990342343d51b451209fd22a744c8ce854247832714\"" Feb 9 09:43:25.696546 kubelet[2010]: E0209 09:43:25.696527 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:25.698955 env[1142]: time="2024-02-09T09:43:25.698921798Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 09:43:25.704753 env[1142]: time="2024-02-09T09:43:25.704675630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:43:25.704753 env[1142]: time="2024-02-09T09:43:25.704716276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:43:25.704753 env[1142]: time="2024-02-09T09:43:25.704727798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:43:25.704906 env[1142]: time="2024-02-09T09:43:25.704858297Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb21aa8b0c4938fd8b7d882532c5cf518125dde8a23d94d1f6d33ec16544049c pid=2179 runtime=io.containerd.runc.v2 Feb 9 09:43:25.714758 systemd[1]: Started cri-containerd-bb21aa8b0c4938fd8b7d882532c5cf518125dde8a23d94d1f6d33ec16544049c.scope. 
Feb 9 09:43:25.750204 env[1142]: time="2024-02-09T09:43:25.750165168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-crz4q,Uid:537b2d31-c4ca-4f3f-ace3-1c3bf6e38078,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb21aa8b0c4938fd8b7d882532c5cf518125dde8a23d94d1f6d33ec16544049c\"" Feb 9 09:43:25.750973 kubelet[2010]: E0209 09:43:25.750954 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:25.969925 kubelet[2010]: E0209 09:43:25.969791 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:25.970740 env[1142]: time="2024-02-09T09:43:25.970216306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s5pqm,Uid:4c701cd7-a918-4c9c-81c4-2298642f2e9b,Namespace:kube-system,Attempt:0,}" Feb 9 09:43:25.983490 env[1142]: time="2024-02-09T09:43:25.983420576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:43:25.983490 env[1142]: time="2024-02-09T09:43:25.983459861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:43:25.983490 env[1142]: time="2024-02-09T09:43:25.983470223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:43:25.983683 env[1142]: time="2024-02-09T09:43:25.983619564Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdbb5f0929113ff144206fc19a1a6ea299c03a5bd14e8241d6a2d502fb492aec pid=2220 runtime=io.containerd.runc.v2 Feb 9 09:43:25.993828 systemd[1]: Started cri-containerd-fdbb5f0929113ff144206fc19a1a6ea299c03a5bd14e8241d6a2d502fb492aec.scope. Feb 9 09:43:26.023431 env[1142]: time="2024-02-09T09:43:26.023384044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s5pqm,Uid:4c701cd7-a918-4c9c-81c4-2298642f2e9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdbb5f0929113ff144206fc19a1a6ea299c03a5bd14e8241d6a2d502fb492aec\"" Feb 9 09:43:26.023998 kubelet[2010]: E0209 09:43:26.023977 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:26.031383 env[1142]: time="2024-02-09T09:43:26.031328059Z" level=info msg="CreateContainer within sandbox \"fdbb5f0929113ff144206fc19a1a6ea299c03a5bd14e8241d6a2d502fb492aec\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 09:43:26.043938 env[1142]: time="2024-02-09T09:43:26.043875588Z" level=info msg="CreateContainer within sandbox \"fdbb5f0929113ff144206fc19a1a6ea299c03a5bd14e8241d6a2d502fb492aec\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3eab8c6cedfcabbaa4b286ecb5ddcf0cb3debbaa633c87070afc17a403f88e7c\"" Feb 9 09:43:26.044967 env[1142]: time="2024-02-09T09:43:26.044936854Z" level=info msg="StartContainer for \"3eab8c6cedfcabbaa4b286ecb5ddcf0cb3debbaa633c87070afc17a403f88e7c\"" Feb 9 09:43:26.059239 systemd[1]: Started cri-containerd-3eab8c6cedfcabbaa4b286ecb5ddcf0cb3debbaa633c87070afc17a403f88e7c.scope. 
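The RunPodSandbox → CreateContainer → StartContainer progression in the entries above is the standard CRI call sequence kubelet drives over containerd's socket; each RunPodSandbox spawns a runc v2 shim (the "starting signal loop ... runtime=io.containerd.runc.v2" lines) and returns the sandbox id the later calls reference. A schematic sketch of the same three calls using the CRI gRPC client follows; a real kubelet request carries far more configuration (log paths, mounts, security context), so treat this as an outline rather than something a strict runtime would necessarily accept, and note the kube-proxy image tag is an assumption:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Pod identity copied from the log entries above.
	sandboxConfig := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-s5pqm",
			Namespace: "kube-system",
			Uid:       "4c701cd7-a918-4c9c-81c4-2298642f2e9b",
		},
	}

	// 1. RunPodSandbox: containerd starts the shim and returns a sandbox id.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxConfig})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer inside that sandbox ("CreateContainer within sandbox ...").
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			// Hypothetical image reference; the log does not record the tag.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.26.0"},
		},
		SandboxConfig: sandboxConfig,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer: the "StartContainer ... returns successfully" line.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: cc.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
	log.Println("started", cc.ContainerId)
}
```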
Feb 9 09:43:26.102298 env[1142]: time="2024-02-09T09:43:26.102239710Z" level=info msg="StartContainer for \"3eab8c6cedfcabbaa4b286ecb5ddcf0cb3debbaa633c87070afc17a403f88e7c\" returns successfully" Feb 9 09:43:26.436052 kubelet[2010]: E0209 09:43:26.436008 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:26.447106 kubelet[2010]: I0209 09:43:26.447065 2010 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-s5pqm" podStartSLOduration=1.447027421 pod.CreationTimestamp="2024-02-09 09:43:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:43:26.445640029 +0000 UTC m=+15.162611980" watchObservedRunningTime="2024-02-09 09:43:26.447027421 +0000 UTC m=+15.163999332" Feb 9 09:43:26.671234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2674400670.mount: Deactivated successfully. Feb 9 09:43:27.122783 env[1142]: time="2024-02-09T09:43:27.122734957Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:27.124133 env[1142]: time="2024-02-09T09:43:27.124093015Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:27.126268 env[1142]: time="2024-02-09T09:43:27.126228456Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:27.126903 env[1142]: time="2024-02-09T09:43:27.126870420Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 09:43:27.127838 env[1142]: time="2024-02-09T09:43:27.127802103Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 09:43:27.130777 env[1142]: time="2024-02-09T09:43:27.130720246Z" level=info msg="CreateContainer within sandbox \"baa8d46bc7ff0856fdff5990342343d51b451209fd22a744c8ce854247832714\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 09:43:27.140233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1910994790.mount: Deactivated successfully. Feb 9 09:43:27.142449 env[1142]: time="2024-02-09T09:43:27.142413983Z" level=info msg="CreateContainer within sandbox \"baa8d46bc7ff0856fdff5990342343d51b451209fd22a744c8ce854247832714\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1\"" Feb 9 09:43:27.142892 env[1142]: time="2024-02-09T09:43:27.142868803Z" level=info msg="StartContainer for \"741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1\"" Feb 9 09:43:27.157062 systemd[1]: Started cri-containerd-741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1.scope. 
Feb 9 09:43:27.194504 env[1142]: time="2024-02-09T09:43:27.194448101Z" level=info msg="StartContainer for \"741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1\" returns successfully" Feb 9 09:43:27.443789 kubelet[2010]: E0209 09:43:27.443671 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:27.444731 kubelet[2010]: E0209 09:43:27.444685 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:28.449601 kubelet[2010]: E0209 09:43:28.449469 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:31.419869 kubelet[2010]: I0209 09:43:31.419826 2010 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-8kcdb" podStartSLOduration=-9.223372029435856e+09 pod.CreationTimestamp="2024-02-09 09:43:24 +0000 UTC" firstStartedPulling="2024-02-09 09:43:25.698524541 +0000 UTC m=+14.415496452" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:43:27.453099814 +0000 UTC m=+16.170071765" watchObservedRunningTime="2024-02-09 09:43:31.418920296 +0000 UTC m=+20.135892247" Feb 9 09:43:32.449609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1239517619.mount: Deactivated successfully. Feb 9 09:43:34.804939 env[1142]: time="2024-02-09T09:43:34.804889987Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:34.806439 env[1142]: time="2024-02-09T09:43:34.806403614Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:34.808031 env[1142]: time="2024-02-09T09:43:34.808008129Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:34.809362 env[1142]: time="2024-02-09T09:43:34.809319456Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 09:43:34.811836 env[1142]: time="2024-02-09T09:43:34.811775293Z" level=info msg="CreateContainer within sandbox \"bb21aa8b0c4938fd8b7d882532c5cf518125dde8a23d94d1f6d33ec16544049c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:43:34.821048 env[1142]: time="2024-02-09T09:43:34.820991464Z" level=info msg="CreateContainer within sandbox \"bb21aa8b0c4938fd8b7d882532c5cf518125dde8a23d94d1f6d33ec16544049c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3bf625972650cd79e8c48b9adda240d200130311f3f1570aabd1d91ed3a684b5\"" Feb 9 09:43:34.821665 env[1142]: time="2024-02-09T09:43:34.821514675Z" level=info msg="StartContainer for \"3bf625972650cd79e8c48b9adda240d200130311f3f1570aabd1d91ed3a684b5\"" Feb 9 
09:43:34.841774 systemd[1]: Started cri-containerd-3bf625972650cd79e8c48b9adda240d200130311f3f1570aabd1d91ed3a684b5.scope. Feb 9 09:43:34.894458 env[1142]: time="2024-02-09T09:43:34.894413842Z" level=info msg="StartContainer for \"3bf625972650cd79e8c48b9adda240d200130311f3f1570aabd1d91ed3a684b5\" returns successfully" Feb 9 09:43:34.926953 systemd[1]: cri-containerd-3bf625972650cd79e8c48b9adda240d200130311f3f1570aabd1d91ed3a684b5.scope: Deactivated successfully. Feb 9 09:43:35.098381 env[1142]: time="2024-02-09T09:43:35.098233100Z" level=info msg="shim disconnected" id=3bf625972650cd79e8c48b9adda240d200130311f3f1570aabd1d91ed3a684b5 Feb 9 09:43:35.098610 env[1142]: time="2024-02-09T09:43:35.098588333Z" level=warning msg="cleaning up after shim disconnected" id=3bf625972650cd79e8c48b9adda240d200130311f3f1570aabd1d91ed3a684b5 namespace=k8s.io Feb 9 09:43:35.098689 env[1142]: time="2024-02-09T09:43:35.098674781Z" level=info msg="cleaning up dead shim" Feb 9 09:43:35.105457 env[1142]: time="2024-02-09T09:43:35.105425808Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:43:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2486 runtime=io.containerd.runc.v2\n" Feb 9 09:43:35.462239 kubelet[2010]: E0209 09:43:35.462127 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:35.464426 env[1142]: time="2024-02-09T09:43:35.464377862Z" level=info msg="CreateContainer within sandbox \"bb21aa8b0c4938fd8b7d882532c5cf518125dde8a23d94d1f6d33ec16544049c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 09:43:35.563087 env[1142]: time="2024-02-09T09:43:35.563028383Z" level=info msg="CreateContainer within sandbox \"bb21aa8b0c4938fd8b7d882532c5cf518125dde8a23d94d1f6d33ec16544049c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ce928ca6e063edd2c47481796d14bd7d294fed22b67f176836987903dadbf209\"" Feb 9 09:43:35.563634 env[1142]: time="2024-02-09T09:43:35.563606637Z" level=info msg="StartContainer for \"ce928ca6e063edd2c47481796d14bd7d294fed22b67f176836987903dadbf209\"" Feb 9 09:43:35.576847 systemd[1]: Started cri-containerd-ce928ca6e063edd2c47481796d14bd7d294fed22b67f176836987903dadbf209.scope. Feb 9 09:43:35.612607 env[1142]: time="2024-02-09T09:43:35.612562663Z" level=info msg="StartContainer for \"ce928ca6e063edd2c47481796d14bd7d294fed22b67f176836987903dadbf209\" returns successfully" Feb 9 09:43:35.623766 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:43:35.623959 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:43:35.624131 systemd[1]: Stopping systemd-sysctl.service... Feb 9 09:43:35.625708 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:43:35.626898 systemd[1]: cri-containerd-ce928ca6e063edd2c47481796d14bd7d294fed22b67f176836987903dadbf209.scope: Deactivated successfully. Feb 9 09:43:35.635228 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 09:43:35.649168 env[1142]: time="2024-02-09T09:43:35.649106297Z" level=info msg="shim disconnected" id=ce928ca6e063edd2c47481796d14bd7d294fed22b67f176836987903dadbf209 Feb 9 09:43:35.649168 env[1142]: time="2024-02-09T09:43:35.649168703Z" level=warning msg="cleaning up after shim disconnected" id=ce928ca6e063edd2c47481796d14bd7d294fed22b67f176836987903dadbf209 namespace=k8s.io Feb 9 09:43:35.649391 env[1142]: time="2024-02-09T09:43:35.649178824Z" level=info msg="cleaning up dead shim" Feb 9 09:43:35.656374 env[1142]: time="2024-02-09T09:43:35.656312726Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:43:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2548 runtime=io.containerd.runc.v2\n" Feb 9 09:43:35.818613 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bf625972650cd79e8c48b9adda240d200130311f3f1570aabd1d91ed3a684b5-rootfs.mount: Deactivated successfully. Feb 9 09:43:36.468014 kubelet[2010]: E0209 09:43:36.465521 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:36.479886 env[1142]: time="2024-02-09T09:43:36.479530913Z" level=info msg="CreateContainer within sandbox \"bb21aa8b0c4938fd8b7d882532c5cf518125dde8a23d94d1f6d33ec16544049c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 09:43:36.493327 env[1142]: time="2024-02-09T09:43:36.493235937Z" level=info msg="CreateContainer within sandbox \"bb21aa8b0c4938fd8b7d882532c5cf518125dde8a23d94d1f6d33ec16544049c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f147ca36d6ba3686d0f8ea4b69176509d1eed770a96c9e0f7a3381d244c21056\"" Feb 9 09:43:36.494479 env[1142]: time="2024-02-09T09:43:36.494447965Z" level=info msg="StartContainer for \"f147ca36d6ba3686d0f8ea4b69176509d1eed770a96c9e0f7a3381d244c21056\"" Feb 9 09:43:36.515787 systemd[1]: Started cri-containerd-f147ca36d6ba3686d0f8ea4b69176509d1eed770a96c9e0f7a3381d244c21056.scope. Feb 9 09:43:36.559589 env[1142]: time="2024-02-09T09:43:36.559534577Z" level=info msg="StartContainer for \"f147ca36d6ba3686d0f8ea4b69176509d1eed770a96c9e0f7a3381d244c21056\" returns successfully" Feb 9 09:43:36.559651 systemd[1]: cri-containerd-f147ca36d6ba3686d0f8ea4b69176509d1eed770a96c9e0f7a3381d244c21056.scope: Deactivated successfully. Feb 9 09:43:36.580592 env[1142]: time="2024-02-09T09:43:36.580538573Z" level=info msg="shim disconnected" id=f147ca36d6ba3686d0f8ea4b69176509d1eed770a96c9e0f7a3381d244c21056 Feb 9 09:43:36.580592 env[1142]: time="2024-02-09T09:43:36.580586617Z" level=warning msg="cleaning up after shim disconnected" id=f147ca36d6ba3686d0f8ea4b69176509d1eed770a96c9e0f7a3381d244c21056 namespace=k8s.io Feb 9 09:43:36.580592 env[1142]: time="2024-02-09T09:43:36.580597658Z" level=info msg="cleaning up dead shim" Feb 9 09:43:36.588904 env[1142]: time="2024-02-09T09:43:36.588842394Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:43:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2606 runtime=io.containerd.runc.v2\n" Feb 9 09:43:36.818583 systemd[1]: run-containerd-runc-k8s.io-f147ca36d6ba3686d0f8ea4b69176509d1eed770a96c9e0f7a3381d244c21056-runc.1IUhU5.mount: Deactivated successfully. Feb 9 09:43:36.818689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f147ca36d6ba3686d0f8ea4b69176509d1eed770a96c9e0f7a3381d244c21056-rootfs.mount: Deactivated successfully. 
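The mount-cgroup, apply-sysctl-overwrites, and mount-bpf-fs containers that run and exit above (each followed by "shim disconnected ... cleaning up dead shim" as containerd reaps the shim) are Cilium's init containers executing in order. mount-bpf-fs in particular exists to ensure a BPF filesystem is mounted at /sys/fs/bpf so the agent can pin maps across restarts; in the actual image this appears to be a small shell step, but the equivalent syscall in Go looks roughly like this sketch:

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Equivalent to: mount bpffs /sys/fs/bpf -t bpf
	// A real implementation would first check /proc/self/mountinfo to see
	// whether /sys/fs/bpf is already a bpf mount and skip the call if so.
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		log.Fatalf("mounting bpffs: %v", err)
	}
	log.Println("mounted bpffs on /sys/fs/bpf")
}
```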
Feb 9 09:43:37.470585 kubelet[2010]: E0209 09:43:37.469541 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:37.475703 env[1142]: time="2024-02-09T09:43:37.473083070Z" level=info msg="CreateContainer within sandbox \"bb21aa8b0c4938fd8b7d882532c5cf518125dde8a23d94d1f6d33ec16544049c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 09:43:37.486455 env[1142]: time="2024-02-09T09:43:37.486036063Z" level=info msg="CreateContainer within sandbox \"bb21aa8b0c4938fd8b7d882532c5cf518125dde8a23d94d1f6d33ec16544049c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"64f7a0984820b2a0d45fa9ff7b58c644de99a33390723ff633e07a27925d5900\"" Feb 9 09:43:37.487078 env[1142]: time="2024-02-09T09:43:37.487049710Z" level=info msg="StartContainer for \"64f7a0984820b2a0d45fa9ff7b58c644de99a33390723ff633e07a27925d5900\"" Feb 9 09:43:37.504110 systemd[1]: Started cri-containerd-64f7a0984820b2a0d45fa9ff7b58c644de99a33390723ff633e07a27925d5900.scope. Feb 9 09:43:37.570097 systemd[1]: cri-containerd-64f7a0984820b2a0d45fa9ff7b58c644de99a33390723ff633e07a27925d5900.scope: Deactivated successfully. Feb 9 09:43:37.571494 env[1142]: time="2024-02-09T09:43:37.571428203Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod537b2d31_c4ca_4f3f_ace3_1c3bf6e38078.slice/cri-containerd-64f7a0984820b2a0d45fa9ff7b58c644de99a33390723ff633e07a27925d5900.scope/memory.events\": no such file or directory" Feb 9 09:43:37.572809 env[1142]: time="2024-02-09T09:43:37.572766198Z" level=info msg="StartContainer for \"64f7a0984820b2a0d45fa9ff7b58c644de99a33390723ff633e07a27925d5900\" returns successfully" Feb 9 09:43:37.591538 env[1142]: time="2024-02-09T09:43:37.591490608Z" level=info msg="shim disconnected" id=64f7a0984820b2a0d45fa9ff7b58c644de99a33390723ff633e07a27925d5900 Feb 9 09:43:37.591796 env[1142]: time="2024-02-09T09:43:37.591776272Z" level=warning msg="cleaning up after shim disconnected" id=64f7a0984820b2a0d45fa9ff7b58c644de99a33390723ff633e07a27925d5900 namespace=k8s.io Feb 9 09:43:37.591889 env[1142]: time="2024-02-09T09:43:37.591874521Z" level=info msg="cleaning up dead shim" Feb 9 09:43:37.602908 env[1142]: time="2024-02-09T09:43:37.602864145Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:43:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2662 runtime=io.containerd.runc.v2\n" Feb 9 09:43:37.818644 systemd[1]: run-containerd-runc-k8s.io-64f7a0984820b2a0d45fa9ff7b58c644de99a33390723ff633e07a27925d5900-runc.tEd4nS.mount: Deactivated successfully. Feb 9 09:43:37.818758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64f7a0984820b2a0d45fa9ff7b58c644de99a33390723ff633e07a27925d5900-rootfs.mount: Deactivated successfully. 
Feb 9 09:43:38.473308 kubelet[2010]: E0209 09:43:38.473271 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:38.475803 env[1142]: time="2024-02-09T09:43:38.475752046Z" level=info msg="CreateContainer within sandbox \"bb21aa8b0c4938fd8b7d882532c5cf518125dde8a23d94d1f6d33ec16544049c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 09:43:38.488237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount272947264.mount: Deactivated successfully. Feb 9 09:43:38.493833 env[1142]: time="2024-02-09T09:43:38.493791580Z" level=info msg="CreateContainer within sandbox \"bb21aa8b0c4938fd8b7d882532c5cf518125dde8a23d94d1f6d33ec16544049c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c\"" Feb 9 09:43:38.494587 env[1142]: time="2024-02-09T09:43:38.494555683Z" level=info msg="StartContainer for \"9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c\"" Feb 9 09:43:38.508146 systemd[1]: Started cri-containerd-9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c.scope. Feb 9 09:43:38.572380 env[1142]: time="2024-02-09T09:43:38.572329285Z" level=info msg="StartContainer for \"9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c\" returns successfully" Feb 9 09:43:38.730732 kubelet[2010]: I0209 09:43:38.730626 2010 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 09:43:38.821153 kubelet[2010]: I0209 09:43:38.821121 2010 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:38.826734 systemd[1]: Created slice kubepods-burstable-pod5bea2de5_c3e7_41b4_80fb_523511ad7bd9.slice. Feb 9 09:43:38.828793 kubelet[2010]: I0209 09:43:38.828765 2010 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:38.832887 systemd[1]: Created slice kubepods-burstable-podc56c1a62_a894_4ff7_aecd_68fc111d7b3c.slice. Feb 9 09:43:38.842317 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
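The kernel's "Unprivileged eBPF is enabled" warning fires as the cilium-agent loads its datapath programs; it flags that unprivileged bpf() calls are permitted, which widens exposure to Spectre-v2 BHB-style data leaks. Where unprivileged BPF is not needed, the kernel.unprivileged_bpf_disabled sysctl closes it. A small sketch of flipping it at runtime — writing 1 makes the change irreversible until reboot, 2 leaves it changeable; Cilium's agent runs privileged, so it should be unaffected, though that is worth verifying per deployment:

```go
package main

import (
	"log"
	"os"
)

func main() {
	// kernel.unprivileged_bpf_disabled:
	//   0 = unprivileged bpf() allowed (the state this log warns about)
	//   1 = disabled; cannot be re-enabled without a reboot
	//   2 = disabled; an admin may later write 0 or 2 again
	const sysctl = "/proc/sys/kernel/unprivileged_bpf_disabled"
	if err := os.WriteFile(sysctl, []byte("2\n"), 0o644); err != nil {
		log.Fatalf("writing %s: %v", sysctl, err)
	}
	log.Println("unprivileged eBPF disabled")
}
```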
Feb 9 09:43:38.966253 kubelet[2010]: I0209 09:43:38.966219 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k87gl\" (UniqueName: \"kubernetes.io/projected/c56c1a62-a894-4ff7-aecd-68fc111d7b3c-kube-api-access-k87gl\") pod \"coredns-787d4945fb-78gbg\" (UID: \"c56c1a62-a894-4ff7-aecd-68fc111d7b3c\") " pod="kube-system/coredns-787d4945fb-78gbg" Feb 9 09:43:38.966253 kubelet[2010]: I0209 09:43:38.966263 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwvjc\" (UniqueName: \"kubernetes.io/projected/5bea2de5-c3e7-41b4-80fb-523511ad7bd9-kube-api-access-qwvjc\") pod \"coredns-787d4945fb-95kp5\" (UID: \"5bea2de5-c3e7-41b4-80fb-523511ad7bd9\") " pod="kube-system/coredns-787d4945fb-95kp5" Feb 9 09:43:38.966446 kubelet[2010]: I0209 09:43:38.966355 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bea2de5-c3e7-41b4-80fb-523511ad7bd9-config-volume\") pod \"coredns-787d4945fb-95kp5\" (UID: \"5bea2de5-c3e7-41b4-80fb-523511ad7bd9\") " pod="kube-system/coredns-787d4945fb-95kp5" Feb 9 09:43:38.966446 kubelet[2010]: I0209 09:43:38.966396 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c56c1a62-a894-4ff7-aecd-68fc111d7b3c-config-volume\") pod \"coredns-787d4945fb-78gbg\" (UID: \"c56c1a62-a894-4ff7-aecd-68fc111d7b3c\") " pod="kube-system/coredns-787d4945fb-78gbg" Feb 9 09:43:39.073308 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 09:43:39.129180 kubelet[2010]: E0209 09:43:39.129144 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:39.129990 env[1142]: time="2024-02-09T09:43:39.129945167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-95kp5,Uid:5bea2de5-c3e7-41b4-80fb-523511ad7bd9,Namespace:kube-system,Attempt:0,}" Feb 9 09:43:39.135423 kubelet[2010]: E0209 09:43:39.135393 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:39.136107 env[1142]: time="2024-02-09T09:43:39.136066736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-78gbg,Uid:c56c1a62-a894-4ff7-aecd-68fc111d7b3c,Namespace:kube-system,Attempt:0,}" Feb 9 09:43:39.477903 kubelet[2010]: E0209 09:43:39.477617 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:39.493085 kubelet[2010]: I0209 09:43:39.493026 2010 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-crz4q" podStartSLOduration=-9.223372022361784e+09 pod.CreationTimestamp="2024-02-09 09:43:25 +0000 UTC" firstStartedPulling="2024-02-09 09:43:25.752399091 +0000 UTC m=+14.469371042" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:43:39.492907762 +0000 UTC m=+28.209879713" watchObservedRunningTime="2024-02-09 09:43:39.492990889 +0000 UTC m=+28.209962840" Feb 9 09:43:40.478974 kubelet[2010]: E0209 09:43:40.478914 2010 dns.go:156] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:40.699172 systemd-networkd[1046]: cilium_host: Link UP Feb 9 09:43:40.699778 systemd-networkd[1046]: cilium_net: Link UP Feb 9 09:43:40.701065 systemd-networkd[1046]: cilium_net: Gained carrier Feb 9 09:43:40.701361 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 09:43:40.701423 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 09:43:40.701509 systemd-networkd[1046]: cilium_host: Gained carrier Feb 9 09:43:40.783215 systemd-networkd[1046]: cilium_vxlan: Link UP Feb 9 09:43:40.783221 systemd-networkd[1046]: cilium_vxlan: Gained carrier Feb 9 09:43:41.061312 kernel: NET: Registered PF_ALG protocol family Feb 9 09:43:41.100530 systemd-networkd[1046]: cilium_host: Gained IPv6LL Feb 9 09:43:41.140460 systemd-networkd[1046]: cilium_net: Gained IPv6LL Feb 9 09:43:41.480037 kubelet[2010]: E0209 09:43:41.479943 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:41.628965 systemd-networkd[1046]: lxc_health: Link UP Feb 9 09:43:41.635673 systemd-networkd[1046]: lxc_health: Gained carrier Feb 9 09:43:41.636321 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 09:43:42.226376 systemd-networkd[1046]: lxc0df458c1b26b: Link UP Feb 9 09:43:42.236510 systemd-networkd[1046]: lxcff059620bf09: Link UP Feb 9 09:43:42.237319 kernel: eth0: renamed from tmpa9ed5 Feb 9 09:43:42.254319 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:43:42.254410 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0df458c1b26b: link becomes ready Feb 9 09:43:42.254299 systemd-networkd[1046]: lxc0df458c1b26b: Gained carrier Feb 9 09:43:42.256460 kernel: eth0: renamed from tmp2e50b Feb 9 09:43:42.261314 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcff059620bf09: link becomes ready Feb 9 09:43:42.261261 systemd-networkd[1046]: lxcff059620bf09: Gained carrier Feb 9 09:43:42.481465 kubelet[2010]: E0209 09:43:42.481380 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:42.572435 systemd-networkd[1046]: cilium_vxlan: Gained IPv6LL Feb 9 09:43:43.414805 systemd-networkd[1046]: lxc_health: Gained IPv6LL Feb 9 09:43:43.415068 systemd-networkd[1046]: lxc0df458c1b26b: Gained IPv6LL Feb 9 09:43:43.482970 kubelet[2010]: E0209 09:43:43.482942 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:44.172413 systemd-networkd[1046]: lxcff059620bf09: Gained IPv6LL Feb 9 09:43:44.484587 kubelet[2010]: E0209 09:43:44.484495 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:45.787034 env[1142]: time="2024-02-09T09:43:45.786961165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:43:45.787034 env[1142]: time="2024-02-09T09:43:45.787034370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:43:45.787435 env[1142]: time="2024-02-09T09:43:45.787062132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:43:45.787435 env[1142]: time="2024-02-09T09:43:45.787263825Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a9ed5cb9683bffba56b72851449fc5f71cc888a8a27aaef8ee69bd694664733e pid=3231 runtime=io.containerd.runc.v2 Feb 9 09:43:45.793349 env[1142]: time="2024-02-09T09:43:45.793254939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:43:45.793349 env[1142]: time="2024-02-09T09:43:45.793320823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:43:45.793349 env[1142]: time="2024-02-09T09:43:45.793331784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:43:45.796259 env[1142]: time="2024-02-09T09:43:45.796208813Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e50ba344148d2b268ec41ddf8ed22ed749d9df62cd72e0921e162747a61ba6a pid=3249 runtime=io.containerd.runc.v2 Feb 9 09:43:45.805677 systemd[1]: run-containerd-runc-k8s.io-a9ed5cb9683bffba56b72851449fc5f71cc888a8a27aaef8ee69bd694664733e-runc.3MkEEH.mount: Deactivated successfully. Feb 9 09:43:45.807989 systemd[1]: Started cri-containerd-a9ed5cb9683bffba56b72851449fc5f71cc888a8a27aaef8ee69bd694664733e.scope. Feb 9 09:43:45.822482 systemd[1]: Started cri-containerd-2e50ba344148d2b268ec41ddf8ed22ed749d9df62cd72e0921e162747a61ba6a.scope. 
Feb 9 09:43:45.877448 systemd-resolved[1088]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 09:43:45.880408 systemd-resolved[1088]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 09:43:45.898217 env[1142]: time="2024-02-09T09:43:45.897863495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-78gbg,Uid:c56c1a62-a894-4ff7-aecd-68fc111d7b3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9ed5cb9683bffba56b72851449fc5f71cc888a8a27aaef8ee69bd694664733e\"" Feb 9 09:43:45.898558 kubelet[2010]: E0209 09:43:45.898539 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:45.903242 env[1142]: time="2024-02-09T09:43:45.903202126Z" level=info msg="CreateContainer within sandbox \"a9ed5cb9683bffba56b72851449fc5f71cc888a8a27aaef8ee69bd694664733e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:43:45.904870 env[1142]: time="2024-02-09T09:43:45.904837274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-95kp5,Uid:5bea2de5-c3e7-41b4-80fb-523511ad7bd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e50ba344148d2b268ec41ddf8ed22ed749d9df62cd72e0921e162747a61ba6a\"" Feb 9 09:43:45.905540 kubelet[2010]: E0209 09:43:45.905519 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:45.907154 env[1142]: time="2024-02-09T09:43:45.907123224Z" level=info msg="CreateContainer within sandbox \"2e50ba344148d2b268ec41ddf8ed22ed749d9df62cd72e0921e162747a61ba6a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:43:45.917364 env[1142]: time="2024-02-09T09:43:45.917255890Z" level=info msg="CreateContainer within sandbox \"a9ed5cb9683bffba56b72851449fc5f71cc888a8a27aaef8ee69bd694664733e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ae7f0b5c0307164ba3e4e5b1468c4ee6597bda250ece901b97b2a479b479418\"" Feb 9 09:43:45.918519 env[1142]: time="2024-02-09T09:43:45.918488291Z" level=info msg="StartContainer for \"0ae7f0b5c0307164ba3e4e5b1468c4ee6597bda250ece901b97b2a479b479418\"" Feb 9 09:43:45.919719 env[1142]: time="2024-02-09T09:43:45.919670649Z" level=info msg="CreateContainer within sandbox \"2e50ba344148d2b268ec41ddf8ed22ed749d9df62cd72e0921e162747a61ba6a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b208c92394b46f4f8a23488b76ccc36b0e5d4297edfbbd0b6ad9276d4a6be905\"" Feb 9 09:43:45.920075 env[1142]: time="2024-02-09T09:43:45.920050714Z" level=info msg="StartContainer for \"b208c92394b46f4f8a23488b76ccc36b0e5d4297edfbbd0b6ad9276d4a6be905\"" Feb 9 09:43:45.939592 systemd[1]: Started cri-containerd-b208c92394b46f4f8a23488b76ccc36b0e5d4297edfbbd0b6ad9276d4a6be905.scope. Feb 9 09:43:45.943950 systemd[1]: Started cri-containerd-0ae7f0b5c0307164ba3e4e5b1468c4ee6597bda250ece901b97b2a479b479418.scope. 
Feb 9 09:43:45.983957 env[1142]: time="2024-02-09T09:43:45.983894710Z" level=info msg="StartContainer for \"0ae7f0b5c0307164ba3e4e5b1468c4ee6597bda250ece901b97b2a479b479418\" returns successfully" Feb 9 09:43:45.997973 env[1142]: time="2024-02-09T09:43:45.997922152Z" level=info msg="StartContainer for \"b208c92394b46f4f8a23488b76ccc36b0e5d4297edfbbd0b6ad9276d4a6be905\" returns successfully" Feb 9 09:43:46.488831 kubelet[2010]: E0209 09:43:46.488797 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:46.490640 kubelet[2010]: E0209 09:43:46.490611 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:46.499590 kubelet[2010]: I0209 09:43:46.499556 2010 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-78gbg" podStartSLOduration=22.499524593 pod.CreationTimestamp="2024-02-09 09:43:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:43:46.498567771 +0000 UTC m=+35.215539722" watchObservedRunningTime="2024-02-09 09:43:46.499524593 +0000 UTC m=+35.216496544" Feb 9 09:43:46.523704 kubelet[2010]: I0209 09:43:46.523672 2010 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-95kp5" podStartSLOduration=22.523627412 pod.CreationTimestamp="2024-02-09 09:43:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:43:46.522629188 +0000 UTC m=+35.239601139" watchObservedRunningTime="2024-02-09 09:43:46.523627412 +0000 UTC m=+35.240599323" Feb 9 09:43:47.492711 kubelet[2010]: E0209 09:43:47.492677 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:47.493164 kubelet[2010]: E0209 09:43:47.493151 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:48.494932 kubelet[2010]: E0209 09:43:48.494903 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:48.495228 kubelet[2010]: E0209 09:43:48.494941 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:50.762553 systemd[1]: Started sshd@5-10.0.0.12:22-10.0.0.1:36438.service. Feb 9 09:43:50.810187 sshd[3436]: Accepted publickey for core from 10.0.0.1 port 36438 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:43:50.811492 sshd[3436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:43:50.814735 systemd-logind[1128]: New session 6 of user core. Feb 9 09:43:50.815584 systemd[1]: Started session-6.scope. Feb 9 09:43:50.987685 sshd[3436]: pam_unix(sshd:session): session closed for user core Feb 9 09:43:50.990262 systemd[1]: sshd@5-10.0.0.12:22-10.0.0.1:36438.service: Deactivated successfully. 
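The huge negative podStartSLOduration values earlier in this log (on the order of -9.22e+09 seconds, next to firstStartedPulling or lastFinishedPulling stamps of "0001-01-01 00:00:00 +0000 UTC") are what Go's time arithmetic yields when one operand is the zero time.Time: time.Time.Sub saturates at the minimum time.Duration (math.MinInt64 nanoseconds, about -9.223372036854776e+09 seconds) rather than overflowing, and the few seconds of offset in the logged figures are the genuinely elapsed time folded into the sum. A minimal reproduction:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	var never time.Time // zero value: 0001-01-01 00:00:00 +0000 UTC
	now := time.Now()

	// Sub clamps to the minimum time.Duration (about -292 years) instead
	// of wrapping around, which is where the -9.22e+09 figures come from.
	d := never.Sub(now)
	fmt.Printf("%.6e seconds\n", d.Seconds()) // ≈ -9.223372e+09
}
```

The healthy entries above (podStartSLOduration=22.49...) show the same tracker with all pull timestamps populated.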
Feb 9 09:43:50.991006 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 09:43:50.991566 systemd-logind[1128]: Session 6 logged out. Waiting for processes to exit. Feb 9 09:43:50.992166 systemd-logind[1128]: Removed session 6. Feb 9 09:43:55.992540 systemd[1]: Started sshd@6-10.0.0.12:22-10.0.0.1:42866.service. Feb 9 09:43:56.039246 sshd[3452]: Accepted publickey for core from 10.0.0.1 port 42866 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:43:56.041188 sshd[3452]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:43:56.045349 systemd-logind[1128]: New session 7 of user core. Feb 9 09:43:56.045545 systemd[1]: Started session-7.scope. Feb 9 09:43:56.163211 sshd[3452]: pam_unix(sshd:session): session closed for user core Feb 9 09:43:56.166016 systemd-logind[1128]: Session 7 logged out. Waiting for processes to exit. Feb 9 09:43:56.166237 systemd[1]: sshd@6-10.0.0.12:22-10.0.0.1:42866.service: Deactivated successfully. Feb 9 09:43:56.167069 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 09:43:56.167781 systemd-logind[1128]: Removed session 7. Feb 9 09:44:01.168512 systemd[1]: Started sshd@7-10.0.0.12:22-10.0.0.1:42872.service. Feb 9 09:44:01.212895 sshd[3468]: Accepted publickey for core from 10.0.0.1 port 42872 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:01.214175 sshd[3468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:01.218331 systemd-logind[1128]: New session 8 of user core. Feb 9 09:44:01.219109 systemd[1]: Started session-8.scope. Feb 9 09:44:01.336495 sshd[3468]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:01.339407 systemd[1]: sshd@7-10.0.0.12:22-10.0.0.1:42872.service: Deactivated successfully. Feb 9 09:44:01.340173 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 09:44:01.343192 systemd-logind[1128]: Session 8 logged out. Waiting for processes to exit. Feb 9 09:44:01.344106 systemd-logind[1128]: Removed session 8. Feb 9 09:44:06.340976 systemd[1]: Started sshd@8-10.0.0.12:22-10.0.0.1:52238.service. Feb 9 09:44:06.391266 sshd[3483]: Accepted publickey for core from 10.0.0.1 port 52238 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:06.392972 sshd[3483]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:06.400236 systemd-logind[1128]: New session 9 of user core. Feb 9 09:44:06.400736 systemd[1]: Started session-9.scope. Feb 9 09:44:06.529733 sshd[3483]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:06.534188 systemd[1]: sshd@8-10.0.0.12:22-10.0.0.1:52238.service: Deactivated successfully. Feb 9 09:44:06.534984 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 09:44:06.535871 systemd-logind[1128]: Session 9 logged out. Waiting for processes to exit. Feb 9 09:44:06.539752 systemd[1]: Started sshd@9-10.0.0.12:22-10.0.0.1:52252.service. Feb 9 09:44:06.540749 systemd-logind[1128]: Removed session 9. Feb 9 09:44:06.588508 sshd[3498]: Accepted publickey for core from 10.0.0.1 port 52252 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:06.590273 sshd[3498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:06.593968 systemd-logind[1128]: New session 10 of user core. Feb 9 09:44:06.594880 systemd[1]: Started session-10.scope. 
Feb 9 09:44:07.470479 sshd[3498]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:07.475887 systemd[1]: Started sshd@10-10.0.0.12:22-10.0.0.1:52260.service. Feb 9 09:44:07.490871 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 09:44:07.491622 systemd[1]: sshd@9-10.0.0.12:22-10.0.0.1:52252.service: Deactivated successfully. Feb 9 09:44:07.492845 systemd-logind[1128]: Session 10 logged out. Waiting for processes to exit. Feb 9 09:44:07.494030 systemd-logind[1128]: Removed session 10. Feb 9 09:44:07.532448 sshd[3508]: Accepted publickey for core from 10.0.0.1 port 52260 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:07.533734 sshd[3508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:07.538348 systemd-logind[1128]: New session 11 of user core. Feb 9 09:44:07.538533 systemd[1]: Started session-11.scope. Feb 9 09:44:07.674796 sshd[3508]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:07.677456 systemd[1]: sshd@10-10.0.0.12:22-10.0.0.1:52260.service: Deactivated successfully. Feb 9 09:44:07.678183 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 09:44:07.679196 systemd-logind[1128]: Session 11 logged out. Waiting for processes to exit. Feb 9 09:44:07.679830 systemd-logind[1128]: Removed session 11. Feb 9 09:44:12.680103 systemd[1]: Started sshd@11-10.0.0.12:22-10.0.0.1:43078.service. Feb 9 09:44:12.724673 sshd[3524]: Accepted publickey for core from 10.0.0.1 port 43078 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:12.725946 sshd[3524]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:12.730640 systemd-logind[1128]: New session 12 of user core. Feb 9 09:44:12.731037 systemd[1]: Started session-12.scope. Feb 9 09:44:12.868958 sshd[3524]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:12.872956 systemd[1]: sshd@11-10.0.0.12:22-10.0.0.1:43078.service: Deactivated successfully. Feb 9 09:44:12.873763 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 09:44:12.874393 systemd-logind[1128]: Session 12 logged out. Waiting for processes to exit. Feb 9 09:44:12.875217 systemd-logind[1128]: Removed session 12. Feb 9 09:44:17.873764 systemd[1]: Started sshd@12-10.0.0.12:22-10.0.0.1:43094.service. Feb 9 09:44:17.914612 sshd[3539]: Accepted publickey for core from 10.0.0.1 port 43094 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:17.915905 sshd[3539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:17.920220 systemd-logind[1128]: New session 13 of user core. Feb 9 09:44:17.920270 systemd[1]: Started session-13.scope. Feb 9 09:44:18.035969 sshd[3539]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:18.039949 systemd[1]: Started sshd@13-10.0.0.12:22-10.0.0.1:43108.service. Feb 9 09:44:18.040966 systemd[1]: sshd@12-10.0.0.12:22-10.0.0.1:43094.service: Deactivated successfully. Feb 9 09:44:18.041665 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 09:44:18.042239 systemd-logind[1128]: Session 13 logged out. Waiting for processes to exit. Feb 9 09:44:18.043047 systemd-logind[1128]: Removed session 13. 
Feb 9 09:44:18.082329 sshd[3551]: Accepted publickey for core from 10.0.0.1 port 43108 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:18.083769 sshd[3551]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:18.087075 systemd-logind[1128]: New session 14 of user core. Feb 9 09:44:18.088050 systemd[1]: Started session-14.scope. Feb 9 09:44:18.286966 sshd[3551]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:18.290598 systemd[1]: Started sshd@14-10.0.0.12:22-10.0.0.1:43110.service. Feb 9 09:44:18.291106 systemd[1]: sshd@13-10.0.0.12:22-10.0.0.1:43108.service: Deactivated successfully. Feb 9 09:44:18.291910 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 09:44:18.292527 systemd-logind[1128]: Session 14 logged out. Waiting for processes to exit. Feb 9 09:44:18.293578 systemd-logind[1128]: Removed session 14. Feb 9 09:44:18.335753 sshd[3563]: Accepted publickey for core from 10.0.0.1 port 43110 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:18.337770 sshd[3563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:18.343404 systemd[1]: Started session-15.scope. Feb 9 09:44:18.343857 systemd-logind[1128]: New session 15 of user core. Feb 9 09:44:19.076538 sshd[3563]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:19.084981 systemd[1]: sshd@14-10.0.0.12:22-10.0.0.1:43110.service: Deactivated successfully. Feb 9 09:44:19.085973 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 09:44:19.086809 systemd-logind[1128]: Session 15 logged out. Waiting for processes to exit. Feb 9 09:44:19.088520 systemd[1]: Started sshd@15-10.0.0.12:22-10.0.0.1:43122.service. Feb 9 09:44:19.090016 systemd-logind[1128]: Removed session 15. Feb 9 09:44:19.139630 sshd[3596]: Accepted publickey for core from 10.0.0.1 port 43122 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:19.141187 sshd[3596]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:19.145477 systemd[1]: Started session-16.scope. Feb 9 09:44:19.145767 systemd-logind[1128]: New session 16 of user core. Feb 9 09:44:19.344215 sshd[3596]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:19.346891 systemd[1]: Started sshd@16-10.0.0.12:22-10.0.0.1:43134.service. Feb 9 09:44:19.347416 systemd[1]: sshd@15-10.0.0.12:22-10.0.0.1:43122.service: Deactivated successfully. Feb 9 09:44:19.348366 systemd-logind[1128]: Session 16 logged out. Waiting for processes to exit. Feb 9 09:44:19.348444 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 09:44:19.349378 systemd-logind[1128]: Removed session 16. Feb 9 09:44:19.389356 sshd[3645]: Accepted publickey for core from 10.0.0.1 port 43134 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:19.390463 sshd[3645]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:19.393590 systemd-logind[1128]: New session 17 of user core. Feb 9 09:44:19.394418 systemd[1]: Started session-17.scope. Feb 9 09:44:19.503641 sshd[3645]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:19.506119 systemd[1]: sshd@16-10.0.0.12:22-10.0.0.1:43134.service: Deactivated successfully. Feb 9 09:44:19.506863 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 09:44:19.507421 systemd-logind[1128]: Session 17 logged out. Waiting for processes to exit. Feb 9 09:44:19.507995 systemd-logind[1128]: Removed session 17. 
Feb 9 09:44:24.407422 kubelet[2010]: E0209 09:44:24.407384 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:24.507901 systemd[1]: Started sshd@17-10.0.0.12:22-10.0.0.1:41432.service. Feb 9 09:44:24.548839 sshd[3659]: Accepted publickey for core from 10.0.0.1 port 41432 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:24.550593 sshd[3659]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:24.554141 systemd-logind[1128]: New session 18 of user core. Feb 9 09:44:24.554984 systemd[1]: Started session-18.scope. Feb 9 09:44:24.663433 sshd[3659]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:24.665912 systemd[1]: sshd@17-10.0.0.12:22-10.0.0.1:41432.service: Deactivated successfully. Feb 9 09:44:24.666710 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 09:44:24.667295 systemd-logind[1128]: Session 18 logged out. Waiting for processes to exit. Feb 9 09:44:24.668017 systemd-logind[1128]: Removed session 18. Feb 9 09:44:29.668419 systemd[1]: Started sshd@18-10.0.0.12:22-10.0.0.1:41448.service. Feb 9 09:44:29.709142 sshd[3701]: Accepted publickey for core from 10.0.0.1 port 41448 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:29.710464 sshd[3701]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:29.713520 systemd-logind[1128]: New session 19 of user core. Feb 9 09:44:29.714339 systemd[1]: Started session-19.scope. Feb 9 09:44:29.818156 sshd[3701]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:29.820490 systemd[1]: sshd@18-10.0.0.12:22-10.0.0.1:41448.service: Deactivated successfully. Feb 9 09:44:29.821229 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 09:44:29.821776 systemd-logind[1128]: Session 19 logged out. Waiting for processes to exit. Feb 9 09:44:29.822576 systemd-logind[1128]: Removed session 19. Feb 9 09:44:31.407248 kubelet[2010]: E0209 09:44:31.407215 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:34.822183 systemd[1]: Started sshd@19-10.0.0.12:22-10.0.0.1:53528.service. Feb 9 09:44:34.862669 sshd[3714]: Accepted publickey for core from 10.0.0.1 port 53528 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:34.864041 sshd[3714]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:34.867566 systemd-logind[1128]: New session 20 of user core. Feb 9 09:44:34.868012 systemd[1]: Started session-20.scope. Feb 9 09:44:34.972725 sshd[3714]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:34.975438 systemd[1]: sshd@19-10.0.0.12:22-10.0.0.1:53528.service: Deactivated successfully. Feb 9 09:44:34.976190 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 09:44:34.976763 systemd-logind[1128]: Session 20 logged out. Waiting for processes to exit. Feb 9 09:44:34.977491 systemd-logind[1128]: Removed session 20. Feb 9 09:44:39.977250 systemd[1]: Started sshd@20-10.0.0.12:22-10.0.0.1:53530.service. 
Feb 9 09:44:40.017908 sshd[3727]: Accepted publickey for core from 10.0.0.1 port 53530 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:40.019216 sshd[3727]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:40.022339 systemd-logind[1128]: New session 21 of user core. Feb 9 09:44:40.023247 systemd[1]: Started session-21.scope. Feb 9 09:44:40.126916 sshd[3727]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:40.130519 systemd[1]: Started sshd@21-10.0.0.12:22-10.0.0.1:53536.service. Feb 9 09:44:40.131025 systemd[1]: sshd@20-10.0.0.12:22-10.0.0.1:53530.service: Deactivated successfully. Feb 9 09:44:40.131749 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 09:44:40.132302 systemd-logind[1128]: Session 21 logged out. Waiting for processes to exit. Feb 9 09:44:40.133002 systemd-logind[1128]: Removed session 21. Feb 9 09:44:40.172564 sshd[3739]: Accepted publickey for core from 10.0.0.1 port 53536 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:40.173663 sshd[3739]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:40.176602 systemd-logind[1128]: New session 22 of user core. Feb 9 09:44:40.177428 systemd[1]: Started session-22.scope. Feb 9 09:44:41.408186 kubelet[2010]: E0209 09:44:41.408158 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:42.140343 env[1142]: time="2024-02-09T09:44:42.140264909Z" level=info msg="StopContainer for \"741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1\" with timeout 30 (s)" Feb 9 09:44:42.141866 env[1142]: time="2024-02-09T09:44:42.141800746Z" level=info msg="Stop container \"741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1\" with signal terminated" Feb 9 09:44:42.151037 systemd[1]: run-containerd-runc-k8s.io-9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c-runc.d2N2gg.mount: Deactivated successfully. Feb 9 09:44:42.159625 systemd[1]: cri-containerd-741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1.scope: Deactivated successfully. Feb 9 09:44:42.172625 env[1142]: time="2024-02-09T09:44:42.172561725Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:44:42.177154 env[1142]: time="2024-02-09T09:44:42.177124636Z" level=info msg="StopContainer for \"9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c\" with timeout 1 (s)" Feb 9 09:44:42.177182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1-rootfs.mount: Deactivated successfully. 
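The StopContainer calls below carry an explicit grace period ("with timeout 30 (s)", then "with signal terminated"): the runtime delivers SIGTERM and only escalates if the process outlives the timeout. The same pattern, sketched with Python's subprocess module as a stand-in for the container runtime:

    import subprocess

    def stop_container(proc: subprocess.Popen, timeout: float = 30.0) -> None:
        proc.terminate()  # SIGTERM -- "Stop container ... with signal terminated"
        try:
            proc.wait(timeout=timeout)  # the "with timeout 30 (s)" grace period
        except subprocess.TimeoutExpired:
            proc.kill()   # escalate to SIGKILL if the grace period expires
            proc.wait()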
Feb 9 09:44:42.177463 env[1142]: time="2024-02-09T09:44:42.177440595Z" level=info msg="Stop container \"9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c\" with signal terminated" Feb 9 09:44:42.183760 systemd-networkd[1046]: lxc_health: Link DOWN Feb 9 09:44:42.183766 systemd-networkd[1046]: lxc_health: Lost carrier Feb 9 09:44:42.186035 env[1142]: time="2024-02-09T09:44:42.185998298Z" level=info msg="shim disconnected" id=741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1 Feb 9 09:44:42.186140 env[1142]: time="2024-02-09T09:44:42.186038298Z" level=warning msg="cleaning up after shim disconnected" id=741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1 namespace=k8s.io Feb 9 09:44:42.186140 env[1142]: time="2024-02-09T09:44:42.186051538Z" level=info msg="cleaning up dead shim" Feb 9 09:44:42.193412 env[1142]: time="2024-02-09T09:44:42.193360924Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:44:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3793 runtime=io.containerd.runc.v2\n" Feb 9 09:44:42.195880 env[1142]: time="2024-02-09T09:44:42.195844679Z" level=info msg="StopContainer for \"741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1\" returns successfully" Feb 9 09:44:42.196510 env[1142]: time="2024-02-09T09:44:42.196482558Z" level=info msg="StopPodSandbox for \"baa8d46bc7ff0856fdff5990342343d51b451209fd22a744c8ce854247832714\"" Feb 9 09:44:42.196662 env[1142]: time="2024-02-09T09:44:42.196640277Z" level=info msg="Container to stop \"741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:44:42.198050 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-baa8d46bc7ff0856fdff5990342343d51b451209fd22a744c8ce854247832714-shm.mount: Deactivated successfully. Feb 9 09:44:42.205837 systemd[1]: cri-containerd-baa8d46bc7ff0856fdff5990342343d51b451209fd22a744c8ce854247832714.scope: Deactivated successfully. Feb 9 09:44:42.210652 systemd[1]: cri-containerd-9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c.scope: Deactivated successfully. Feb 9 09:44:42.211264 systemd[1]: cri-containerd-9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c.scope: Consumed 6.514s CPU time. 
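"Consumed 6.514s CPU time" above is systemd's per-unit CPU accounting, reported as the scope is torn down; the same accounting shows up further down for the pod slice (6.730s) and for the longer SSH sessions (1.249s, 1.087s). A sketch that totals these figures per unit from a dump like this one; the regex is an assumption matched against the wording of these lines:

    import collections
    import re
    import sys

    CPU = re.compile(r"(\S+): Consumed ([0-9.]+)s CPU time")

    totals: collections.Counter = collections.Counter()
    for line in sys.stdin:
        for unit, secs in CPU.findall(line):
            totals[unit] += float(secs)
    for unit, secs in totals.most_common():
        print(f"{secs:8.3f}s  {unit}")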
Feb 9 09:44:42.230696 env[1142]: time="2024-02-09T09:44:42.230644050Z" level=info msg="shim disconnected" id=baa8d46bc7ff0856fdff5990342343d51b451209fd22a744c8ce854247832714 Feb 9 09:44:42.231373 env[1142]: time="2024-02-09T09:44:42.231346889Z" level=warning msg="cleaning up after shim disconnected" id=baa8d46bc7ff0856fdff5990342343d51b451209fd22a744c8ce854247832714 namespace=k8s.io Feb 9 09:44:42.231477 env[1142]: time="2024-02-09T09:44:42.231461289Z" level=info msg="cleaning up dead shim" Feb 9 09:44:42.231573 env[1142]: time="2024-02-09T09:44:42.230819370Z" level=info msg="shim disconnected" id=9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c Feb 9 09:44:42.231615 env[1142]: time="2024-02-09T09:44:42.231571248Z" level=warning msg="cleaning up after shim disconnected" id=9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c namespace=k8s.io Feb 9 09:44:42.231615 env[1142]: time="2024-02-09T09:44:42.231581288Z" level=info msg="cleaning up dead shim" Feb 9 09:44:42.238901 env[1142]: time="2024-02-09T09:44:42.238864754Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:44:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3839 runtime=io.containerd.runc.v2\n" Feb 9 09:44:42.239347 env[1142]: time="2024-02-09T09:44:42.239317913Z" level=info msg="TearDown network for sandbox \"baa8d46bc7ff0856fdff5990342343d51b451209fd22a744c8ce854247832714\" successfully" Feb 9 09:44:42.239664 env[1142]: time="2024-02-09T09:44:42.239639432Z" level=info msg="StopPodSandbox for \"baa8d46bc7ff0856fdff5990342343d51b451209fd22a744c8ce854247832714\" returns successfully" Feb 9 09:44:42.239936 env[1142]: time="2024-02-09T09:44:42.239660472Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:44:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3838 runtime=io.containerd.runc.v2\n" Feb 9 09:44:42.242317 env[1142]: time="2024-02-09T09:44:42.242231747Z" level=info msg="StopContainer for \"9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c\" returns successfully" Feb 9 09:44:42.242675 env[1142]: time="2024-02-09T09:44:42.242642307Z" level=info msg="StopPodSandbox for \"bb21aa8b0c4938fd8b7d882532c5cf518125dde8a23d94d1f6d33ec16544049c\"" Feb 9 09:44:42.242830 env[1142]: time="2024-02-09T09:44:42.242702906Z" level=info msg="Container to stop \"f147ca36d6ba3686d0f8ea4b69176509d1eed770a96c9e0f7a3381d244c21056\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:44:42.242830 env[1142]: time="2024-02-09T09:44:42.242717866Z" level=info msg="Container to stop \"3bf625972650cd79e8c48b9adda240d200130311f3f1570aabd1d91ed3a684b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:44:42.242830 env[1142]: time="2024-02-09T09:44:42.242730786Z" level=info msg="Container to stop \"ce928ca6e063edd2c47481796d14bd7d294fed22b67f176836987903dadbf209\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:44:42.242830 env[1142]: time="2024-02-09T09:44:42.242743026Z" level=info msg="Container to stop \"64f7a0984820b2a0d45fa9ff7b58c644de99a33390723ff633e07a27925d5900\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:44:42.242830 env[1142]: time="2024-02-09T09:44:42.242753826Z" level=info msg="Container to stop \"9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:44:42.247862 systemd[1]: 
cri-containerd-bb21aa8b0c4938fd8b7d882532c5cf518125dde8a23d94d1f6d33ec16544049c.scope: Deactivated successfully. Feb 9 09:44:42.271229 env[1142]: time="2024-02-09T09:44:42.271180770Z" level=info msg="shim disconnected" id=bb21aa8b0c4938fd8b7d882532c5cf518125dde8a23d94d1f6d33ec16544049c Feb 9 09:44:42.271229 env[1142]: time="2024-02-09T09:44:42.271226730Z" level=warning msg="cleaning up after shim disconnected" id=bb21aa8b0c4938fd8b7d882532c5cf518125dde8a23d94d1f6d33ec16544049c namespace=k8s.io Feb 9 09:44:42.271229 env[1142]: time="2024-02-09T09:44:42.271236690Z" level=info msg="cleaning up dead shim" Feb 9 09:44:42.279500 env[1142]: time="2024-02-09T09:44:42.279452234Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:44:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3881 runtime=io.containerd.runc.v2\n" Feb 9 09:44:42.279790 env[1142]: time="2024-02-09T09:44:42.279747353Z" level=info msg="TearDown network for sandbox \"bb21aa8b0c4938fd8b7d882532c5cf518125dde8a23d94d1f6d33ec16544049c\" successfully" Feb 9 09:44:42.279790 env[1142]: time="2024-02-09T09:44:42.279775233Z" level=info msg="StopPodSandbox for \"bb21aa8b0c4938fd8b7d882532c5cf518125dde8a23d94d1f6d33ec16544049c\" returns successfully" Feb 9 09:44:42.432347 kubelet[2010]: I0209 09:44:42.431258 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-cilium-config-path\") pod \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " Feb 9 09:44:42.432347 kubelet[2010]: I0209 09:44:42.431318 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-cilium-cgroup\") pod \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " Feb 9 09:44:42.432347 kubelet[2010]: I0209 09:44:42.431345 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99b43033-3a61-4c85-a01e-4f12e3a78c40-cilium-config-path\") pod \"99b43033-3a61-4c85-a01e-4f12e3a78c40\" (UID: \"99b43033-3a61-4c85-a01e-4f12e3a78c40\") " Feb 9 09:44:42.432347 kubelet[2010]: I0209 09:44:42.431363 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-xtables-lock\") pod \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " Feb 9 09:44:42.432347 kubelet[2010]: I0209 09:44:42.431390 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-cni-path\") pod \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " Feb 9 09:44:42.432347 kubelet[2010]: I0209 09:44:42.431409 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-hubble-tls\") pod \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " Feb 9 09:44:42.432804 kubelet[2010]: I0209 09:44:42.431426 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-etc-cni-netd\") pod \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " Feb 9 09:44:42.432804 kubelet[2010]: I0209 09:44:42.431442 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-lib-modules\") pod \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " Feb 9 09:44:42.432804 kubelet[2010]: I0209 09:44:42.431468 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-hostproc\") pod \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " Feb 9 09:44:42.432804 kubelet[2010]: I0209 09:44:42.431488 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jlq2\" (UniqueName: \"kubernetes.io/projected/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-kube-api-access-5jlq2\") pod \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " Feb 9 09:44:42.432804 kubelet[2010]: I0209 09:44:42.431507 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwwv4\" (UniqueName: \"kubernetes.io/projected/99b43033-3a61-4c85-a01e-4f12e3a78c40-kube-api-access-jwwv4\") pod \"99b43033-3a61-4c85-a01e-4f12e3a78c40\" (UID: \"99b43033-3a61-4c85-a01e-4f12e3a78c40\") " Feb 9 09:44:42.432804 kubelet[2010]: I0209 09:44:42.431531 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-bpf-maps\") pod \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " Feb 9 09:44:42.432957 kubelet[2010]: I0209 09:44:42.431553 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-clustermesh-secrets\") pod \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " Feb 9 09:44:42.432957 kubelet[2010]: I0209 09:44:42.431572 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-host-proc-sys-kernel\") pod \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " Feb 9 09:44:42.432957 kubelet[2010]: I0209 09:44:42.431591 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-host-proc-sys-net\") pod \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " Feb 9 09:44:42.432957 kubelet[2010]: I0209 09:44:42.431615 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-cilium-run\") pod \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\" (UID: \"537b2d31-c4ca-4f3f-ace3-1c3bf6e38078\") " Feb 9 09:44:42.432957 kubelet[2010]: I0209 09:44:42.432865 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078" (UID: "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:42.433069 kubelet[2010]: I0209 09:44:42.432871 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078" (UID: "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:42.433155 kubelet[2010]: I0209 09:44:42.433111 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078" (UID: "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:42.433215 kubelet[2010]: I0209 09:44:42.433199 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-hostproc" (OuterVolumeSpecName: "hostproc") pod "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078" (UID: "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:42.433363 kubelet[2010]: I0209 09:44:42.433325 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078" (UID: "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:42.433428 kubelet[2010]: I0209 09:44:42.433410 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078" (UID: "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:42.433466 kubelet[2010]: I0209 09:44:42.433448 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078" (UID: "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:42.433496 kubelet[2010]: I0209 09:44:42.433467 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078" (UID: "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:42.433768 kubelet[2010]: W0209 09:44:42.433725 2010 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/99b43033-3a61-4c85-a01e-4f12e3a78c40/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:44:42.433816 kubelet[2010]: W0209 09:44:42.433762 2010 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:44:42.435852 kubelet[2010]: I0209 09:44:42.435813 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078" (UID: "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:44:42.435909 kubelet[2010]: I0209 09:44:42.435874 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-cni-path" (OuterVolumeSpecName: "cni-path") pod "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078" (UID: "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:42.435936 kubelet[2010]: I0209 09:44:42.435918 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078" (UID: "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:44:42.436094 kubelet[2010]: I0209 09:44:42.436065 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078" (UID: "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:42.437308 kubelet[2010]: I0209 09:44:42.436222 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99b43033-3a61-4c85-a01e-4f12e3a78c40-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "99b43033-3a61-4c85-a01e-4f12e3a78c40" (UID: "99b43033-3a61-4c85-a01e-4f12e3a78c40"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:44:42.437829 kubelet[2010]: I0209 09:44:42.437779 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-kube-api-access-5jlq2" (OuterVolumeSpecName: "kube-api-access-5jlq2") pod "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078" (UID: "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078"). InnerVolumeSpecName "kube-api-access-5jlq2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:44:42.438193 kubelet[2010]: I0209 09:44:42.438167 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99b43033-3a61-4c85-a01e-4f12e3a78c40-kube-api-access-jwwv4" (OuterVolumeSpecName: "kube-api-access-jwwv4") pod "99b43033-3a61-4c85-a01e-4f12e3a78c40" (UID: "99b43033-3a61-4c85-a01e-4f12e3a78c40"). InnerVolumeSpecName "kube-api-access-jwwv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:44:42.438495 kubelet[2010]: I0209 09:44:42.438448 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078" (UID: "537b2d31-c4ca-4f3f-ace3-1c3bf6e38078"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:44:42.531841 kubelet[2010]: I0209 09:44:42.531784 2010 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:42.531841 kubelet[2010]: I0209 09:44:42.531825 2010 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:42.531841 kubelet[2010]: I0209 09:44:42.531835 2010 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99b43033-3a61-4c85-a01e-4f12e3a78c40-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:42.531841 kubelet[2010]: I0209 09:44:42.531844 2010 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:42.531841 kubelet[2010]: I0209 09:44:42.531855 2010 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:42.531841 kubelet[2010]: I0209 09:44:42.531864 2010 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:42.532134 kubelet[2010]: I0209 09:44:42.531874 2010 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:42.532134 kubelet[2010]: I0209 09:44:42.531883 2010 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:42.532134 kubelet[2010]: I0209 09:44:42.531892 2010 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:42.532134 kubelet[2010]: I0209 09:44:42.531905 2010 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-jwwv4\" (UniqueName: 
\"kubernetes.io/projected/99b43033-3a61-4c85-a01e-4f12e3a78c40-kube-api-access-jwwv4\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:42.532134 kubelet[2010]: I0209 09:44:42.531914 2010 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:42.532134 kubelet[2010]: I0209 09:44:42.531923 2010 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:42.532134 kubelet[2010]: I0209 09:44:42.531932 2010 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:42.532134 kubelet[2010]: I0209 09:44:42.531941 2010 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-5jlq2\" (UniqueName: \"kubernetes.io/projected/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-kube-api-access-5jlq2\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:42.532391 kubelet[2010]: I0209 09:44:42.531949 2010 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:42.532391 kubelet[2010]: I0209 09:44:42.531958 2010 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:42.595218 kubelet[2010]: I0209 09:44:42.595191 2010 scope.go:115] "RemoveContainer" containerID="9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c" Feb 9 09:44:42.597511 env[1142]: time="2024-02-09T09:44:42.597463366Z" level=info msg="RemoveContainer for \"9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c\"" Feb 9 09:44:42.600504 systemd[1]: Removed slice kubepods-burstable-pod537b2d31_c4ca_4f3f_ace3_1c3bf6e38078.slice. Feb 9 09:44:42.600585 systemd[1]: kubepods-burstable-pod537b2d31_c4ca_4f3f_ace3_1c3bf6e38078.slice: Consumed 6.730s CPU time. Feb 9 09:44:42.601949 env[1142]: time="2024-02-09T09:44:42.601913437Z" level=info msg="RemoveContainer for \"9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c\" returns successfully" Feb 9 09:44:42.602128 kubelet[2010]: I0209 09:44:42.602105 2010 scope.go:115] "RemoveContainer" containerID="64f7a0984820b2a0d45fa9ff7b58c644de99a33390723ff633e07a27925d5900" Feb 9 09:44:42.603096 env[1142]: time="2024-02-09T09:44:42.603069155Z" level=info msg="RemoveContainer for \"64f7a0984820b2a0d45fa9ff7b58c644de99a33390723ff633e07a27925d5900\"" Feb 9 09:44:42.605053 systemd[1]: Removed slice kubepods-besteffort-pod99b43033_3a61_4c85_a01e_4f12e3a78c40.slice. 
Feb 9 09:44:42.605396 env[1142]: time="2024-02-09T09:44:42.605369990Z" level=info msg="RemoveContainer for \"64f7a0984820b2a0d45fa9ff7b58c644de99a33390723ff633e07a27925d5900\" returns successfully" Feb 9 09:44:42.605605 kubelet[2010]: I0209 09:44:42.605584 2010 scope.go:115] "RemoveContainer" containerID="f147ca36d6ba3686d0f8ea4b69176509d1eed770a96c9e0f7a3381d244c21056" Feb 9 09:44:42.606463 env[1142]: time="2024-02-09T09:44:42.606409228Z" level=info msg="RemoveContainer for \"f147ca36d6ba3686d0f8ea4b69176509d1eed770a96c9e0f7a3381d244c21056\"" Feb 9 09:44:42.608324 env[1142]: time="2024-02-09T09:44:42.608266064Z" level=info msg="RemoveContainer for \"f147ca36d6ba3686d0f8ea4b69176509d1eed770a96c9e0f7a3381d244c21056\" returns successfully" Feb 9 09:44:42.608444 kubelet[2010]: I0209 09:44:42.608427 2010 scope.go:115] "RemoveContainer" containerID="ce928ca6e063edd2c47481796d14bd7d294fed22b67f176836987903dadbf209" Feb 9 09:44:42.609297 env[1142]: time="2024-02-09T09:44:42.609254542Z" level=info msg="RemoveContainer for \"ce928ca6e063edd2c47481796d14bd7d294fed22b67f176836987903dadbf209\"" Feb 9 09:44:42.612163 env[1142]: time="2024-02-09T09:44:42.612091377Z" level=info msg="RemoveContainer for \"ce928ca6e063edd2c47481796d14bd7d294fed22b67f176836987903dadbf209\" returns successfully" Feb 9 09:44:42.612396 kubelet[2010]: I0209 09:44:42.612373 2010 scope.go:115] "RemoveContainer" containerID="3bf625972650cd79e8c48b9adda240d200130311f3f1570aabd1d91ed3a684b5" Feb 9 09:44:42.615111 env[1142]: time="2024-02-09T09:44:42.615077931Z" level=info msg="RemoveContainer for \"3bf625972650cd79e8c48b9adda240d200130311f3f1570aabd1d91ed3a684b5\"" Feb 9 09:44:42.617276 env[1142]: time="2024-02-09T09:44:42.617242207Z" level=info msg="RemoveContainer for \"3bf625972650cd79e8c48b9adda240d200130311f3f1570aabd1d91ed3a684b5\" returns successfully" Feb 9 09:44:42.617448 kubelet[2010]: I0209 09:44:42.617418 2010 scope.go:115] "RemoveContainer" containerID="9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c" Feb 9 09:44:42.618399 env[1142]: time="2024-02-09T09:44:42.617774446Z" level=error msg="ContainerStatus for \"9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c\": not found" Feb 9 09:44:42.619047 kubelet[2010]: E0209 09:44:42.618866 2010 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c\": not found" containerID="9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c" Feb 9 09:44:42.619047 kubelet[2010]: I0209 09:44:42.618912 2010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c} err="failed to get container status \"9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c\": not found" Feb 9 09:44:42.619047 kubelet[2010]: I0209 09:44:42.618967 2010 scope.go:115] "RemoveContainer" containerID="64f7a0984820b2a0d45fa9ff7b58c644de99a33390723ff633e07a27925d5900" Feb 9 09:44:42.619353 env[1142]: time="2024-02-09T09:44:42.619160323Z" level=error msg="ContainerStatus for 
\"64f7a0984820b2a0d45fa9ff7b58c644de99a33390723ff633e07a27925d5900\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64f7a0984820b2a0d45fa9ff7b58c644de99a33390723ff633e07a27925d5900\": not found" Feb 9 09:44:42.619422 kubelet[2010]: E0209 09:44:42.619405 2010 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64f7a0984820b2a0d45fa9ff7b58c644de99a33390723ff633e07a27925d5900\": not found" containerID="64f7a0984820b2a0d45fa9ff7b58c644de99a33390723ff633e07a27925d5900" Feb 9 09:44:42.619470 kubelet[2010]: I0209 09:44:42.619433 2010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:64f7a0984820b2a0d45fa9ff7b58c644de99a33390723ff633e07a27925d5900} err="failed to get container status \"64f7a0984820b2a0d45fa9ff7b58c644de99a33390723ff633e07a27925d5900\": rpc error: code = NotFound desc = an error occurred when try to find container \"64f7a0984820b2a0d45fa9ff7b58c644de99a33390723ff633e07a27925d5900\": not found" Feb 9 09:44:42.619470 kubelet[2010]: I0209 09:44:42.619444 2010 scope.go:115] "RemoveContainer" containerID="f147ca36d6ba3686d0f8ea4b69176509d1eed770a96c9e0f7a3381d244c21056" Feb 9 09:44:42.622118 env[1142]: time="2024-02-09T09:44:42.619681442Z" level=error msg="ContainerStatus for \"f147ca36d6ba3686d0f8ea4b69176509d1eed770a96c9e0f7a3381d244c21056\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f147ca36d6ba3686d0f8ea4b69176509d1eed770a96c9e0f7a3381d244c21056\": not found" Feb 9 09:44:42.622118 env[1142]: time="2024-02-09T09:44:42.620032681Z" level=error msg="ContainerStatus for \"ce928ca6e063edd2c47481796d14bd7d294fed22b67f176836987903dadbf209\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce928ca6e063edd2c47481796d14bd7d294fed22b67f176836987903dadbf209\": not found" Feb 9 09:44:42.622118 env[1142]: time="2024-02-09T09:44:42.620372001Z" level=error msg="ContainerStatus for \"3bf625972650cd79e8c48b9adda240d200130311f3f1570aabd1d91ed3a684b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3bf625972650cd79e8c48b9adda240d200130311f3f1570aabd1d91ed3a684b5\": not found" Feb 9 09:44:42.622118 env[1142]: time="2024-02-09T09:44:42.621751918Z" level=info msg="RemoveContainer for \"741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1\"" Feb 9 09:44:42.622268 kubelet[2010]: E0209 09:44:42.619841 2010 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f147ca36d6ba3686d0f8ea4b69176509d1eed770a96c9e0f7a3381d244c21056\": not found" containerID="f147ca36d6ba3686d0f8ea4b69176509d1eed770a96c9e0f7a3381d244c21056" Feb 9 09:44:42.622268 kubelet[2010]: I0209 09:44:42.619864 2010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f147ca36d6ba3686d0f8ea4b69176509d1eed770a96c9e0f7a3381d244c21056} err="failed to get container status \"f147ca36d6ba3686d0f8ea4b69176509d1eed770a96c9e0f7a3381d244c21056\": rpc error: code = NotFound desc = an error occurred when try to find container \"f147ca36d6ba3686d0f8ea4b69176509d1eed770a96c9e0f7a3381d244c21056\": not found" Feb 9 09:44:42.622268 kubelet[2010]: I0209 09:44:42.619873 2010 scope.go:115] "RemoveContainer" 
containerID="ce928ca6e063edd2c47481796d14bd7d294fed22b67f176836987903dadbf209" Feb 9 09:44:42.622268 kubelet[2010]: E0209 09:44:42.620176 2010 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce928ca6e063edd2c47481796d14bd7d294fed22b67f176836987903dadbf209\": not found" containerID="ce928ca6e063edd2c47481796d14bd7d294fed22b67f176836987903dadbf209" Feb 9 09:44:42.622268 kubelet[2010]: I0209 09:44:42.620201 2010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ce928ca6e063edd2c47481796d14bd7d294fed22b67f176836987903dadbf209} err="failed to get container status \"ce928ca6e063edd2c47481796d14bd7d294fed22b67f176836987903dadbf209\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce928ca6e063edd2c47481796d14bd7d294fed22b67f176836987903dadbf209\": not found" Feb 9 09:44:42.622268 kubelet[2010]: I0209 09:44:42.620212 2010 scope.go:115] "RemoveContainer" containerID="3bf625972650cd79e8c48b9adda240d200130311f3f1570aabd1d91ed3a684b5" Feb 9 09:44:42.622419 kubelet[2010]: E0209 09:44:42.620505 2010 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3bf625972650cd79e8c48b9adda240d200130311f3f1570aabd1d91ed3a684b5\": not found" containerID="3bf625972650cd79e8c48b9adda240d200130311f3f1570aabd1d91ed3a684b5" Feb 9 09:44:42.622419 kubelet[2010]: I0209 09:44:42.620531 2010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:3bf625972650cd79e8c48b9adda240d200130311f3f1570aabd1d91ed3a684b5} err="failed to get container status \"3bf625972650cd79e8c48b9adda240d200130311f3f1570aabd1d91ed3a684b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"3bf625972650cd79e8c48b9adda240d200130311f3f1570aabd1d91ed3a684b5\": not found" Feb 9 09:44:42.622419 kubelet[2010]: I0209 09:44:42.620541 2010 scope.go:115] "RemoveContainer" containerID="741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1" Feb 9 09:44:42.626622 env[1142]: time="2024-02-09T09:44:42.624082793Z" level=info msg="RemoveContainer for \"741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1\" returns successfully" Feb 9 09:44:42.626834 kubelet[2010]: I0209 09:44:42.626811 2010 scope.go:115] "RemoveContainer" containerID="741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1" Feb 9 09:44:42.627135 env[1142]: time="2024-02-09T09:44:42.627086227Z" level=error msg="ContainerStatus for \"741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1\": not found" Feb 9 09:44:42.627251 kubelet[2010]: E0209 09:44:42.627234 2010 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1\": not found" containerID="741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1" Feb 9 09:44:42.627299 kubelet[2010]: I0209 09:44:42.627266 2010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1} err="failed to get container status 
\"741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"741583282f162358d27e3390233795a6372bc20d57f96a1bb0732f91908ad5a1\": not found" Feb 9 09:44:43.146374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b5a9e5fb30a9b808d981f8fcbe54ff9f565d77ffe433af3122394642d3f671c-rootfs.mount: Deactivated successfully. Feb 9 09:44:43.146483 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb21aa8b0c4938fd8b7d882532c5cf518125dde8a23d94d1f6d33ec16544049c-rootfs.mount: Deactivated successfully. Feb 9 09:44:43.146538 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bb21aa8b0c4938fd8b7d882532c5cf518125dde8a23d94d1f6d33ec16544049c-shm.mount: Deactivated successfully. Feb 9 09:44:43.146596 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-baa8d46bc7ff0856fdff5990342343d51b451209fd22a744c8ce854247832714-rootfs.mount: Deactivated successfully. Feb 9 09:44:43.146654 systemd[1]: var-lib-kubelet-pods-537b2d31\x2dc4ca\x2d4f3f\x2dace3\x2d1c3bf6e38078-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5jlq2.mount: Deactivated successfully. Feb 9 09:44:43.146708 systemd[1]: var-lib-kubelet-pods-99b43033\x2d3a61\x2d4c85\x2da01e\x2d4f12e3a78c40-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djwwv4.mount: Deactivated successfully. Feb 9 09:44:43.146764 systemd[1]: var-lib-kubelet-pods-537b2d31\x2dc4ca\x2d4f3f\x2dace3\x2d1c3bf6e38078-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 09:44:43.146824 systemd[1]: var-lib-kubelet-pods-537b2d31\x2dc4ca\x2d4f3f\x2dace3\x2d1c3bf6e38078-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 09:44:43.409768 kubelet[2010]: I0209 09:44:43.409676 2010 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=537b2d31-c4ca-4f3f-ace3-1c3bf6e38078 path="/var/lib/kubelet/pods/537b2d31-c4ca-4f3f-ace3-1c3bf6e38078/volumes" Feb 9 09:44:43.410273 kubelet[2010]: I0209 09:44:43.410228 2010 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=99b43033-3a61-4c85-a01e-4f12e3a78c40 path="/var/lib/kubelet/pods/99b43033-3a61-4c85-a01e-4f12e3a78c40/volumes" Feb 9 09:44:44.075164 sshd[3739]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:44.078430 systemd[1]: Started sshd@22-10.0.0.12:22-10.0.0.1:47844.service. Feb 9 09:44:44.080276 systemd[1]: sshd@21-10.0.0.12:22-10.0.0.1:53536.service: Deactivated successfully. Feb 9 09:44:44.080980 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 09:44:44.081149 systemd[1]: session-22.scope: Consumed 1.249s CPU time. Feb 9 09:44:44.081569 systemd-logind[1128]: Session 22 logged out. Waiting for processes to exit. Feb 9 09:44:44.082287 systemd-logind[1128]: Removed session 22. Feb 9 09:44:44.121078 sshd[3899]: Accepted publickey for core from 10.0.0.1 port 47844 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:44.122106 sshd[3899]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:44.125006 systemd-logind[1128]: New session 23 of user core. Feb 9 09:44:44.125762 systemd[1]: Started session-23.scope. Feb 9 09:44:45.296601 sshd[3899]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:45.300086 systemd[1]: Started sshd@23-10.0.0.12:22-10.0.0.1:47858.service. Feb 9 09:44:45.301936 systemd[1]: sshd@22-10.0.0.12:22-10.0.0.1:47844.service: Deactivated successfully. 
Feb 9 09:44:45.302617 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 09:44:45.302785 systemd[1]: session-23.scope: Consumed 1.087s CPU time. Feb 9 09:44:45.304041 kubelet[2010]: I0209 09:44:45.304008 2010 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:44:45.304525 kubelet[2010]: E0209 09:44:45.304506 2010 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="537b2d31-c4ca-4f3f-ace3-1c3bf6e38078" containerName="cilium-agent" Feb 9 09:44:45.304894 kubelet[2010]: E0209 09:44:45.304878 2010 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="99b43033-3a61-4c85-a01e-4f12e3a78c40" containerName="cilium-operator" Feb 9 09:44:45.305093 kubelet[2010]: E0209 09:44:45.305080 2010 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="537b2d31-c4ca-4f3f-ace3-1c3bf6e38078" containerName="mount-cgroup" Feb 9 09:44:45.305451 kubelet[2010]: E0209 09:44:45.305428 2010 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="537b2d31-c4ca-4f3f-ace3-1c3bf6e38078" containerName="clean-cilium-state" Feb 9 09:44:45.305483 systemd-logind[1128]: Session 23 logged out. Waiting for processes to exit. Feb 9 09:44:45.306381 kubelet[2010]: E0209 09:44:45.306363 2010 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="537b2d31-c4ca-4f3f-ace3-1c3bf6e38078" containerName="apply-sysctl-overwrites" Feb 9 09:44:45.306546 kubelet[2010]: E0209 09:44:45.306533 2010 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="537b2d31-c4ca-4f3f-ace3-1c3bf6e38078" containerName="mount-bpf-fs" Feb 9 09:44:45.306832 kubelet[2010]: I0209 09:44:45.306816 2010 memory_manager.go:346] "RemoveStaleState removing state" podUID="537b2d31-c4ca-4f3f-ace3-1c3bf6e38078" containerName="cilium-agent" Feb 9 09:44:45.307047 systemd-logind[1128]: Removed session 23. Feb 9 09:44:45.307262 kubelet[2010]: I0209 09:44:45.307247 2010 memory_manager.go:346] "RemoveStaleState removing state" podUID="99b43033-3a61-4c85-a01e-4f12e3a78c40" containerName="cilium-operator" Feb 9 09:44:45.318213 systemd[1]: Created slice kubepods-burstable-pod64139ca3_55d1_479b_a658_e10a082e57d1.slice. Feb 9 09:44:45.354224 sshd[3911]: Accepted publickey for core from 10.0.0.1 port 47858 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:45.355569 sshd[3911]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:45.359348 systemd-logind[1128]: New session 24 of user core. Feb 9 09:44:45.359820 systemd[1]: Started session-24.scope. 
Feb 9 09:44:45.444149 kubelet[2010]: I0209 09:44:45.444106 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-hostproc\") pod \"cilium-d285v\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " pod="kube-system/cilium-d285v" Feb 9 09:44:45.444149 kubelet[2010]: I0209 09:44:45.444151 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64139ca3-55d1-479b-a658-e10a082e57d1-cilium-config-path\") pod \"cilium-d285v\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " pod="kube-system/cilium-d285v" Feb 9 09:44:45.444320 kubelet[2010]: I0209 09:44:45.444175 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/64139ca3-55d1-479b-a658-e10a082e57d1-cilium-ipsec-secrets\") pod \"cilium-d285v\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " pod="kube-system/cilium-d285v" Feb 9 09:44:45.444320 kubelet[2010]: I0209 09:44:45.444231 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-etc-cni-netd\") pod \"cilium-d285v\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " pod="kube-system/cilium-d285v" Feb 9 09:44:45.444320 kubelet[2010]: I0209 09:44:45.444311 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-cni-path\") pod \"cilium-d285v\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " pod="kube-system/cilium-d285v" Feb 9 09:44:45.444396 kubelet[2010]: I0209 09:44:45.444356 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-lib-modules\") pod \"cilium-d285v\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " pod="kube-system/cilium-d285v" Feb 9 09:44:45.444421 kubelet[2010]: I0209 09:44:45.444404 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-cilium-run\") pod \"cilium-d285v\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " pod="kube-system/cilium-d285v" Feb 9 09:44:45.444444 kubelet[2010]: I0209 09:44:45.444435 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/64139ca3-55d1-479b-a658-e10a082e57d1-clustermesh-secrets\") pod \"cilium-d285v\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " pod="kube-system/cilium-d285v" Feb 9 09:44:45.444466 kubelet[2010]: I0209 09:44:45.444455 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-host-proc-sys-net\") pod \"cilium-d285v\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " pod="kube-system/cilium-d285v" Feb 9 09:44:45.444489 kubelet[2010]: I0209 09:44:45.444482 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-cilium-cgroup\") pod \"cilium-d285v\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " pod="kube-system/cilium-d285v" Feb 9 09:44:45.444544 kubelet[2010]: I0209 09:44:45.444526 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-bpf-maps\") pod \"cilium-d285v\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " pod="kube-system/cilium-d285v" Feb 9 09:44:45.444576 kubelet[2010]: I0209 09:44:45.444553 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/64139ca3-55d1-479b-a658-e10a082e57d1-hubble-tls\") pod \"cilium-d285v\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " pod="kube-system/cilium-d285v" Feb 9 09:44:45.444600 kubelet[2010]: I0209 09:44:45.444580 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45fcz\" (UniqueName: \"kubernetes.io/projected/64139ca3-55d1-479b-a658-e10a082e57d1-kube-api-access-45fcz\") pod \"cilium-d285v\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " pod="kube-system/cilium-d285v" Feb 9 09:44:45.444626 kubelet[2010]: I0209 09:44:45.444602 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-xtables-lock\") pod \"cilium-d285v\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " pod="kube-system/cilium-d285v" Feb 9 09:44:45.444626 kubelet[2010]: I0209 09:44:45.444621 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-host-proc-sys-kernel\") pod \"cilium-d285v\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " pod="kube-system/cilium-d285v" Feb 9 09:44:45.483746 sshd[3911]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:45.486117 systemd[1]: Started sshd@24-10.0.0.12:22-10.0.0.1:47862.service. Feb 9 09:44:45.492559 systemd-logind[1128]: Session 24 logged out. Waiting for processes to exit. Feb 9 09:44:45.492678 systemd[1]: sshd@23-10.0.0.12:22-10.0.0.1:47858.service: Deactivated successfully. Feb 9 09:44:45.493375 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 09:44:45.494168 systemd-logind[1128]: Removed session 24. Feb 9 09:44:45.528797 sshd[3924]: Accepted publickey for core from 10.0.0.1 port 47862 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:45.530108 sshd[3924]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:45.533082 systemd-logind[1128]: New session 25 of user core. Feb 9 09:44:45.533996 systemd[1]: Started session-25.scope. 
Feb 9 09:44:45.622760 kubelet[2010]: E0209 09:44:45.622671 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:45.623677 env[1142]: time="2024-02-09T09:44:45.623277576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d285v,Uid:64139ca3-55d1-479b-a658-e10a082e57d1,Namespace:kube-system,Attempt:0,}" Feb 9 09:44:45.643988 env[1142]: time="2024-02-09T09:44:45.640136427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:44:45.643988 env[1142]: time="2024-02-09T09:44:45.640187507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:44:45.643988 env[1142]: time="2024-02-09T09:44:45.640199267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:44:45.643988 env[1142]: time="2024-02-09T09:44:45.640711787Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/487b9b40a92ed3979bf61a5a7c2cfdfb64e8a8385531d0d72612c1727656a30b pid=3946 runtime=io.containerd.runc.v2 Feb 9 09:44:45.656676 systemd[1]: Started cri-containerd-487b9b40a92ed3979bf61a5a7c2cfdfb64e8a8385531d0d72612c1727656a30b.scope. Feb 9 09:44:45.695083 env[1142]: time="2024-02-09T09:44:45.695042023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d285v,Uid:64139ca3-55d1-479b-a658-e10a082e57d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"487b9b40a92ed3979bf61a5a7c2cfdfb64e8a8385531d0d72612c1727656a30b\"" Feb 9 09:44:45.696138 kubelet[2010]: E0209 09:44:45.695669 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:45.698863 env[1142]: time="2024-02-09T09:44:45.698827825Z" level=info msg="CreateContainer within sandbox \"487b9b40a92ed3979bf61a5a7c2cfdfb64e8a8385531d0d72612c1727656a30b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:44:45.708759 env[1142]: time="2024-02-09T09:44:45.708714232Z" level=info msg="CreateContainer within sandbox \"487b9b40a92ed3979bf61a5a7c2cfdfb64e8a8385531d0d72612c1727656a30b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6320a8745bc4c04211fb7115ad983c66bf7b38048eda230705860c7d42c8ffd8\"" Feb 9 09:44:45.709382 env[1142]: time="2024-02-09T09:44:45.709355512Z" level=info msg="StartContainer for \"6320a8745bc4c04211fb7115ad983c66bf7b38048eda230705860c7d42c8ffd8\"" Feb 9 09:44:45.729203 systemd[1]: Started cri-containerd-6320a8745bc4c04211fb7115ad983c66bf7b38048eda230705860c7d42c8ffd8.scope. Feb 9 09:44:45.747425 systemd[1]: cri-containerd-6320a8745bc4c04211fb7115ad983c66bf7b38048eda230705860c7d42c8ffd8.scope: Deactivated successfully. 
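The path= field below ties the new sandbox to its shim state under containerd's runtime-v2 task directory, and the init.pid error a few lines further down refers to the same layout. A small helper for locating that directory, assuming the k8s.io containerd namespace seen throughout these logs:

    def task_dir(container_id: str, namespace: str = "k8s.io") -> str:
        # State directory for a runtime-v2 task; the shim's init.pid
        # (referenced in the failure below) lives here.
        return (f"/run/containerd/io.containerd.runtime.v2.task/"
                f"{namespace}/{container_id}")

    assert task_dir(
        "487b9b40a92ed3979bf61a5a7c2cfdfb64e8a8385531d0d72612c1727656a30b"
    ) == ("/run/containerd/io.containerd.runtime.v2.task/k8s.io/"
          "487b9b40a92ed3979bf61a5a7c2cfdfb64e8a8385531d0d72612c1727656a30b")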
Feb 9 09:44:45.767360 env[1142]: time="2024-02-09T09:44:45.767305230Z" level=info msg="shim disconnected" id=6320a8745bc4c04211fb7115ad983c66bf7b38048eda230705860c7d42c8ffd8 Feb 9 09:44:45.767360 env[1142]: time="2024-02-09T09:44:45.767359750Z" level=warning msg="cleaning up after shim disconnected" id=6320a8745bc4c04211fb7115ad983c66bf7b38048eda230705860c7d42c8ffd8 namespace=k8s.io Feb 9 09:44:45.767580 env[1142]: time="2024-02-09T09:44:45.767369630Z" level=info msg="cleaning up dead shim" Feb 9 09:44:45.774669 env[1142]: time="2024-02-09T09:44:45.774610395Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:44:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4005 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T09:44:45Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6320a8745bc4c04211fb7115ad983c66bf7b38048eda230705860c7d42c8ffd8/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 09:44:45.774962 env[1142]: time="2024-02-09T09:44:45.774866435Z" level=error msg="copy shim log" error="read /proc/self/fd/27: file already closed" Feb 9 09:44:45.775129 env[1142]: time="2024-02-09T09:44:45.775088355Z" level=error msg="Failed to pipe stdout of container \"6320a8745bc4c04211fb7115ad983c66bf7b38048eda230705860c7d42c8ffd8\"" error="reading from a closed fifo" Feb 9 09:44:45.775184 env[1142]: time="2024-02-09T09:44:45.775163835Z" level=error msg="Failed to pipe stderr of container \"6320a8745bc4c04211fb7115ad983c66bf7b38048eda230705860c7d42c8ffd8\"" error="reading from a closed fifo" Feb 9 09:44:45.776873 env[1142]: time="2024-02-09T09:44:45.776818637Z" level=error msg="StartContainer for \"6320a8745bc4c04211fb7115ad983c66bf7b38048eda230705860c7d42c8ffd8\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 09:44:45.777050 kubelet[2010]: E0209 09:44:45.777032 2010 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6320a8745bc4c04211fb7115ad983c66bf7b38048eda230705860c7d42c8ffd8" Feb 9 09:44:45.777328 kubelet[2010]: E0209 09:44:45.777312 2010 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 09:44:45.777328 kubelet[2010]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 09:44:45.777328 kubelet[2010]: rm /hostbin/cilium-mount Feb 9 09:44:45.777328 kubelet[2010]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-45fcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-d285v_kube-system(64139ca3-55d1-479b-a658-e10a082e57d1): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 09:44:45.777783 kubelet[2010]: E0209 09:44:45.777355 2010 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-d285v" podUID=64139ca3-55d1-479b-a658-e10a082e57d1 Feb 9 09:44:46.464468 kubelet[2010]: E0209 09:44:46.464440 2010 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 09:44:46.608357 env[1142]: time="2024-02-09T09:44:46.608313801Z" level=info msg="StopPodSandbox for \"487b9b40a92ed3979bf61a5a7c2cfdfb64e8a8385531d0d72612c1727656a30b\"" Feb 9 09:44:46.608516 env[1142]: time="2024-02-09T09:44:46.608373401Z" level=info msg="Container to stop \"6320a8745bc4c04211fb7115ad983c66bf7b38048eda230705860c7d42c8ffd8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:44:46.609763 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-487b9b40a92ed3979bf61a5a7c2cfdfb64e8a8385531d0d72612c1727656a30b-shm.mount: Deactivated successfully. Feb 9 09:44:46.617605 systemd[1]: cri-containerd-487b9b40a92ed3979bf61a5a7c2cfdfb64e8a8385531d0d72612c1727656a30b.scope: Deactivated successfully. Feb 9 09:44:46.639379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-487b9b40a92ed3979bf61a5a7c2cfdfb64e8a8385531d0d72612c1727656a30b-rootfs.mount: Deactivated successfully. 
Feb 9 09:44:46.644636 env[1142]: time="2024-02-09T09:44:46.644591655Z" level=info msg="shim disconnected" id=487b9b40a92ed3979bf61a5a7c2cfdfb64e8a8385531d0d72612c1727656a30b Feb 9 09:44:46.645030 env[1142]: time="2024-02-09T09:44:46.645007895Z" level=warning msg="cleaning up after shim disconnected" id=487b9b40a92ed3979bf61a5a7c2cfdfb64e8a8385531d0d72612c1727656a30b namespace=k8s.io Feb 9 09:44:46.645142 env[1142]: time="2024-02-09T09:44:46.645125576Z" level=info msg="cleaning up dead shim" Feb 9 09:44:46.652324 env[1142]: time="2024-02-09T09:44:46.652274746Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:44:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4037 runtime=io.containerd.runc.v2\n" Feb 9 09:44:46.652745 env[1142]: time="2024-02-09T09:44:46.652705667Z" level=info msg="TearDown network for sandbox \"487b9b40a92ed3979bf61a5a7c2cfdfb64e8a8385531d0d72612c1727656a30b\" successfully" Feb 9 09:44:46.652861 env[1142]: time="2024-02-09T09:44:46.652839107Z" level=info msg="StopPodSandbox for \"487b9b40a92ed3979bf61a5a7c2cfdfb64e8a8385531d0d72612c1727656a30b\" returns successfully" Feb 9 09:44:46.853364 kubelet[2010]: I0209 09:44:46.853317 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-cilium-run\") pod \"64139ca3-55d1-479b-a658-e10a082e57d1\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " Feb 9 09:44:46.853364 kubelet[2010]: I0209 09:44:46.853366 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45fcz\" (UniqueName: \"kubernetes.io/projected/64139ca3-55d1-479b-a658-e10a082e57d1-kube-api-access-45fcz\") pod \"64139ca3-55d1-479b-a658-e10a082e57d1\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " Feb 9 09:44:46.853580 kubelet[2010]: I0209 09:44:46.853395 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-host-proc-sys-kernel\") pod \"64139ca3-55d1-479b-a658-e10a082e57d1\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " Feb 9 09:44:46.853580 kubelet[2010]: I0209 09:44:46.853416 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-lib-modules\") pod \"64139ca3-55d1-479b-a658-e10a082e57d1\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " Feb 9 09:44:46.853580 kubelet[2010]: I0209 09:44:46.853441 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/64139ca3-55d1-479b-a658-e10a082e57d1-clustermesh-secrets\") pod \"64139ca3-55d1-479b-a658-e10a082e57d1\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " Feb 9 09:44:46.853580 kubelet[2010]: I0209 09:44:46.853466 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-host-proc-sys-net\") pod \"64139ca3-55d1-479b-a658-e10a082e57d1\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " Feb 9 09:44:46.853580 kubelet[2010]: I0209 09:44:46.853488 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/64139ca3-55d1-479b-a658-e10a082e57d1-cilium-ipsec-secrets\") pod 
\"64139ca3-55d1-479b-a658-e10a082e57d1\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " Feb 9 09:44:46.853580 kubelet[2010]: I0209 09:44:46.853486 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "64139ca3-55d1-479b-a658-e10a082e57d1" (UID: "64139ca3-55d1-479b-a658-e10a082e57d1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:46.853726 kubelet[2010]: I0209 09:44:46.853505 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-cni-path\") pod \"64139ca3-55d1-479b-a658-e10a082e57d1\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " Feb 9 09:44:46.853726 kubelet[2010]: I0209 09:44:46.853537 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-cni-path" (OuterVolumeSpecName: "cni-path") pod "64139ca3-55d1-479b-a658-e10a082e57d1" (UID: "64139ca3-55d1-479b-a658-e10a082e57d1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:46.853726 kubelet[2010]: I0209 09:44:46.853564 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "64139ca3-55d1-479b-a658-e10a082e57d1" (UID: "64139ca3-55d1-479b-a658-e10a082e57d1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:46.853726 kubelet[2010]: I0209 09:44:46.853602 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "64139ca3-55d1-479b-a658-e10a082e57d1" (UID: "64139ca3-55d1-479b-a658-e10a082e57d1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:46.853726 kubelet[2010]: I0209 09:44:46.853629 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "64139ca3-55d1-479b-a658-e10a082e57d1" (UID: "64139ca3-55d1-479b-a658-e10a082e57d1"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:46.853923 kubelet[2010]: I0209 09:44:46.853790 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-bpf-maps\") pod \"64139ca3-55d1-479b-a658-e10a082e57d1\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " Feb 9 09:44:46.853923 kubelet[2010]: I0209 09:44:46.853816 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-etc-cni-netd\") pod \"64139ca3-55d1-479b-a658-e10a082e57d1\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " Feb 9 09:44:46.853923 kubelet[2010]: I0209 09:44:46.853833 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-hostproc\") pod \"64139ca3-55d1-479b-a658-e10a082e57d1\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " Feb 9 09:44:46.853923 kubelet[2010]: I0209 09:44:46.853857 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64139ca3-55d1-479b-a658-e10a082e57d1-cilium-config-path\") pod \"64139ca3-55d1-479b-a658-e10a082e57d1\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " Feb 9 09:44:46.853923 kubelet[2010]: I0209 09:44:46.853875 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-cilium-cgroup\") pod \"64139ca3-55d1-479b-a658-e10a082e57d1\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " Feb 9 09:44:46.853923 kubelet[2010]: I0209 09:44:46.853874 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "64139ca3-55d1-479b-a658-e10a082e57d1" (UID: "64139ca3-55d1-479b-a658-e10a082e57d1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:46.854070 kubelet[2010]: I0209 09:44:46.853892 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-xtables-lock\") pod \"64139ca3-55d1-479b-a658-e10a082e57d1\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " Feb 9 09:44:46.854070 kubelet[2010]: I0209 09:44:46.853907 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "64139ca3-55d1-479b-a658-e10a082e57d1" (UID: "64139ca3-55d1-479b-a658-e10a082e57d1"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:46.854070 kubelet[2010]: I0209 09:44:46.853914 2010 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/64139ca3-55d1-479b-a658-e10a082e57d1-hubble-tls\") pod \"64139ca3-55d1-479b-a658-e10a082e57d1\" (UID: \"64139ca3-55d1-479b-a658-e10a082e57d1\") " Feb 9 09:44:46.854070 kubelet[2010]: I0209 09:44:46.853945 2010 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:46.854070 kubelet[2010]: I0209 09:44:46.853956 2010 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:46.854070 kubelet[2010]: I0209 09:44:46.853960 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "64139ca3-55d1-479b-a658-e10a082e57d1" (UID: "64139ca3-55d1-479b-a658-e10a082e57d1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:46.854206 kubelet[2010]: I0209 09:44:46.853966 2010 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:46.854206 kubelet[2010]: I0209 09:44:46.853990 2010 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:46.854206 kubelet[2010]: I0209 09:44:46.854002 2010 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:46.854206 kubelet[2010]: I0209 09:44:46.854020 2010 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:46.854206 kubelet[2010]: I0209 09:44:46.854030 2010 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:46.854206 kubelet[2010]: I0209 09:44:46.854151 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-hostproc" (OuterVolumeSpecName: "hostproc") pod "64139ca3-55d1-479b-a658-e10a082e57d1" (UID: "64139ca3-55d1-479b-a658-e10a082e57d1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:46.854206 kubelet[2010]: I0209 09:44:46.854183 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "64139ca3-55d1-479b-a658-e10a082e57d1" (UID: "64139ca3-55d1-479b-a658-e10a082e57d1"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:46.854419 kubelet[2010]: W0209 09:44:46.854391 2010 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/64139ca3-55d1-479b-a658-e10a082e57d1/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:44:46.857499 systemd[1]: var-lib-kubelet-pods-64139ca3\x2d55d1\x2d479b\x2da658\x2de10a082e57d1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 09:44:46.857604 systemd[1]: var-lib-kubelet-pods-64139ca3\x2d55d1\x2d479b\x2da658\x2de10a082e57d1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 09:44:46.858025 kubelet[2010]: I0209 09:44:46.856160 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64139ca3-55d1-479b-a658-e10a082e57d1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "64139ca3-55d1-479b-a658-e10a082e57d1" (UID: "64139ca3-55d1-479b-a658-e10a082e57d1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:44:46.858644 kubelet[2010]: I0209 09:44:46.858611 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64139ca3-55d1-479b-a658-e10a082e57d1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "64139ca3-55d1-479b-a658-e10a082e57d1" (UID: "64139ca3-55d1-479b-a658-e10a082e57d1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:44:46.858718 kubelet[2010]: I0209 09:44:46.858680 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64139ca3-55d1-479b-a658-e10a082e57d1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "64139ca3-55d1-479b-a658-e10a082e57d1" (UID: "64139ca3-55d1-479b-a658-e10a082e57d1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:44:46.858785 kubelet[2010]: I0209 09:44:46.858622 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64139ca3-55d1-479b-a658-e10a082e57d1-kube-api-access-45fcz" (OuterVolumeSpecName: "kube-api-access-45fcz") pod "64139ca3-55d1-479b-a658-e10a082e57d1" (UID: "64139ca3-55d1-479b-a658-e10a082e57d1"). InnerVolumeSpecName "kube-api-access-45fcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:44:46.859073 kubelet[2010]: I0209 09:44:46.858931 2010 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64139ca3-55d1-479b-a658-e10a082e57d1-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "64139ca3-55d1-479b-a658-e10a082e57d1" (UID: "64139ca3-55d1-479b-a658-e10a082e57d1"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:44:46.859310 systemd[1]: var-lib-kubelet-pods-64139ca3\x2d55d1\x2d479b\x2da658\x2de10a082e57d1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d45fcz.mount: Deactivated successfully. Feb 9 09:44:46.859396 systemd[1]: var-lib-kubelet-pods-64139ca3\x2d55d1\x2d479b\x2da658\x2de10a082e57d1-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Feb 9 09:44:46.954822 kubelet[2010]: I0209 09:44:46.954789 2010 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/64139ca3-55d1-479b-a658-e10a082e57d1-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:46.955009 kubelet[2010]: I0209 09:44:46.954997 2010 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/64139ca3-55d1-479b-a658-e10a082e57d1-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:46.955087 kubelet[2010]: I0209 09:44:46.955077 2010 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:46.955147 kubelet[2010]: I0209 09:44:46.955138 2010 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64139ca3-55d1-479b-a658-e10a082e57d1-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:46.955200 kubelet[2010]: I0209 09:44:46.955191 2010 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/64139ca3-55d1-479b-a658-e10a082e57d1-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:46.955254 kubelet[2010]: I0209 09:44:46.955244 2010 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:46.955336 kubelet[2010]: I0209 09:44:46.955326 2010 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64139ca3-55d1-479b-a658-e10a082e57d1-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:46.955405 kubelet[2010]: I0209 09:44:46.955396 2010 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-45fcz\" (UniqueName: \"kubernetes.io/projected/64139ca3-55d1-479b-a658-e10a082e57d1-kube-api-access-45fcz\") on node \"localhost\" DevicePath \"\"" Feb 9 09:44:47.412081 systemd[1]: Removed slice kubepods-burstable-pod64139ca3_55d1_479b_a658_e10a082e57d1.slice. Feb 9 09:44:47.611451 kubelet[2010]: I0209 09:44:47.611421 2010 scope.go:115] "RemoveContainer" containerID="6320a8745bc4c04211fb7115ad983c66bf7b38048eda230705860c7d42c8ffd8" Feb 9 09:44:47.612539 env[1142]: time="2024-02-09T09:44:47.612499165Z" level=info msg="RemoveContainer for \"6320a8745bc4c04211fb7115ad983c66bf7b38048eda230705860c7d42c8ffd8\"" Feb 9 09:44:47.615893 env[1142]: time="2024-02-09T09:44:47.615861013Z" level=info msg="RemoveContainer for \"6320a8745bc4c04211fb7115ad983c66bf7b38048eda230705860c7d42c8ffd8\" returns successfully" Feb 9 09:44:47.635691 kubelet[2010]: I0209 09:44:47.635666 2010 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:44:47.635807 kubelet[2010]: E0209 09:44:47.635713 2010 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64139ca3-55d1-479b-a658-e10a082e57d1" containerName="mount-cgroup" Feb 9 09:44:47.635807 kubelet[2010]: I0209 09:44:47.635741 2010 memory_manager.go:346] "RemoveStaleState removing state" podUID="64139ca3-55d1-479b-a658-e10a082e57d1" containerName="mount-cgroup" Feb 9 09:44:47.640821 systemd[1]: Created slice kubepods-burstable-pod196aed5f_ce23_46ce_ab7d_bb04c81b493f.slice. 
Feb 9 09:44:47.659366 kubelet[2010]: I0209 09:44:47.659332 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/196aed5f-ce23-46ce-ab7d-bb04c81b493f-bpf-maps\") pod \"cilium-7mzkw\" (UID: \"196aed5f-ce23-46ce-ab7d-bb04c81b493f\") " pod="kube-system/cilium-7mzkw" Feb 9 09:44:47.659572 kubelet[2010]: I0209 09:44:47.659559 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/196aed5f-ce23-46ce-ab7d-bb04c81b493f-cilium-ipsec-secrets\") pod \"cilium-7mzkw\" (UID: \"196aed5f-ce23-46ce-ab7d-bb04c81b493f\") " pod="kube-system/cilium-7mzkw" Feb 9 09:44:47.659684 kubelet[2010]: I0209 09:44:47.659673 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/196aed5f-ce23-46ce-ab7d-bb04c81b493f-host-proc-sys-net\") pod \"cilium-7mzkw\" (UID: \"196aed5f-ce23-46ce-ab7d-bb04c81b493f\") " pod="kube-system/cilium-7mzkw" Feb 9 09:44:47.659783 kubelet[2010]: I0209 09:44:47.659766 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/196aed5f-ce23-46ce-ab7d-bb04c81b493f-hubble-tls\") pod \"cilium-7mzkw\" (UID: \"196aed5f-ce23-46ce-ab7d-bb04c81b493f\") " pod="kube-system/cilium-7mzkw" Feb 9 09:44:47.659882 kubelet[2010]: I0209 09:44:47.659871 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/196aed5f-ce23-46ce-ab7d-bb04c81b493f-cilium-run\") pod \"cilium-7mzkw\" (UID: \"196aed5f-ce23-46ce-ab7d-bb04c81b493f\") " pod="kube-system/cilium-7mzkw" Feb 9 09:44:47.659965 kubelet[2010]: I0209 09:44:47.659955 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/196aed5f-ce23-46ce-ab7d-bb04c81b493f-xtables-lock\") pod \"cilium-7mzkw\" (UID: \"196aed5f-ce23-46ce-ab7d-bb04c81b493f\") " pod="kube-system/cilium-7mzkw" Feb 9 09:44:47.660061 kubelet[2010]: I0209 09:44:47.660050 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/196aed5f-ce23-46ce-ab7d-bb04c81b493f-cilium-config-path\") pod \"cilium-7mzkw\" (UID: \"196aed5f-ce23-46ce-ab7d-bb04c81b493f\") " pod="kube-system/cilium-7mzkw" Feb 9 09:44:47.660155 kubelet[2010]: I0209 09:44:47.660145 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/196aed5f-ce23-46ce-ab7d-bb04c81b493f-etc-cni-netd\") pod \"cilium-7mzkw\" (UID: \"196aed5f-ce23-46ce-ab7d-bb04c81b493f\") " pod="kube-system/cilium-7mzkw" Feb 9 09:44:47.660238 kubelet[2010]: I0209 09:44:47.660228 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/196aed5f-ce23-46ce-ab7d-bb04c81b493f-clustermesh-secrets\") pod \"cilium-7mzkw\" (UID: \"196aed5f-ce23-46ce-ab7d-bb04c81b493f\") " pod="kube-system/cilium-7mzkw" Feb 9 09:44:47.660330 kubelet[2010]: I0209 09:44:47.660319 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc2jw\" 
(UniqueName: \"kubernetes.io/projected/196aed5f-ce23-46ce-ab7d-bb04c81b493f-kube-api-access-sc2jw\") pod \"cilium-7mzkw\" (UID: \"196aed5f-ce23-46ce-ab7d-bb04c81b493f\") " pod="kube-system/cilium-7mzkw" Feb 9 09:44:47.660425 kubelet[2010]: I0209 09:44:47.660415 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/196aed5f-ce23-46ce-ab7d-bb04c81b493f-hostproc\") pod \"cilium-7mzkw\" (UID: \"196aed5f-ce23-46ce-ab7d-bb04c81b493f\") " pod="kube-system/cilium-7mzkw" Feb 9 09:44:47.660526 kubelet[2010]: I0209 09:44:47.660515 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/196aed5f-ce23-46ce-ab7d-bb04c81b493f-cni-path\") pod \"cilium-7mzkw\" (UID: \"196aed5f-ce23-46ce-ab7d-bb04c81b493f\") " pod="kube-system/cilium-7mzkw" Feb 9 09:44:47.660624 kubelet[2010]: I0209 09:44:47.660614 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/196aed5f-ce23-46ce-ab7d-bb04c81b493f-lib-modules\") pod \"cilium-7mzkw\" (UID: \"196aed5f-ce23-46ce-ab7d-bb04c81b493f\") " pod="kube-system/cilium-7mzkw" Feb 9 09:44:47.660758 kubelet[2010]: I0209 09:44:47.660732 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/196aed5f-ce23-46ce-ab7d-bb04c81b493f-host-proc-sys-kernel\") pod \"cilium-7mzkw\" (UID: \"196aed5f-ce23-46ce-ab7d-bb04c81b493f\") " pod="kube-system/cilium-7mzkw" Feb 9 09:44:47.660890 kubelet[2010]: I0209 09:44:47.660861 2010 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/196aed5f-ce23-46ce-ab7d-bb04c81b493f-cilium-cgroup\") pod \"cilium-7mzkw\" (UID: \"196aed5f-ce23-46ce-ab7d-bb04c81b493f\") " pod="kube-system/cilium-7mzkw" Feb 9 09:44:47.943276 kubelet[2010]: E0209 09:44:47.943237 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:47.943800 env[1142]: time="2024-02-09T09:44:47.943751879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7mzkw,Uid:196aed5f-ce23-46ce-ab7d-bb04c81b493f,Namespace:kube-system,Attempt:0,}" Feb 9 09:44:47.959429 env[1142]: time="2024-02-09T09:44:47.959357795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:44:47.959429 env[1142]: time="2024-02-09T09:44:47.959400715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:44:47.959429 env[1142]: time="2024-02-09T09:44:47.959411275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:44:47.959619 env[1142]: time="2024-02-09T09:44:47.959577675Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf749d9d5a40b6c9ca01a6455860cb0d3bfa18d596f123a7f7f562528cb56a39 pid=4068 runtime=io.containerd.runc.v2 Feb 9 09:44:47.973868 systemd[1]: Started cri-containerd-cf749d9d5a40b6c9ca01a6455860cb0d3bfa18d596f123a7f7f562528cb56a39.scope. 
Feb 9 09:44:48.008406 env[1142]: time="2024-02-09T09:44:48.008353832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7mzkw,Uid:196aed5f-ce23-46ce-ab7d-bb04c81b493f,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf749d9d5a40b6c9ca01a6455860cb0d3bfa18d596f123a7f7f562528cb56a39\"" Feb 9 09:44:48.009070 kubelet[2010]: E0209 09:44:48.009052 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:48.010898 env[1142]: time="2024-02-09T09:44:48.010866520Z" level=info msg="CreateContainer within sandbox \"cf749d9d5a40b6c9ca01a6455860cb0d3bfa18d596f123a7f7f562528cb56a39\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:44:48.021395 env[1142]: time="2024-02-09T09:44:48.021336752Z" level=info msg="CreateContainer within sandbox \"cf749d9d5a40b6c9ca01a6455860cb0d3bfa18d596f123a7f7f562528cb56a39\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"add5b87465a67a709f42e066dfeb4cd2d62e67d0f9603a2da33bc3fd185c9861\"" Feb 9 09:44:48.021923 env[1142]: time="2024-02-09T09:44:48.021882273Z" level=info msg="StartContainer for \"add5b87465a67a709f42e066dfeb4cd2d62e67d0f9603a2da33bc3fd185c9861\"" Feb 9 09:44:48.036314 systemd[1]: Started cri-containerd-add5b87465a67a709f42e066dfeb4cd2d62e67d0f9603a2da33bc3fd185c9861.scope. Feb 9 09:44:48.070584 env[1142]: time="2024-02-09T09:44:48.070534582Z" level=info msg="StartContainer for \"add5b87465a67a709f42e066dfeb4cd2d62e67d0f9603a2da33bc3fd185c9861\" returns successfully" Feb 9 09:44:48.079828 systemd[1]: cri-containerd-add5b87465a67a709f42e066dfeb4cd2d62e67d0f9603a2da33bc3fd185c9861.scope: Deactivated successfully. Feb 9 09:44:48.103077 env[1142]: time="2024-02-09T09:44:48.103031481Z" level=info msg="shim disconnected" id=add5b87465a67a709f42e066dfeb4cd2d62e67d0f9603a2da33bc3fd185c9861 Feb 9 09:44:48.103077 env[1142]: time="2024-02-09T09:44:48.103080601Z" level=warning msg="cleaning up after shim disconnected" id=add5b87465a67a709f42e066dfeb4cd2d62e67d0f9603a2da33bc3fd185c9861 namespace=k8s.io Feb 9 09:44:48.103303 env[1142]: time="2024-02-09T09:44:48.103090801Z" level=info msg="cleaning up dead shim" Feb 9 09:44:48.110000 env[1142]: time="2024-02-09T09:44:48.109961182Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:44:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4152 runtime=io.containerd.runc.v2\n" Feb 9 09:44:48.407223 kubelet[2010]: E0209 09:44:48.407082 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:48.615516 kubelet[2010]: E0209 09:44:48.615351 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:48.619398 env[1142]: time="2024-02-09T09:44:48.617234489Z" level=info msg="CreateContainer within sandbox \"cf749d9d5a40b6c9ca01a6455860cb0d3bfa18d596f123a7f7f562528cb56a39\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 09:44:48.628970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount549058573.mount: Deactivated successfully. 
Feb 9 09:44:48.634674 env[1142]: time="2024-02-09T09:44:48.634627182Z" level=info msg="CreateContainer within sandbox \"cf749d9d5a40b6c9ca01a6455860cb0d3bfa18d596f123a7f7f562528cb56a39\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3f442c13dc555d5b47101e1b60ef4c8080889dfae4aa263e983f0bfaec84f0e6\"" Feb 9 09:44:48.635164 env[1142]: time="2024-02-09T09:44:48.635140384Z" level=info msg="StartContainer for \"3f442c13dc555d5b47101e1b60ef4c8080889dfae4aa263e983f0bfaec84f0e6\"" Feb 9 09:44:48.651978 systemd[1]: Started cri-containerd-3f442c13dc555d5b47101e1b60ef4c8080889dfae4aa263e983f0bfaec84f0e6.scope. Feb 9 09:44:48.684541 env[1142]: time="2024-02-09T09:44:48.684443214Z" level=info msg="StartContainer for \"3f442c13dc555d5b47101e1b60ef4c8080889dfae4aa263e983f0bfaec84f0e6\" returns successfully" Feb 9 09:44:48.687738 systemd[1]: cri-containerd-3f442c13dc555d5b47101e1b60ef4c8080889dfae4aa263e983f0bfaec84f0e6.scope: Deactivated successfully. Feb 9 09:44:48.708793 env[1142]: time="2024-02-09T09:44:48.708744089Z" level=info msg="shim disconnected" id=3f442c13dc555d5b47101e1b60ef4c8080889dfae4aa263e983f0bfaec84f0e6 Feb 9 09:44:48.708976 env[1142]: time="2024-02-09T09:44:48.708793049Z" level=warning msg="cleaning up after shim disconnected" id=3f442c13dc555d5b47101e1b60ef4c8080889dfae4aa263e983f0bfaec84f0e6 namespace=k8s.io Feb 9 09:44:48.708976 env[1142]: time="2024-02-09T09:44:48.708804329Z" level=info msg="cleaning up dead shim" Feb 9 09:44:48.715880 env[1142]: time="2024-02-09T09:44:48.715842430Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:44:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4214 runtime=io.containerd.runc.v2\n" Feb 9 09:44:48.872604 kubelet[2010]: W0209 09:44:48.872565 2010 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64139ca3_55d1_479b_a658_e10a082e57d1.slice/cri-containerd-6320a8745bc4c04211fb7115ad983c66bf7b38048eda230705860c7d42c8ffd8.scope WatchSource:0}: container "6320a8745bc4c04211fb7115ad983c66bf7b38048eda230705860c7d42c8ffd8" in namespace "k8s.io": not found Feb 9 09:44:49.409939 kubelet[2010]: I0209 09:44:49.409910 2010 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=64139ca3-55d1-479b-a658-e10a082e57d1 path="/var/lib/kubelet/pods/64139ca3-55d1-479b-a658-e10a082e57d1/volumes" Feb 9 09:44:49.550690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f442c13dc555d5b47101e1b60ef4c8080889dfae4aa263e983f0bfaec84f0e6-rootfs.mount: Deactivated successfully. Feb 9 09:44:49.618439 kubelet[2010]: E0209 09:44:49.618413 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:49.621540 env[1142]: time="2024-02-09T09:44:49.621481577Z" level=info msg="CreateContainer within sandbox \"cf749d9d5a40b6c9ca01a6455860cb0d3bfa18d596f123a7f7f562528cb56a39\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 09:44:49.631601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3962190455.mount: Deactivated successfully. Feb 9 09:44:49.635640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3664140335.mount: Deactivated successfully. 
Feb 9 09:44:49.639594 env[1142]: time="2024-02-09T09:44:49.639541925Z" level=info msg="CreateContainer within sandbox \"cf749d9d5a40b6c9ca01a6455860cb0d3bfa18d596f123a7f7f562528cb56a39\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e6759f0a21b1ff4175a199db6c5ec79638e945f03603d240ee1e0ac01d503205\"" Feb 9 09:44:49.640009 env[1142]: time="2024-02-09T09:44:49.639979487Z" level=info msg="StartContainer for \"e6759f0a21b1ff4175a199db6c5ec79638e945f03603d240ee1e0ac01d503205\"" Feb 9 09:44:49.657702 systemd[1]: Started cri-containerd-e6759f0a21b1ff4175a199db6c5ec79638e945f03603d240ee1e0ac01d503205.scope. Feb 9 09:44:49.691784 systemd[1]: cri-containerd-e6759f0a21b1ff4175a199db6c5ec79638e945f03603d240ee1e0ac01d503205.scope: Deactivated successfully. Feb 9 09:44:49.695763 env[1142]: time="2024-02-09T09:44:49.695714339Z" level=info msg="StartContainer for \"e6759f0a21b1ff4175a199db6c5ec79638e945f03603d240ee1e0ac01d503205\" returns successfully" Feb 9 09:44:49.722268 env[1142]: time="2024-02-09T09:44:49.722221639Z" level=info msg="shim disconnected" id=e6759f0a21b1ff4175a199db6c5ec79638e945f03603d240ee1e0ac01d503205 Feb 9 09:44:49.722517 env[1142]: time="2024-02-09T09:44:49.722495921Z" level=warning msg="cleaning up after shim disconnected" id=e6759f0a21b1ff4175a199db6c5ec79638e945f03603d240ee1e0ac01d503205 namespace=k8s.io Feb 9 09:44:49.722594 env[1142]: time="2024-02-09T09:44:49.722581041Z" level=info msg="cleaning up dead shim" Feb 9 09:44:49.729502 env[1142]: time="2024-02-09T09:44:49.729469307Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:44:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4273 runtime=io.containerd.runc.v2\n" Feb 9 09:44:50.622071 kubelet[2010]: E0209 09:44:50.622045 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:50.625286 env[1142]: time="2024-02-09T09:44:50.624614479Z" level=info msg="CreateContainer within sandbox \"cf749d9d5a40b6c9ca01a6455860cb0d3bfa18d596f123a7f7f562528cb56a39\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 09:44:50.636080 env[1142]: time="2024-02-09T09:44:50.635368087Z" level=info msg="CreateContainer within sandbox \"cf749d9d5a40b6c9ca01a6455860cb0d3bfa18d596f123a7f7f562528cb56a39\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6ade4205e8ddc94cb59c571c0a4f8b8b46c362597b835cd8ad32b22daa22fd14\"" Feb 9 09:44:50.641124 env[1142]: time="2024-02-09T09:44:50.641083593Z" level=info msg="StartContainer for \"6ade4205e8ddc94cb59c571c0a4f8b8b46c362597b835cd8ad32b22daa22fd14\"" Feb 9 09:44:50.661458 systemd[1]: Started cri-containerd-6ade4205e8ddc94cb59c571c0a4f8b8b46c362597b835cd8ad32b22daa22fd14.scope. Feb 9 09:44:50.689386 systemd[1]: cri-containerd-6ade4205e8ddc94cb59c571c0a4f8b8b46c362597b835cd8ad32b22daa22fd14.scope: Deactivated successfully. 
Feb 9 09:44:50.690907 env[1142]: time="2024-02-09T09:44:50.690836138Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod196aed5f_ce23_46ce_ab7d_bb04c81b493f.slice/cri-containerd-6ade4205e8ddc94cb59c571c0a4f8b8b46c362597b835cd8ad32b22daa22fd14.scope/memory.events\": no such file or directory" Feb 9 09:44:50.692569 env[1142]: time="2024-02-09T09:44:50.692528546Z" level=info msg="StartContainer for \"6ade4205e8ddc94cb59c571c0a4f8b8b46c362597b835cd8ad32b22daa22fd14\" returns successfully" Feb 9 09:44:50.712565 env[1142]: time="2024-02-09T09:44:50.712516956Z" level=info msg="shim disconnected" id=6ade4205e8ddc94cb59c571c0a4f8b8b46c362597b835cd8ad32b22daa22fd14 Feb 9 09:44:50.712841 env[1142]: time="2024-02-09T09:44:50.712819958Z" level=warning msg="cleaning up after shim disconnected" id=6ade4205e8ddc94cb59c571c0a4f8b8b46c362597b835cd8ad32b22daa22fd14 namespace=k8s.io Feb 9 09:44:50.712914 env[1142]: time="2024-02-09T09:44:50.712899558Z" level=info msg="cleaning up dead shim" Feb 9 09:44:50.720454 env[1142]: time="2024-02-09T09:44:50.720414992Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:44:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4327 runtime=io.containerd.runc.v2\n" Feb 9 09:44:51.407117 kubelet[2010]: E0209 09:44:51.407080 2010 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-95kp5" podUID=5bea2de5-c3e7-41b4-80fb-523511ad7bd9 Feb 9 09:44:51.465763 kubelet[2010]: E0209 09:44:51.465735 2010 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 09:44:51.550831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ade4205e8ddc94cb59c571c0a4f8b8b46c362597b835cd8ad32b22daa22fd14-rootfs.mount: Deactivated successfully. Feb 9 09:44:51.626234 kubelet[2010]: E0209 09:44:51.626196 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:51.629557 env[1142]: time="2024-02-09T09:44:51.628367539Z" level=info msg="CreateContainer within sandbox \"cf749d9d5a40b6c9ca01a6455860cb0d3bfa18d596f123a7f7f562528cb56a39\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 09:44:51.679450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount684596872.mount: Deactivated successfully. Feb 9 09:44:51.683857 env[1142]: time="2024-02-09T09:44:51.683814549Z" level=info msg="CreateContainer within sandbox \"cf749d9d5a40b6c9ca01a6455860cb0d3bfa18d596f123a7f7f562528cb56a39\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4a68372b244a627c43fc4564e7274601081a2fc85f82f2e84d8745704343321a\"" Feb 9 09:44:51.684423 env[1142]: time="2024-02-09T09:44:51.684390992Z" level=info msg="StartContainer for \"4a68372b244a627c43fc4564e7274601081a2fc85f82f2e84d8745704343321a\"" Feb 9 09:44:51.698572 systemd[1]: Started cri-containerd-4a68372b244a627c43fc4564e7274601081a2fc85f82f2e84d8745704343321a.scope. 
Feb 9 09:44:51.740799 env[1142]: time="2024-02-09T09:44:51.739268639Z" level=info msg="StartContainer for \"4a68372b244a627c43fc4564e7274601081a2fc85f82f2e84d8745704343321a\" returns successfully" Feb 9 09:44:51.980308 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 9 09:44:51.982034 kubelet[2010]: W0209 09:44:51.982000 2010 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod196aed5f_ce23_46ce_ab7d_bb04c81b493f.slice/cri-containerd-add5b87465a67a709f42e066dfeb4cd2d62e67d0f9603a2da33bc3fd185c9861.scope WatchSource:0}: task add5b87465a67a709f42e066dfeb4cd2d62e67d0f9603a2da33bc3fd185c9861 not found: not found Feb 9 09:44:52.631182 kubelet[2010]: E0209 09:44:52.631146 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:52.644684 kubelet[2010]: I0209 09:44:52.644637 2010 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-7mzkw" podStartSLOduration=5.644597968 pod.CreationTimestamp="2024-02-09 09:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:44:52.644374006 +0000 UTC m=+101.361345997" watchObservedRunningTime="2024-02-09 09:44:52.644597968 +0000 UTC m=+101.361569919" Feb 9 09:44:53.233772 kubelet[2010]: I0209 09:44:53.232943 2010 setters.go:548] "Node became not ready" node="localhost" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 09:44:53.232832355 +0000 UTC m=+101.949804306 LastTransitionTime:2024-02-09 09:44:53.232832355 +0000 UTC m=+101.949804306 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 09:44:53.406840 kubelet[2010]: E0209 09:44:53.406801 2010 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-95kp5" podUID=5bea2de5-c3e7-41b4-80fb-523511ad7bd9 Feb 9 09:44:53.632934 kubelet[2010]: E0209 09:44:53.632898 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:53.825100 systemd[1]: run-containerd-runc-k8s.io-4a68372b244a627c43fc4564e7274601081a2fc85f82f2e84d8745704343321a-runc.4qt8qN.mount: Deactivated successfully. 
Feb 9 09:44:54.634538 kubelet[2010]: E0209 09:44:54.634506 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:54.666094 systemd-networkd[1046]: lxc_health: Link UP Feb 9 09:44:54.672815 systemd-networkd[1046]: lxc_health: Gained carrier Feb 9 09:44:54.673522 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 09:44:55.087070 kubelet[2010]: W0209 09:44:55.086917 2010 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod196aed5f_ce23_46ce_ab7d_bb04c81b493f.slice/cri-containerd-3f442c13dc555d5b47101e1b60ef4c8080889dfae4aa263e983f0bfaec84f0e6.scope WatchSource:0}: task 3f442c13dc555d5b47101e1b60ef4c8080889dfae4aa263e983f0bfaec84f0e6 not found: not found Feb 9 09:44:55.406952 kubelet[2010]: E0209 09:44:55.406553 2010 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-95kp5" podUID=5bea2de5-c3e7-41b4-80fb-523511ad7bd9 Feb 9 09:44:55.947037 kubelet[2010]: E0209 09:44:55.947000 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:56.045808 systemd-networkd[1046]: lxc_health: Gained IPv6LL Feb 9 09:44:56.641948 kubelet[2010]: E0209 09:44:56.641917 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:57.407574 kubelet[2010]: E0209 09:44:57.407542 2010 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:58.095009 systemd[1]: run-containerd-runc-k8s.io-4a68372b244a627c43fc4564e7274601081a2fc85f82f2e84d8745704343321a-runc.gYQoiK.mount: Deactivated successfully. Feb 9 09:44:58.194127 kubelet[2010]: W0209 09:44:58.194084 2010 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod196aed5f_ce23_46ce_ab7d_bb04c81b493f.slice/cri-containerd-e6759f0a21b1ff4175a199db6c5ec79638e945f03603d240ee1e0ac01d503205.scope WatchSource:0}: task e6759f0a21b1ff4175a199db6c5ec79638e945f03603d240ee1e0ac01d503205 not found: not found Feb 9 09:45:00.218111 systemd[1]: run-containerd-runc-k8s.io-4a68372b244a627c43fc4564e7274601081a2fc85f82f2e84d8745704343321a-runc.BzDbGj.mount: Deactivated successfully. Feb 9 09:45:00.297697 sshd[3924]: pam_unix(sshd:session): session closed for user core Feb 9 09:45:00.300034 systemd[1]: sshd@24-10.0.0.12:22-10.0.0.1:47862.service: Deactivated successfully. Feb 9 09:45:00.300751 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 09:45:00.301258 systemd-logind[1128]: Session 25 logged out. Waiting for processes to exit. Feb 9 09:45:00.302132 systemd-logind[1128]: Removed session 25. 
Feb 9 09:45:01.300018 kubelet[2010]: W0209 09:45:01.299977 2010 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod196aed5f_ce23_46ce_ab7d_bb04c81b493f.slice/cri-containerd-6ade4205e8ddc94cb59c571c0a4f8b8b46c362597b835cd8ad32b22daa22fd14.scope WatchSource:0}: task 6ade4205e8ddc94cb59c571c0a4f8b8b46c362597b835cd8ad32b22daa22fd14 not found: not found