Feb 9 09:45:24.722139 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 9 09:45:24.722159 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024 Feb 9 09:45:24.722166 kernel: efi: EFI v2.70 by EDK II Feb 9 09:45:24.722172 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Feb 9 09:45:24.722177 kernel: random: crng init done Feb 9 09:45:24.722182 kernel: ACPI: Early table checksum verification disabled Feb 9 09:45:24.722188 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Feb 9 09:45:24.722194 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Feb 9 09:45:24.722200 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 09:45:24.722205 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 09:45:24.722210 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 09:45:24.722216 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 09:45:24.722221 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 09:45:24.722226 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 09:45:24.722234 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 09:45:24.722240 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 09:45:24.722246 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 09:45:24.722251 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Feb 9 09:45:24.722257 kernel: NUMA: Failed to initialise from firmware Feb 9 09:45:24.722263 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Feb 9 09:45:24.722269 kernel: NUMA: NODE_DATA [mem 0xdcb0c900-0xdcb11fff] Feb 9 09:45:24.722274 kernel: Zone ranges: Feb 9 09:45:24.722280 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Feb 9 09:45:24.722286 kernel: DMA32 empty Feb 9 09:45:24.722292 kernel: Normal empty Feb 9 09:45:24.722297 kernel: Movable zone start for each node Feb 9 09:45:24.722303 kernel: Early memory node ranges Feb 9 09:45:24.722308 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Feb 9 09:45:24.722314 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Feb 9 09:45:24.722320 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Feb 9 09:45:24.722325 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Feb 9 09:45:24.722331 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Feb 9 09:45:24.722337 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Feb 9 09:45:24.722342 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Feb 9 09:45:24.722348 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Feb 9 09:45:24.722355 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Feb 9 09:45:24.722360 kernel: psci: probing for conduit method from ACPI. Feb 9 09:45:24.722366 kernel: psci: PSCIv1.1 detected in firmware. 
Feb 9 09:45:24.722371 kernel: psci: Using standard PSCI v0.2 function IDs Feb 9 09:45:24.722377 kernel: psci: Trusted OS migration not required Feb 9 09:45:24.722385 kernel: psci: SMC Calling Convention v1.1 Feb 9 09:45:24.722391 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 9 09:45:24.722399 kernel: ACPI: SRAT not present Feb 9 09:45:24.722405 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Feb 9 09:45:24.722411 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Feb 9 09:45:24.722417 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Feb 9 09:45:24.722423 kernel: Detected PIPT I-cache on CPU0 Feb 9 09:45:24.722429 kernel: CPU features: detected: GIC system register CPU interface Feb 9 09:45:24.722435 kernel: CPU features: detected: Hardware dirty bit management Feb 9 09:45:24.722441 kernel: CPU features: detected: Spectre-v4 Feb 9 09:45:24.722447 kernel: CPU features: detected: Spectre-BHB Feb 9 09:45:24.722454 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 9 09:45:24.722460 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 9 09:45:24.722466 kernel: CPU features: detected: ARM erratum 1418040 Feb 9 09:45:24.722473 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Feb 9 09:45:24.722478 kernel: Policy zone: DMA Feb 9 09:45:24.722485 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d Feb 9 09:45:24.722492 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 09:45:24.722498 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 9 09:45:24.722504 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 09:45:24.722510 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 09:45:24.722517 kernel: Memory: 2459156K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113132K reserved, 0K cma-reserved) Feb 9 09:45:24.722524 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 9 09:45:24.722531 kernel: trace event string verifier disabled Feb 9 09:45:24.722537 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 9 09:45:24.722543 kernel: rcu: RCU event tracing is enabled. Feb 9 09:45:24.722549 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 9 09:45:24.722555 kernel: Trampoline variant of Tasks RCU enabled. Feb 9 09:45:24.722562 kernel: Tracing variant of Tasks RCU enabled. Feb 9 09:45:24.722568 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 9 09:45:24.722574 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 9 09:45:24.722580 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 9 09:45:24.722586 kernel: GICv3: 256 SPIs implemented Feb 9 09:45:24.722593 kernel: GICv3: 0 Extended SPIs implemented Feb 9 09:45:24.722599 kernel: GICv3: Distributor has no Range Selector support Feb 9 09:45:24.722605 kernel: Root IRQ handler: gic_handle_irq Feb 9 09:45:24.722611 kernel: GICv3: 16 PPIs implemented Feb 9 09:45:24.722617 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 9 09:45:24.722623 kernel: ACPI: SRAT not present Feb 9 09:45:24.722629 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 9 09:45:24.722635 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Feb 9 09:45:24.722641 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Feb 9 09:45:24.722658 kernel: GICv3: using LPI property table @0x00000000400d0000 Feb 9 09:45:24.722664 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Feb 9 09:45:24.722671 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 09:45:24.722678 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 9 09:45:24.722684 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 9 09:45:24.722691 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 9 09:45:24.722697 kernel: arm-pv: using stolen time PV Feb 9 09:45:24.722703 kernel: Console: colour dummy device 80x25 Feb 9 09:45:24.722709 kernel: ACPI: Core revision 20210730 Feb 9 09:45:24.722715 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 9 09:45:24.722722 kernel: pid_max: default: 32768 minimum: 301 Feb 9 09:45:24.722728 kernel: LSM: Security Framework initializing Feb 9 09:45:24.722734 kernel: SELinux: Initializing. Feb 9 09:45:24.722741 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 09:45:24.722748 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 09:45:24.722754 kernel: rcu: Hierarchical SRCU implementation. Feb 9 09:45:24.722760 kernel: Platform MSI: ITS@0x8080000 domain created Feb 9 09:45:24.722766 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 9 09:45:24.722772 kernel: Remapping and enabling EFI services. Feb 9 09:45:24.722778 kernel: smp: Bringing up secondary CPUs ... 
Feb 9 09:45:24.722784 kernel: Detected PIPT I-cache on CPU1 Feb 9 09:45:24.722791 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 9 09:45:24.722798 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Feb 9 09:45:24.722805 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 09:45:24.722811 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 9 09:45:24.722817 kernel: Detected PIPT I-cache on CPU2 Feb 9 09:45:24.722823 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Feb 9 09:45:24.722830 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Feb 9 09:45:24.722836 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 09:45:24.722842 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Feb 9 09:45:24.722848 kernel: Detected PIPT I-cache on CPU3 Feb 9 09:45:24.722854 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Feb 9 09:45:24.722861 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Feb 9 09:45:24.722867 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 09:45:24.722873 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Feb 9 09:45:24.722880 kernel: smp: Brought up 1 node, 4 CPUs Feb 9 09:45:24.722890 kernel: SMP: Total of 4 processors activated. Feb 9 09:45:24.722903 kernel: CPU features: detected: 32-bit EL0 Support Feb 9 09:45:24.722910 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 9 09:45:24.722916 kernel: CPU features: detected: Common not Private translations Feb 9 09:45:24.722923 kernel: CPU features: detected: CRC32 instructions Feb 9 09:45:24.722929 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 9 09:45:24.722936 kernel: CPU features: detected: LSE atomic instructions Feb 9 09:45:24.722942 kernel: CPU features: detected: Privileged Access Never Feb 9 09:45:24.722950 kernel: CPU features: detected: RAS Extension Support Feb 9 09:45:24.722957 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 9 09:45:24.722963 kernel: CPU: All CPU(s) started at EL1 Feb 9 09:45:24.722970 kernel: alternatives: patching kernel code Feb 9 09:45:24.722978 kernel: devtmpfs: initialized Feb 9 09:45:24.722984 kernel: KASLR enabled Feb 9 09:45:24.722991 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 09:45:24.722997 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 9 09:45:24.723004 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 09:45:24.723010 kernel: SMBIOS 3.0.0 present. 
Feb 9 09:45:24.723017 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Feb 9 09:45:24.723023 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 09:45:24.723030 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 9 09:45:24.723036 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 9 09:45:24.723044 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 9 09:45:24.723050 kernel: audit: initializing netlink subsys (disabled) Feb 9 09:45:24.723057 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1 Feb 9 09:45:24.723064 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 09:45:24.723070 kernel: cpuidle: using governor menu Feb 9 09:45:24.723077 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 9 09:45:24.723083 kernel: ASID allocator initialised with 32768 entries Feb 9 09:45:24.723089 kernel: ACPI: bus type PCI registered Feb 9 09:45:24.723096 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 09:45:24.723103 kernel: Serial: AMBA PL011 UART driver Feb 9 09:45:24.723110 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 09:45:24.723116 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Feb 9 09:45:24.723123 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 09:45:24.723129 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Feb 9 09:45:24.723136 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 09:45:24.723142 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 9 09:45:24.723149 kernel: ACPI: Added _OSI(Module Device) Feb 9 09:45:24.723155 kernel: ACPI: Added _OSI(Processor Device) Feb 9 09:45:24.723163 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 09:45:24.723169 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 09:45:24.723176 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 09:45:24.723182 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 09:45:24.723189 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 09:45:24.723195 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 9 09:45:24.723202 kernel: ACPI: Interpreter enabled Feb 9 09:45:24.723208 kernel: ACPI: Using GIC for interrupt routing Feb 9 09:45:24.723214 kernel: ACPI: MCFG table detected, 1 entries Feb 9 09:45:24.723222 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 9 09:45:24.723229 kernel: printk: console [ttyAMA0] enabled Feb 9 09:45:24.723235 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 9 09:45:24.723351 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 9 09:45:24.723414 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 9 09:45:24.723471 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 9 09:45:24.723527 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 9 09:45:24.723586 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 9 09:45:24.723595 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 9 09:45:24.723602 kernel: PCI host bridge to bus 0000:00 Feb 9 09:45:24.723710 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 9 09:45:24.723770 kernel: pci_bus 0000:00: root bus resource [io 
0x0000-0xffff window] Feb 9 09:45:24.723824 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 9 09:45:24.723880 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 9 09:45:24.724044 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 9 09:45:24.724118 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Feb 9 09:45:24.724177 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Feb 9 09:45:24.724235 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Feb 9 09:45:24.724293 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 9 09:45:24.724351 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 9 09:45:24.724409 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Feb 9 09:45:24.724470 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Feb 9 09:45:24.724523 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 9 09:45:24.724574 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 9 09:45:24.724625 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 9 09:45:24.724634 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 9 09:45:24.724641 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 9 09:45:24.724657 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 9 09:45:24.724666 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 9 09:45:24.724673 kernel: iommu: Default domain type: Translated Feb 9 09:45:24.724679 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 9 09:45:24.724686 kernel: vgaarb: loaded Feb 9 09:45:24.724692 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 09:45:24.724699 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 09:45:24.724705 kernel: PTP clock support registered Feb 9 09:45:24.724712 kernel: Registered efivars operations Feb 9 09:45:24.724718 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 9 09:45:24.724725 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 09:45:24.724733 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 09:45:24.724739 kernel: pnp: PnP ACPI init Feb 9 09:45:24.724805 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 9 09:45:24.724815 kernel: pnp: PnP ACPI: found 1 devices Feb 9 09:45:24.724821 kernel: NET: Registered PF_INET protocol family Feb 9 09:45:24.724828 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 09:45:24.724835 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 9 09:45:24.724841 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 09:45:24.724850 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 9 09:45:24.724856 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Feb 9 09:45:24.724863 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 9 09:45:24.724869 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 09:45:24.724876 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 09:45:24.724882 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 09:45:24.724889 kernel: PCI: CLS 0 bytes, default 64 Feb 9 09:45:24.724896 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 9 09:45:24.724910 kernel: kvm [1]: HYP mode not available Feb 9 09:45:24.724916 kernel: Initialise system trusted keyrings Feb 9 09:45:24.724923 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 9 09:45:24.724929 kernel: Key type asymmetric registered Feb 9 09:45:24.724935 kernel: Asymmetric key parser 'x509' registered Feb 9 09:45:24.724942 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 09:45:24.724948 kernel: io scheduler mq-deadline registered Feb 9 09:45:24.724955 kernel: io scheduler kyber registered Feb 9 09:45:24.724961 kernel: io scheduler bfq registered Feb 9 09:45:24.724968 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 9 09:45:24.724975 kernel: ACPI: button: Power Button [PWRB] Feb 9 09:45:24.724982 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 9 09:45:24.725044 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Feb 9 09:45:24.725053 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 09:45:24.725060 kernel: thunder_xcv, ver 1.0 Feb 9 09:45:24.725066 kernel: thunder_bgx, ver 1.0 Feb 9 09:45:24.725073 kernel: nicpf, ver 1.0 Feb 9 09:45:24.725079 kernel: nicvf, ver 1.0 Feb 9 09:45:24.725145 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 9 09:45:24.725202 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T09:45:24 UTC (1707471924) Feb 9 09:45:24.725211 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 09:45:24.725217 kernel: NET: Registered PF_INET6 protocol family Feb 9 09:45:24.725224 kernel: Segment Routing with IPv6 Feb 9 09:45:24.725230 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 09:45:24.725237 kernel: NET: Registered PF_PACKET protocol family Feb 9 09:45:24.725243 kernel: Key type dns_resolver registered Feb 9 09:45:24.725250 
kernel: registered taskstats version 1 Feb 9 09:45:24.725257 kernel: Loading compiled-in X.509 certificates Feb 9 09:45:24.725264 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d' Feb 9 09:45:24.725284 kernel: Key type .fscrypt registered Feb 9 09:45:24.725290 kernel: Key type fscrypt-provisioning registered Feb 9 09:45:24.725297 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 9 09:45:24.725303 kernel: ima: Allocated hash algorithm: sha1 Feb 9 09:45:24.725310 kernel: ima: No architecture policies found Feb 9 09:45:24.725316 kernel: Freeing unused kernel memory: 34688K Feb 9 09:45:24.725323 kernel: Run /init as init process Feb 9 09:45:24.725331 kernel: with arguments: Feb 9 09:45:24.725338 kernel: /init Feb 9 09:45:24.725344 kernel: with environment: Feb 9 09:45:24.725350 kernel: HOME=/ Feb 9 09:45:24.725357 kernel: TERM=linux Feb 9 09:45:24.725363 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 09:45:24.725371 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:45:24.725380 systemd[1]: Detected virtualization kvm. Feb 9 09:45:24.725388 systemd[1]: Detected architecture arm64. Feb 9 09:45:24.725395 systemd[1]: Running in initrd. Feb 9 09:45:24.725402 systemd[1]: No hostname configured, using default hostname. Feb 9 09:45:24.725408 systemd[1]: Hostname set to <localhost>. Feb 9 09:45:24.725416 systemd[1]: Initializing machine ID from VM UUID. Feb 9 09:45:24.725423 systemd[1]: Queued start job for default target initrd.target. Feb 9 09:45:24.725430 systemd[1]: Started systemd-ask-password-console.path. Feb 9 09:45:24.725437 systemd[1]: Reached target cryptsetup.target. Feb 9 09:45:24.725445 systemd[1]: Reached target paths.target. Feb 9 09:45:24.725452 systemd[1]: Reached target slices.target. Feb 9 09:45:24.725458 systemd[1]: Reached target swap.target. Feb 9 09:45:24.725465 systemd[1]: Reached target timers.target. Feb 9 09:45:24.725473 systemd[1]: Listening on iscsid.socket. Feb 9 09:45:24.725479 systemd[1]: Listening on iscsiuio.socket. Feb 9 09:45:24.725487 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 09:45:24.725495 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 09:45:24.725502 systemd[1]: Listening on systemd-journald.socket. Feb 9 09:45:24.725509 systemd[1]: Listening on systemd-networkd.socket. Feb 9 09:45:24.725516 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 09:45:24.725523 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 09:45:24.725529 systemd[1]: Reached target sockets.target. Feb 9 09:45:24.725536 systemd[1]: Starting kmod-static-nodes.service... Feb 9 09:45:24.725543 systemd[1]: Finished network-cleanup.service. Feb 9 09:45:24.725550 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 09:45:24.725558 systemd[1]: Starting systemd-journald.service... Feb 9 09:45:24.725565 systemd[1]: Starting systemd-modules-load.service... Feb 9 09:45:24.725572 systemd[1]: Starting systemd-resolved.service... Feb 9 09:45:24.725579 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 09:45:24.725586 systemd[1]: Finished kmod-static-nodes.service. Feb 9 09:45:24.725593 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 09:45:24.725600 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 09:45:24.725607 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 09:45:24.725614 kernel: audit: type=1130 audit(1707471924.717:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:24.725622 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 09:45:24.725629 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 09:45:24.725636 kernel: audit: type=1130 audit(1707471924.725:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:24.725646 systemd-journald[292]: Journal started Feb 9 09:45:24.725738 systemd-journald[292]: Runtime Journal (/run/log/journal/0bcf67f734a041369c662fe4ac3f7841) is 6.0M, max 48.7M, 42.6M free. Feb 9 09:45:24.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:24.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:24.713875 systemd-modules-load[293]: Inserted module 'overlay' Feb 9 09:45:24.731126 systemd[1]: Started systemd-journald.service. Feb 9 09:45:24.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:24.735200 systemd-resolved[294]: Positive Trust Anchors: Feb 9 09:45:24.737520 kernel: audit: type=1130 audit(1707471924.731:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:24.737542 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 09:45:24.735212 systemd-resolved[294]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:45:24.735239 systemd-resolved[294]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:45:24.743122 kernel: Bridge firewalling registered Feb 9 09:45:24.739168 systemd-modules-load[293]: Inserted module 'br_netfilter' Feb 9 09:45:24.739250 systemd-resolved[294]: Defaulting to hostname 'linux'. Feb 9 09:45:24.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:24.745082 systemd[1]: Started systemd-resolved.service. 
Feb 9 09:45:24.752730 kernel: audit: type=1130 audit(1707471924.744:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:24.752748 kernel: audit: type=1130 audit(1707471924.748:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:24.752758 kernel: SCSI subsystem initialized Feb 9 09:45:24.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:24.745700 systemd[1]: Reached target nss-lookup.target. Feb 9 09:45:24.748233 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 09:45:24.749643 systemd[1]: Starting dracut-cmdline.service... Feb 9 09:45:24.758056 dracut-cmdline[312]: dracut-dracut-053 Feb 9 09:45:24.760155 dracut-cmdline[312]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d Feb 9 09:45:24.764017 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 09:45:24.764034 kernel: device-mapper: uevent: version 1.0.3 Feb 9 09:45:24.764043 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 09:45:24.766905 systemd-modules-load[293]: Inserted module 'dm_multipath' Feb 9 09:45:24.772077 kernel: audit: type=1130 audit(1707471924.767:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:24.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:24.767828 systemd[1]: Finished systemd-modules-load.service. Feb 9 09:45:24.769180 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:45:24.775872 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:45:24.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:24.779690 kernel: audit: type=1130 audit(1707471924.776:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:24.820670 kernel: Loading iSCSI transport class v2.0-870. Feb 9 09:45:24.829683 kernel: iscsi: registered transport (tcp) Feb 9 09:45:24.842688 kernel: iscsi: registered transport (qla4xxx) Feb 9 09:45:24.842708 kernel: QLogic iSCSI HBA Driver Feb 9 09:45:24.875500 systemd[1]: Finished dracut-cmdline.service. Feb 9 09:45:24.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:45:24.876872 systemd[1]: Starting dracut-pre-udev.service... Feb 9 09:45:24.879315 kernel: audit: type=1130 audit(1707471924.875:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:24.925687 kernel: raid6: neonx8 gen() 13750 MB/s Feb 9 09:45:24.942673 kernel: raid6: neonx8 xor() 10836 MB/s Feb 9 09:45:24.959663 kernel: raid6: neonx4 gen() 13529 MB/s Feb 9 09:45:24.976662 kernel: raid6: neonx4 xor() 11221 MB/s Feb 9 09:45:24.993674 kernel: raid6: neonx2 gen() 12941 MB/s Feb 9 09:45:25.010698 kernel: raid6: neonx2 xor() 10248 MB/s Feb 9 09:45:25.027691 kernel: raid6: neonx1 gen() 10505 MB/s Feb 9 09:45:25.044672 kernel: raid6: neonx1 xor() 8790 MB/s Feb 9 09:45:25.061674 kernel: raid6: int64x8 gen() 6295 MB/s Feb 9 09:45:25.078672 kernel: raid6: int64x8 xor() 3548 MB/s Feb 9 09:45:25.095669 kernel: raid6: int64x4 gen() 7215 MB/s Feb 9 09:45:25.112677 kernel: raid6: int64x4 xor() 3853 MB/s Feb 9 09:45:25.129673 kernel: raid6: int64x2 gen() 6155 MB/s Feb 9 09:45:25.146665 kernel: raid6: int64x2 xor() 3322 MB/s Feb 9 09:45:25.163684 kernel: raid6: int64x1 gen() 5043 MB/s Feb 9 09:45:25.180847 kernel: raid6: int64x1 xor() 2646 MB/s Feb 9 09:45:25.180872 kernel: raid6: using algorithm neonx8 gen() 13750 MB/s Feb 9 09:45:25.180904 kernel: raid6: .... xor() 10836 MB/s, rmw enabled Feb 9 09:45:25.180921 kernel: raid6: using neon recovery algorithm Feb 9 09:45:25.191665 kernel: xor: measuring software checksum speed Feb 9 09:45:25.191683 kernel: 8regs : 17257 MB/sec Feb 9 09:45:25.192667 kernel: 32regs : 20749 MB/sec Feb 9 09:45:25.193663 kernel: arm64_neon : 27939 MB/sec Feb 9 09:45:25.193673 kernel: xor: using function: arm64_neon (27939 MB/sec) Feb 9 09:45:25.250679 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Feb 9 09:45:25.260672 systemd[1]: Finished dracut-pre-udev.service. Feb 9 09:45:25.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:25.263000 audit: BPF prog-id=7 op=LOAD Feb 9 09:45:25.263000 audit: BPF prog-id=8 op=LOAD Feb 9 09:45:25.263675 kernel: audit: type=1130 audit(1707471925.260:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:25.263974 systemd[1]: Starting systemd-udevd.service... Feb 9 09:45:25.277384 systemd-udevd[494]: Using default interface naming scheme 'v252'. Feb 9 09:45:25.280772 systemd[1]: Started systemd-udevd.service. Feb 9 09:45:25.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:25.282623 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 09:45:25.293447 dracut-pre-trigger[501]: rd.md=0: removing MD RAID activation Feb 9 09:45:25.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:25.324727 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 09:45:25.326214 systemd[1]: Starting systemd-udev-trigger.service... 
Feb 9 09:45:25.359063 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 09:45:25.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:25.389959 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 9 09:45:25.393089 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 09:45:25.393126 kernel: GPT:9289727 != 19775487 Feb 9 09:45:25.393135 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 09:45:25.393144 kernel: GPT:9289727 != 19775487 Feb 9 09:45:25.393915 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 09:45:25.393943 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 09:45:25.406673 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (544) Feb 9 09:45:25.411838 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 09:45:25.415223 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 09:45:25.417950 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 09:45:25.418864 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 09:45:25.422562 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 09:45:25.424084 systemd[1]: Starting disk-uuid.service... Feb 9 09:45:25.429547 disk-uuid[567]: Primary Header is updated. Feb 9 09:45:25.429547 disk-uuid[567]: Secondary Entries is updated. Feb 9 09:45:25.429547 disk-uuid[567]: Secondary Header is updated. Feb 9 09:45:25.432673 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 09:45:26.442473 disk-uuid[568]: The operation has completed successfully. Feb 9 09:45:26.443496 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 09:45:26.465796 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 09:45:26.465889 systemd[1]: Finished disk-uuid.service. Feb 9 09:45:26.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:26.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:26.469783 systemd[1]: Starting verity-setup.service... Feb 9 09:45:26.485678 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 9 09:45:26.503848 systemd[1]: Found device dev-mapper-usr.device. Feb 9 09:45:26.505720 systemd[1]: Mounting sysusr-usr.mount... Feb 9 09:45:26.507458 systemd[1]: Finished verity-setup.service. Feb 9 09:45:26.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:26.553664 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 09:45:26.553718 systemd[1]: Mounted sysusr-usr.mount. Feb 9 09:45:26.554477 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 09:45:26.555123 systemd[1]: Starting ignition-setup.service... Feb 9 09:45:26.556742 systemd[1]: Starting parse-ip-for-networkd.service... 
Feb 9 09:45:26.562838 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 09:45:26.562873 kernel: BTRFS info (device vda6): using free space tree Feb 9 09:45:26.562883 kernel: BTRFS info (device vda6): has skinny extents Feb 9 09:45:26.572456 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 09:45:26.577717 systemd[1]: Finished ignition-setup.service. Feb 9 09:45:26.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:26.579113 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 09:45:26.640590 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 09:45:26.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:26.641000 audit: BPF prog-id=9 op=LOAD Feb 9 09:45:26.642618 systemd[1]: Starting systemd-networkd.service... Feb 9 09:45:26.658280 ignition[661]: Ignition 2.14.0 Feb 9 09:45:26.658290 ignition[661]: Stage: fetch-offline Feb 9 09:45:26.658323 ignition[661]: no configs at "/usr/lib/ignition/base.d" Feb 9 09:45:26.658331 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:45:26.658440 ignition[661]: parsed url from cmdline: "" Feb 9 09:45:26.658443 ignition[661]: no config URL provided Feb 9 09:45:26.658447 ignition[661]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 09:45:26.658454 ignition[661]: no config at "/usr/lib/ignition/user.ign" Feb 9 09:45:26.658470 ignition[661]: op(1): [started] loading QEMU firmware config module Feb 9 09:45:26.658474 ignition[661]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 9 09:45:26.665141 ignition[661]: op(1): [finished] loading QEMU firmware config module Feb 9 09:45:26.672017 systemd-networkd[743]: lo: Link UP Feb 9 09:45:26.672679 systemd-networkd[743]: lo: Gained carrier Feb 9 09:45:26.673822 systemd-networkd[743]: Enumeration completed Feb 9 09:45:26.674523 systemd[1]: Started systemd-networkd.service. Feb 9 09:45:26.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:26.675389 systemd[1]: Reached target network.target. Feb 9 09:45:26.675963 systemd-networkd[743]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:45:26.676887 systemd-networkd[743]: eth0: Link UP Feb 9 09:45:26.676899 systemd-networkd[743]: eth0: Gained carrier Feb 9 09:45:26.678520 systemd[1]: Starting iscsiuio.service... Feb 9 09:45:26.687569 systemd[1]: Started iscsiuio.service. Feb 9 09:45:26.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:26.689780 systemd[1]: Starting iscsid.service... Feb 9 09:45:26.693062 iscsid[750]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 09:45:26.693062 iscsid[750]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 09:45:26.693062 iscsid[750]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 09:45:26.693062 iscsid[750]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 09:45:26.693062 iscsid[750]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 09:45:26.693062 iscsid[750]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 09:45:26.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:26.696015 systemd[1]: Started iscsid.service. Feb 9 09:45:26.698754 systemd-networkd[743]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 09:45:26.699973 systemd[1]: Starting dracut-initqueue.service... Feb 9 09:45:26.710309 systemd[1]: Finished dracut-initqueue.service. Feb 9 09:45:26.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:26.711183 systemd[1]: Reached target remote-fs-pre.target. Feb 9 09:45:26.712318 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 09:45:26.713526 systemd[1]: Reached target remote-fs.target. Feb 9 09:45:26.715421 systemd[1]: Starting dracut-pre-mount.service... Feb 9 09:45:26.722698 systemd[1]: Finished dracut-pre-mount.service. Feb 9 09:45:26.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:26.728698 ignition[661]: parsing config with SHA512: ae24c8530806f547e22839001565cf41f019b75dd2b5bb53805a8762d40653909dbf445822072d422edd0949b042644db7caf9bffda54d70552a9268e96ec9d5 Feb 9 09:45:26.770316 unknown[661]: fetched base config from "system" Feb 9 09:45:26.770327 unknown[661]: fetched user config from "qemu" Feb 9 09:45:26.770839 ignition[661]: fetch-offline: fetch-offline passed Feb 9 09:45:26.770906 ignition[661]: Ignition finished successfully Feb 9 09:45:26.772524 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 09:45:26.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:26.773332 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 09:45:26.773946 systemd[1]: Starting ignition-kargs.service... Feb 9 09:45:26.781897 ignition[765]: Ignition 2.14.0 Feb 9 09:45:26.781907 ignition[765]: Stage: kargs Feb 9 09:45:26.781986 ignition[765]: no configs at "/usr/lib/ignition/base.d" Feb 9 09:45:26.781995 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:45:26.782920 ignition[765]: kargs: kargs passed Feb 9 09:45:26.782956 ignition[765]: Ignition finished successfully Feb 9 09:45:26.786208 systemd[1]: Finished ignition-kargs.service. Feb 9 09:45:26.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Feb 9 09:45:26.787454 systemd[1]: Starting ignition-disks.service... Feb 9 09:45:26.793372 ignition[771]: Ignition 2.14.0 Feb 9 09:45:26.793381 ignition[771]: Stage: disks Feb 9 09:45:26.793458 ignition[771]: no configs at "/usr/lib/ignition/base.d" Feb 9 09:45:26.793467 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:45:26.794496 ignition[771]: disks: disks passed Feb 9 09:45:26.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:26.795692 systemd[1]: Finished ignition-disks.service. Feb 9 09:45:26.794531 ignition[771]: Ignition finished successfully Feb 9 09:45:26.796333 systemd[1]: Reached target initrd-root-device.target. Feb 9 09:45:26.797279 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:45:26.798276 systemd[1]: Reached target local-fs.target. Feb 9 09:45:26.799354 systemd[1]: Reached target sysinit.target. Feb 9 09:45:26.800345 systemd[1]: Reached target basic.target. Feb 9 09:45:26.801956 systemd[1]: Starting systemd-fsck-root.service... Feb 9 09:45:26.812331 systemd-fsck[779]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 9 09:45:26.815471 systemd[1]: Finished systemd-fsck-root.service. Feb 9 09:45:26.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:26.817102 systemd[1]: Mounting sysroot.mount... Feb 9 09:45:26.823403 systemd[1]: Mounted sysroot.mount. Feb 9 09:45:26.824367 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 09:45:26.824030 systemd[1]: Reached target initrd-root-fs.target. Feb 9 09:45:26.825934 systemd[1]: Mounting sysroot-usr.mount... Feb 9 09:45:26.826708 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 09:45:26.826743 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 09:45:26.826764 systemd[1]: Reached target ignition-diskful.target. Feb 9 09:45:26.828331 systemd[1]: Mounted sysroot-usr.mount. Feb 9 09:45:26.829635 systemd[1]: Starting initrd-setup-root.service... Feb 9 09:45:26.833680 initrd-setup-root[789]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 09:45:26.837632 initrd-setup-root[797]: cut: /sysroot/etc/group: No such file or directory Feb 9 09:45:26.841234 initrd-setup-root[805]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 09:45:26.844710 initrd-setup-root[813]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 09:45:26.869277 systemd[1]: Finished initrd-setup-root.service. Feb 9 09:45:26.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:26.870642 systemd[1]: Starting ignition-mount.service... Feb 9 09:45:26.871782 systemd[1]: Starting sysroot-boot.service... Feb 9 09:45:26.876261 bash[830]: umount: /sysroot/usr/share/oem: not mounted. 
Feb 9 09:45:26.884003 ignition[832]: INFO : Ignition 2.14.0 Feb 9 09:45:26.884757 ignition[832]: INFO : Stage: mount Feb 9 09:45:26.885208 ignition[832]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 09:45:26.885208 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:45:26.886578 ignition[832]: INFO : mount: mount passed Feb 9 09:45:26.886578 ignition[832]: INFO : Ignition finished successfully Feb 9 09:45:26.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:26.886634 systemd[1]: Finished ignition-mount.service. Feb 9 09:45:26.889767 systemd[1]: Finished sysroot-boot.service. Feb 9 09:45:26.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:27.513922 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 09:45:27.519662 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (840) Feb 9 09:45:27.520973 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 09:45:27.520987 kernel: BTRFS info (device vda6): using free space tree Feb 9 09:45:27.520997 kernel: BTRFS info (device vda6): has skinny extents Feb 9 09:45:27.524102 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 09:45:27.525466 systemd[1]: Starting ignition-files.service... Feb 9 09:45:27.538757 ignition[860]: INFO : Ignition 2.14.0 Feb 9 09:45:27.538757 ignition[860]: INFO : Stage: files Feb 9 09:45:27.539933 ignition[860]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 09:45:27.539933 ignition[860]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:45:27.541479 ignition[860]: DEBUG : files: compiled without relabeling support, skipping Feb 9 09:45:27.542307 ignition[860]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 09:45:27.542307 ignition[860]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 09:45:27.545120 ignition[860]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 09:45:27.546088 ignition[860]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 09:45:27.547212 unknown[860]: wrote ssh authorized keys file for user: core Feb 9 09:45:27.547985 ignition[860]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 09:45:27.547985 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Feb 9 09:45:27.547985 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1 Feb 9 09:45:27.908504 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 09:45:28.147036 ignition[860]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a Feb 9 09:45:28.149126 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file 
"/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Feb 9 09:45:28.149126 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Feb 9 09:45:28.149126 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1 Feb 9 09:45:28.205994 systemd-networkd[743]: eth0: Gained IPv6LL Feb 9 09:45:28.411905 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 09:45:28.534873 ignition[860]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251 Feb 9 09:45:28.537064 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Feb 9 09:45:28.537064 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 09:45:28.537064 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 9 09:45:28.559691 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 09:45:28.596834 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 09:45:28.598280 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:45:28.598280 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubeadm: attempt #1 Feb 9 09:45:28.804171 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 09:45:32.389979 ignition[860]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 5a08b81f9cc82d3cce21130856ca63b8dafca9149d9775dd25b376eb0f18209aa0e4a47c0a6d7e6fb1316aacd5d59dec770f26c09120c866949d70bc415518b3 Feb 9 09:45:32.389979 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:45:32.393287 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 09:45:32.393287 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubelet: attempt #1 Feb 9 09:45:32.437424 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 09:45:40.018353 ignition[860]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 5a898ef543a6482895101ea58e33602e3c0a7682d322aaf08ac3dc8a5a3c8da8f09600d577024549288f8cebb1a86f9c79927796b69a3d8fe989ca8f12b147d6 Feb 9 09:45:40.020606 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 09:45:40.021807 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 09:45:40.021807 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET 
https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubectl: attempt #1 Feb 9 09:45:40.193409 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 09:45:44.324951 ignition[860]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 6a5c9c02a29126949f096415bb1761a0c0ad44168e2ab3d0409982701da58f96223bec354828ddf958e945ef1ce63c0ad41e77cbcbcce0756163e71b4fbae432 Feb 9 09:45:44.324951 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 09:45:44.328945 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 09:45:44.328945 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 09:45:44.328945 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/home/core/install.sh" Feb 9 09:45:44.328945 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 09:45:44.328945 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 09:45:44.328945 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 09:45:44.328945 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 09:45:44.328945 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 09:45:44.328945 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 09:45:44.328945 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 09:45:44.328945 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:45:44.328945 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:45:44.328945 ignition[860]: INFO : files: op(f): [started] processing unit "prepare-cni-plugins.service" Feb 9 09:45:44.328945 ignition[860]: INFO : files: op(f): op(10): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:45:44.328945 ignition[860]: INFO : files: op(f): op(10): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:45:44.328945 ignition[860]: INFO : files: op(f): [finished] processing unit "prepare-cni-plugins.service" Feb 9 09:45:44.328945 ignition[860]: INFO : files: op(11): [started] processing unit "prepare-critools.service" Feb 9 09:45:44.354819 ignition[860]: INFO : files: op(11): op(12): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:45:44.354819 ignition[860]: INFO : files: op(11): op(12): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:45:44.354819 ignition[860]: INFO : files: op(11): [finished] processing unit 
"prepare-critools.service" Feb 9 09:45:44.354819 ignition[860]: INFO : files: op(13): [started] processing unit "prepare-helm.service" Feb 9 09:45:44.354819 ignition[860]: INFO : files: op(13): op(14): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 09:45:44.354819 ignition[860]: INFO : files: op(13): op(14): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 09:45:44.354819 ignition[860]: INFO : files: op(13): [finished] processing unit "prepare-helm.service" Feb 9 09:45:44.354819 ignition[860]: INFO : files: op(15): [started] processing unit "coreos-metadata.service" Feb 9 09:45:44.354819 ignition[860]: INFO : files: op(15): op(16): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 09:45:44.354819 ignition[860]: INFO : files: op(15): op(16): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 09:45:44.354819 ignition[860]: INFO : files: op(15): [finished] processing unit "coreos-metadata.service" Feb 9 09:45:44.354819 ignition[860]: INFO : files: op(17): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:45:44.354819 ignition[860]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:45:44.354819 ignition[860]: INFO : files: op(18): [started] setting preset to enabled for "prepare-critools.service" Feb 9 09:45:44.354819 ignition[860]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 09:45:44.354819 ignition[860]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Feb 9 09:45:44.354819 ignition[860]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 09:45:44.354819 ignition[860]: INFO : files: op(1a): [started] setting preset to disabled for "coreos-metadata.service" Feb 9 09:45:44.354819 ignition[860]: INFO : files: op(1a): op(1b): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 09:45:44.390729 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 9 09:45:44.390751 kernel: audit: type=1130 audit(1707471944.364:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.390767 kernel: audit: type=1130 audit(1707471944.372:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.390777 kernel: audit: type=1131 audit(1707471944.372:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.390787 kernel: audit: type=1130 audit(1707471944.382:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:45:44.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.390913 ignition[860]: INFO : files: op(1a): op(1b): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 09:45:44.390913 ignition[860]: INFO : files: op(1a): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 09:45:44.390913 ignition[860]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:45:44.390913 ignition[860]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:45:44.390913 ignition[860]: INFO : files: files passed Feb 9 09:45:44.390913 ignition[860]: INFO : Ignition finished successfully Feb 9 09:45:44.364640 systemd[1]: Finished ignition-files.service. Feb 9 09:45:44.366241 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 09:45:44.397818 initrd-setup-root-after-ignition[885]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 9 09:45:44.402794 kernel: audit: type=1130 audit(1707471944.397:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.402813 kernel: audit: type=1131 audit(1707471944.397:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.369299 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 09:45:44.404602 initrd-setup-root-after-ignition[887]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 09:45:44.369913 systemd[1]: Starting ignition-quench.service... Feb 9 09:45:44.372364 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 09:45:44.372441 systemd[1]: Finished ignition-quench.service. Feb 9 09:45:44.381005 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 09:45:44.382190 systemd[1]: Reached target ignition-complete.target. Feb 9 09:45:44.386101 systemd[1]: Starting initrd-parse-etc.service... Feb 9 09:45:44.397657 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 09:45:44.397745 systemd[1]: Finished initrd-parse-etc.service. 
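The op(4) through op(8) entries above show the pattern Ignition follows for each fetched artifact: GET the source URL, compare the payload against an expected sha512, and only then write the file under /sysroot. A minimal standalone sketch of that fetch-and-verify step in Python (illustrative only, not Ignition's actual implementation; the URL and digest are the kubeadm values recorded in the log above, while the output name "kubeadm" and the 1 MiB chunk size are assumptions made for the example):

import hashlib
import urllib.request

# URL and expected digest copied from the op(6) kubeadm entries in the log above.
URL = "https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubeadm"
EXPECTED_SHA512 = (
    "5a08b81f9cc82d3cce21130856ca63b8dafca9149d9775dd25b376eb0f18209a"
    "a0e4a47c0a6d7e6fb1316aacd5d59dec770f26c09120c866949d70bc415518b3"
)

def fetch_and_verify(url: str, expected_sha512: str, dest: str) -> None:
    """Download url to dest, raising if the sha512 of the payload does not match."""
    digest = hashlib.sha512()
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        for chunk in iter(lambda: resp.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
            out.write(chunk)
    if digest.hexdigest() != expected_sha512:
        raise ValueError(f"{dest}: sha512 mismatch, got {digest.hexdigest()}")

if __name__ == "__main__":
    fetch_and_verify(URL, EXPECTED_SHA512, "kubeadm")

In the boot shown here the download, verification, and write are all performed by Ignition inside the initramfs before the switch to the real root; the sketch only isolates the verification logic that the "file matches expected sum of:" lines document.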
Feb 9 09:45:44.398585 systemd[1]: Reached target initrd-fs.target. Feb 9 09:45:44.403425 systemd[1]: Reached target initrd.target. Feb 9 09:45:44.405260 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 09:45:44.405930 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 09:45:44.415638 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 09:45:44.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.416984 systemd[1]: Starting initrd-cleanup.service... Feb 9 09:45:44.419615 kernel: audit: type=1130 audit(1707471944.415:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.424172 systemd[1]: Stopped target nss-lookup.target. Feb 9 09:45:44.424974 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 09:45:44.426177 systemd[1]: Stopped target timers.target. Feb 9 09:45:44.427339 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 09:45:44.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.427434 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 09:45:44.431566 kernel: audit: type=1131 audit(1707471944.427:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.428458 systemd[1]: Stopped target initrd.target. Feb 9 09:45:44.431218 systemd[1]: Stopped target basic.target. Feb 9 09:45:44.432240 systemd[1]: Stopped target ignition-complete.target. Feb 9 09:45:44.433371 systemd[1]: Stopped target ignition-diskful.target. Feb 9 09:45:44.434421 systemd[1]: Stopped target initrd-root-device.target. Feb 9 09:45:44.435596 systemd[1]: Stopped target remote-fs.target. Feb 9 09:45:44.436696 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 09:45:44.437892 systemd[1]: Stopped target sysinit.target. Feb 9 09:45:44.438894 systemd[1]: Stopped target local-fs.target. Feb 9 09:45:44.440011 systemd[1]: Stopped target local-fs-pre.target. Feb 9 09:45:44.441077 systemd[1]: Stopped target swap.target. Feb 9 09:45:44.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.442065 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 09:45:44.446443 kernel: audit: type=1131 audit(1707471944.443:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.442165 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 09:45:44.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.443262 systemd[1]: Stopped target cryptsetup.target. 
Feb 9 09:45:44.450386 kernel: audit: type=1131 audit(1707471944.447:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.445944 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 09:45:44.446041 systemd[1]: Stopped dracut-initqueue.service. Feb 9 09:45:44.447251 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 09:45:44.447339 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 09:45:44.450086 systemd[1]: Stopped target paths.target. Feb 9 09:45:44.451037 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 09:45:44.454674 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 09:45:44.456076 systemd[1]: Stopped target slices.target. Feb 9 09:45:44.456776 systemd[1]: Stopped target sockets.target. Feb 9 09:45:44.457843 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 09:45:44.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.457942 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 09:45:44.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.459063 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 09:45:44.459146 systemd[1]: Stopped ignition-files.service. Feb 9 09:45:44.461106 systemd[1]: Stopping ignition-mount.service... Feb 9 09:45:44.463540 iscsid[750]: iscsid shutting down. Feb 9 09:45:44.462231 systemd[1]: Stopping iscsid.service... Feb 9 09:45:44.463675 systemd[1]: Stopping sysroot-boot.service... Feb 9 09:45:44.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.464771 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 09:45:44.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.464927 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 09:45:44.466081 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 09:45:44.469420 ignition[900]: INFO : Ignition 2.14.0 Feb 9 09:45:44.469420 ignition[900]: INFO : Stage: umount Feb 9 09:45:44.469420 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 09:45:44.469420 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:45:44.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.466166 systemd[1]: Stopped dracut-pre-trigger.service. 
Feb 9 09:45:44.474078 ignition[900]: INFO : umount: umount passed Feb 9 09:45:44.474078 ignition[900]: INFO : Ignition finished successfully Feb 9 09:45:44.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.468507 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 09:45:44.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.468600 systemd[1]: Stopped iscsid.service. Feb 9 09:45:44.470403 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 09:45:44.470463 systemd[1]: Closed iscsid.socket. Feb 9 09:45:44.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.472126 systemd[1]: Stopping iscsiuio.service... Feb 9 09:45:44.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.473586 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 09:45:44.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.473688 systemd[1]: Finished initrd-cleanup.service. Feb 9 09:45:44.474821 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 09:45:44.474905 systemd[1]: Stopped ignition-mount.service. Feb 9 09:45:44.476059 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 09:45:44.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.476135 systemd[1]: Stopped iscsiuio.service. Feb 9 09:45:44.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.477878 systemd[1]: Stopped target network.target. Feb 9 09:45:44.478617 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 09:45:44.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.478646 systemd[1]: Closed iscsiuio.socket. Feb 9 09:45:44.479665 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 09:45:44.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:45:44.479703 systemd[1]: Stopped ignition-disks.service. Feb 9 09:45:44.480903 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 09:45:44.480938 systemd[1]: Stopped ignition-kargs.service. Feb 9 09:45:44.496000 audit: BPF prog-id=6 op=UNLOAD Feb 9 09:45:44.482141 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 09:45:44.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.482183 systemd[1]: Stopped ignition-setup.service. Feb 9 09:45:44.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.483298 systemd[1]: Stopping systemd-networkd.service... Feb 9 09:45:44.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.484588 systemd[1]: Stopping systemd-resolved.service... Feb 9 09:45:44.486363 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 09:45:44.486796 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 09:45:44.486879 systemd[1]: Stopped sysroot-boot.service. Feb 9 09:45:44.487637 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 09:45:44.487683 systemd[1]: Stopped initrd-setup-root.service. Feb 9 09:45:44.490401 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 09:45:44.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.490482 systemd[1]: Stopped systemd-resolved.service. Feb 9 09:45:44.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.491893 systemd-networkd[743]: eth0: DHCPv6 lease lost Feb 9 09:45:44.507000 audit: BPF prog-id=9 op=UNLOAD Feb 9 09:45:44.492812 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 09:45:44.492900 systemd[1]: Stopped systemd-networkd.service. Feb 9 09:45:44.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.494004 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 09:45:44.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.494036 systemd[1]: Closed systemd-networkd.socket. Feb 9 09:45:44.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.495560 systemd[1]: Stopping network-cleanup.service... Feb 9 09:45:44.496838 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 09:45:44.496890 systemd[1]: Stopped parse-ip-for-networkd.service. 
Feb 9 09:45:44.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.497990 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:45:44.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.498028 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:45:44.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.499557 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 09:45:44.499592 systemd[1]: Stopped systemd-modules-load.service. Feb 9 09:45:44.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.500426 systemd[1]: Stopping systemd-udevd.service... Feb 9 09:45:44.502372 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 09:45:44.504926 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 09:45:44.505006 systemd[1]: Stopped network-cleanup.service. Feb 9 09:45:44.506886 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 09:45:44.506993 systemd[1]: Stopped systemd-udevd.service. Feb 9 09:45:44.508104 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 09:45:44.508135 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 09:45:44.509136 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 09:45:44.509164 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 09:45:44.510238 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 09:45:44.510275 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 09:45:44.511445 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 09:45:44.511480 systemd[1]: Stopped dracut-cmdline.service. Feb 9 09:45:44.512438 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 09:45:44.512471 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 09:45:44.514352 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 09:45:44.515404 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 09:45:44.515461 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 09:45:44.517350 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 09:45:44.517387 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 09:45:44.518148 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 09:45:44.518182 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 09:45:44.520076 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 09:45:44.520436 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Feb 9 09:45:44.520505 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 09:45:44.521438 systemd[1]: Reached target initrd-switch-root.target. Feb 9 09:45:44.523049 systemd[1]: Starting initrd-switch-root.service... Feb 9 09:45:44.528605 systemd[1]: Switching root. Feb 9 09:45:44.545736 systemd-journald[292]: Journal stopped Feb 9 09:45:46.477130 systemd-journald[292]: Received SIGTERM from PID 1 (systemd). Feb 9 09:45:46.477191 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 09:45:46.477203 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 09:45:46.477213 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 09:45:46.477223 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 09:45:46.477232 kernel: SELinux: policy capability open_perms=1 Feb 9 09:45:46.477241 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 09:45:46.477250 kernel: SELinux: policy capability always_check_network=0 Feb 9 09:45:46.477260 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 09:45:46.477271 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 09:45:46.477280 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 09:45:46.477290 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 09:45:46.477300 systemd[1]: Successfully loaded SELinux policy in 30.274ms. Feb 9 09:45:46.477323 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.926ms. Feb 9 09:45:46.477335 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:45:46.477346 systemd[1]: Detected virtualization kvm. Feb 9 09:45:46.477358 systemd[1]: Detected architecture arm64. Feb 9 09:45:46.477368 systemd[1]: Detected first boot. Feb 9 09:45:46.477378 systemd[1]: Initializing machine ID from VM UUID. Feb 9 09:45:46.477388 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 09:45:46.477398 systemd[1]: Populated /etc with preset unit settings. Feb 9 09:45:46.477408 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:45:46.477419 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:45:46.477431 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:45:46.477444 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 09:45:46.477454 systemd[1]: Stopped initrd-switch-root.service. Feb 9 09:45:46.477464 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 09:45:46.477474 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 09:45:46.477485 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 09:45:46.477495 systemd[1]: Created slice system-getty.slice. Feb 9 09:45:46.477505 systemd[1]: Created slice system-modprobe.slice. Feb 9 09:45:46.477518 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Feb 9 09:45:46.477530 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 09:45:46.477540 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 09:45:46.477556 systemd[1]: Created slice user.slice. Feb 9 09:45:46.477566 systemd[1]: Started systemd-ask-password-console.path. Feb 9 09:45:46.477579 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 09:45:46.477590 systemd[1]: Set up automount boot.automount. Feb 9 09:45:46.477602 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 09:45:46.477613 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 09:45:46.477623 systemd[1]: Stopped target initrd-fs.target. Feb 9 09:45:46.477638 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 09:45:46.477667 systemd[1]: Reached target integritysetup.target. Feb 9 09:45:46.477679 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 09:45:46.477690 systemd[1]: Reached target remote-fs.target. Feb 9 09:45:46.477701 systemd[1]: Reached target slices.target. Feb 9 09:45:46.477712 systemd[1]: Reached target swap.target. Feb 9 09:45:46.477722 systemd[1]: Reached target torcx.target. Feb 9 09:45:46.477732 systemd[1]: Reached target veritysetup.target. Feb 9 09:45:46.477743 systemd[1]: Listening on systemd-coredump.socket. Feb 9 09:45:46.477754 systemd[1]: Listening on systemd-initctl.socket. Feb 9 09:45:46.477764 systemd[1]: Listening on systemd-networkd.socket. Feb 9 09:45:46.477775 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 09:45:46.477785 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 09:45:46.477796 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 09:45:46.477806 systemd[1]: Mounting dev-hugepages.mount... Feb 9 09:45:46.477816 systemd[1]: Mounting dev-mqueue.mount... Feb 9 09:45:46.477832 systemd[1]: Mounting media.mount... Feb 9 09:45:46.477844 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 09:45:46.477855 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 09:45:46.477865 systemd[1]: Mounting tmp.mount... Feb 9 09:45:46.477875 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 09:45:46.477886 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 09:45:46.477900 systemd[1]: Starting kmod-static-nodes.service... Feb 9 09:45:46.477911 systemd[1]: Starting modprobe@configfs.service... Feb 9 09:45:46.477921 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 09:45:46.477930 systemd[1]: Starting modprobe@drm.service... Feb 9 09:45:46.477942 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 09:45:46.477952 systemd[1]: Starting modprobe@fuse.service... Feb 9 09:45:46.477962 systemd[1]: Starting modprobe@loop.service... Feb 9 09:45:46.477973 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 09:45:46.477984 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 09:45:46.477994 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 09:45:46.478004 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 09:45:46.478014 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 09:45:46.478025 systemd[1]: Stopped systemd-journald.service. Feb 9 09:45:46.478037 kernel: fuse: init (API version 7.34) Feb 9 09:45:46.478046 kernel: loop: module loaded Feb 9 09:45:46.478057 systemd[1]: Starting systemd-journald.service... Feb 9 09:45:46.478069 systemd[1]: Starting systemd-modules-load.service... 
Feb 9 09:45:46.478079 systemd[1]: Starting systemd-network-generator.service... Feb 9 09:45:46.478090 systemd[1]: Starting systemd-remount-fs.service... Feb 9 09:45:46.478100 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 09:45:46.478110 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 09:45:46.478121 systemd[1]: Stopped verity-setup.service. Feb 9 09:45:46.478131 systemd[1]: Mounted dev-hugepages.mount. Feb 9 09:45:46.478141 systemd[1]: Mounted dev-mqueue.mount. Feb 9 09:45:46.478151 systemd[1]: Mounted media.mount. Feb 9 09:45:46.478162 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 09:45:46.478172 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 09:45:46.478182 systemd[1]: Mounted tmp.mount. Feb 9 09:45:46.478192 systemd[1]: Finished kmod-static-nodes.service. Feb 9 09:45:46.478203 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 09:45:46.478213 systemd[1]: Finished modprobe@configfs.service. Feb 9 09:45:46.478223 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 09:45:46.478233 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 09:45:46.478246 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 09:45:46.478258 systemd[1]: Finished modprobe@drm.service. Feb 9 09:45:46.478271 systemd-journald[1000]: Journal started Feb 9 09:45:46.478310 systemd-journald[1000]: Runtime Journal (/run/log/journal/0bcf67f734a041369c662fe4ac3f7841) is 6.0M, max 48.7M, 42.6M free. Feb 9 09:45:44.610000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 09:45:44.648000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:45:44.648000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:45:44.648000 audit: BPF prog-id=10 op=LOAD Feb 9 09:45:44.648000 audit: BPF prog-id=10 op=UNLOAD Feb 9 09:45:44.648000 audit: BPF prog-id=11 op=LOAD Feb 9 09:45:44.648000 audit: BPF prog-id=11 op=UNLOAD Feb 9 09:45:44.692000 audit[933]: AVC avc: denied { associate } for pid=933 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 09:45:44.692000 audit[933]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=4000022542 a1=4000028528 a2=4000026a00 a3=32 items=0 ppid=916 pid=933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:45:44.692000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:45:44.693000 audit[933]: AVC avc: denied { associate } for pid=933 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 09:45:44.693000 audit[933]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000022619 a2=1ed a3=0 items=2 ppid=916 
pid=933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:45:44.693000 audit: CWD cwd="/" Feb 9 09:45:44.693000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:45:44.693000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:45:44.693000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:45:46.354000 audit: BPF prog-id=12 op=LOAD Feb 9 09:45:46.354000 audit: BPF prog-id=3 op=UNLOAD Feb 9 09:45:46.354000 audit: BPF prog-id=13 op=LOAD Feb 9 09:45:46.354000 audit: BPF prog-id=14 op=LOAD Feb 9 09:45:46.354000 audit: BPF prog-id=4 op=UNLOAD Feb 9 09:45:46.354000 audit: BPF prog-id=5 op=UNLOAD Feb 9 09:45:46.355000 audit: BPF prog-id=15 op=LOAD Feb 9 09:45:46.355000 audit: BPF prog-id=12 op=UNLOAD Feb 9 09:45:46.355000 audit: BPF prog-id=16 op=LOAD Feb 9 09:45:46.355000 audit: BPF prog-id=17 op=LOAD Feb 9 09:45:46.355000 audit: BPF prog-id=13 op=UNLOAD Feb 9 09:45:46.355000 audit: BPF prog-id=14 op=UNLOAD Feb 9 09:45:46.356000 audit: BPF prog-id=18 op=LOAD Feb 9 09:45:46.356000 audit: BPF prog-id=15 op=UNLOAD Feb 9 09:45:46.356000 audit: BPF prog-id=19 op=LOAD Feb 9 09:45:46.356000 audit: BPF prog-id=20 op=LOAD Feb 9 09:45:46.356000 audit: BPF prog-id=16 op=UNLOAD Feb 9 09:45:46.356000 audit: BPF prog-id=17 op=UNLOAD Feb 9 09:45:46.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.370000 audit: BPF prog-id=18 op=UNLOAD Feb 9 09:45:46.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:45:46.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.446000 audit: BPF prog-id=21 op=LOAD Feb 9 09:45:46.446000 audit: BPF prog-id=22 op=LOAD Feb 9 09:45:46.446000 audit: BPF prog-id=23 op=LOAD Feb 9 09:45:46.446000 audit: BPF prog-id=19 op=UNLOAD Feb 9 09:45:46.446000 audit: BPF prog-id=20 op=UNLOAD Feb 9 09:45:46.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.475000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 09:45:46.475000 audit[1000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffce61f790 a2=4000 a3=1 items=0 ppid=1 pid=1000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:45:46.475000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 09:45:46.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.690924 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:45:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:45:46.353883 systemd[1]: Queued start job for default target multi-user.target. 
Feb 9 09:45:44.691420 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:45:44Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 09:45:46.353895 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 09:45:44.691438 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:45:44Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 09:45:46.357370 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 09:45:44.691468 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:45:44Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 09:45:44.691478 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:45:44Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 09:45:44.691506 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:45:44Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 09:45:46.480176 systemd[1]: Started systemd-journald.service. Feb 9 09:45:44.691518 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:45:44Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 09:45:44.691729 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:45:44Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 09:45:44.691979 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:45:44Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 09:45:44.691997 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:45:44Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 09:45:46.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:45:44.692422 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:45:44Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 09:45:44.692453 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:45:44Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 09:45:44.692471 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:45:44Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 09:45:44.692486 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:45:44Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 09:45:44.692504 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:45:44Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 09:45:44.692516 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:45:44Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 09:45:46.114597 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:45:46Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:45:46.480920 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 09:45:46.114888 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:45:46Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:45:46.114992 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:45:46Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:45:46.481090 systemd[1]: Finished modprobe@efi_pstore.service. 
Feb 9 09:45:46.115141 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:45:46Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:45:46.115188 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:45:46Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 09:45:46.115245 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2024-02-09T09:45:46Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 09:45:46.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.482291 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 09:45:46.482442 systemd[1]: Finished modprobe@fuse.service. Feb 9 09:45:46.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.483558 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 09:45:46.483725 systemd[1]: Finished modprobe@loop.service. Feb 9 09:45:46.484795 systemd[1]: Finished systemd-modules-load.service. Feb 9 09:45:46.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.485914 systemd[1]: Finished systemd-network-generator.service. Feb 9 09:45:46.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.487081 systemd[1]: Finished flatcar-tmpfiles.service. 
Feb 9 09:45:46.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.488268 systemd[1]: Finished systemd-remount-fs.service. Feb 9 09:45:46.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.489517 systemd[1]: Reached target network-pre.target. Feb 9 09:45:46.491504 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 09:45:46.493511 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 09:45:46.494472 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 09:45:46.496153 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 09:45:46.498161 systemd[1]: Starting systemd-journal-flush.service... Feb 9 09:45:46.498921 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 09:45:46.499994 systemd[1]: Starting systemd-random-seed.service... Feb 9 09:45:46.500677 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 09:45:46.501768 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:45:46.502468 systemd-journald[1000]: Time spent on flushing to /var/log/journal/0bcf67f734a041369c662fe4ac3f7841 is 13.358ms for 1033 entries. Feb 9 09:45:46.502468 systemd-journald[1000]: System Journal (/var/log/journal/0bcf67f734a041369c662fe4ac3f7841) is 8.0M, max 195.6M, 187.6M free. Feb 9 09:45:46.527667 systemd-journald[1000]: Received client request to flush runtime journal. Feb 9 09:45:46.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.503631 systemd[1]: Starting systemd-sysusers.service... Feb 9 09:45:46.507753 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 09:45:46.508876 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 09:45:46.516320 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:45:46.517425 systemd[1]: Finished systemd-random-seed.service. Feb 9 09:45:46.518337 systemd[1]: Reached target first-boot-complete.target. Feb 9 09:45:46.519410 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 09:45:46.521442 systemd[1]: Starting systemd-udev-settle.service... Feb 9 09:45:46.528605 systemd[1]: Finished systemd-journal-flush.service. Feb 9 09:45:46.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.531802 udevadm[1035]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 09:45:46.541641 systemd[1]: Finished systemd-sysusers.service. Feb 9 09:45:46.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.543680 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 09:45:46.558274 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 09:45:46.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.864713 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 09:45:46.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.865000 audit: BPF prog-id=24 op=LOAD Feb 9 09:45:46.865000 audit: BPF prog-id=25 op=LOAD Feb 9 09:45:46.865000 audit: BPF prog-id=7 op=UNLOAD Feb 9 09:45:46.865000 audit: BPF prog-id=8 op=UNLOAD Feb 9 09:45:46.866908 systemd[1]: Starting systemd-udevd.service... Feb 9 09:45:46.891466 systemd-udevd[1039]: Using default interface naming scheme 'v252'. Feb 9 09:45:46.907025 systemd[1]: Started systemd-udevd.service. Feb 9 09:45:46.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.910000 audit: BPF prog-id=26 op=LOAD Feb 9 09:45:46.913941 systemd[1]: Starting systemd-networkd.service... Feb 9 09:45:46.932000 audit: BPF prog-id=27 op=LOAD Feb 9 09:45:46.932000 audit: BPF prog-id=28 op=LOAD Feb 9 09:45:46.932000 audit: BPF prog-id=29 op=LOAD Feb 9 09:45:46.934034 systemd[1]: Starting systemd-userdbd.service... Feb 9 09:45:46.935379 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Feb 9 09:45:46.968314 systemd[1]: Started systemd-userdbd.service. Feb 9 09:45:46.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:46.972743 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 09:45:47.005017 systemd[1]: Finished systemd-udev-settle.service. Feb 9 09:45:47.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:47.006773 systemd[1]: Starting lvm2-activation-early.service... Feb 9 09:45:47.020521 lvm[1072]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:45:47.023804 systemd-networkd[1055]: lo: Link UP Feb 9 09:45:47.023816 systemd-networkd[1055]: lo: Gained carrier Feb 9 09:45:47.026043 systemd-networkd[1055]: Enumeration completed Feb 9 09:45:47.026152 systemd[1]: Started systemd-networkd.service. Feb 9 09:45:47.026158 systemd-networkd[1055]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 9 09:45:47.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:47.027459 systemd-networkd[1055]: eth0: Link UP Feb 9 09:45:47.027469 systemd-networkd[1055]: eth0: Gained carrier Feb 9 09:45:47.052770 systemd-networkd[1055]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 09:45:47.053433 systemd[1]: Finished lvm2-activation-early.service. Feb 9 09:45:47.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:47.054229 systemd[1]: Reached target cryptsetup.target. Feb 9 09:45:47.055954 systemd[1]: Starting lvm2-activation.service... Feb 9 09:45:47.059275 lvm[1073]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:45:47.087538 systemd[1]: Finished lvm2-activation.service. Feb 9 09:45:47.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:47.088307 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:45:47.088950 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 09:45:47.088978 systemd[1]: Reached target local-fs.target. Feb 9 09:45:47.089519 systemd[1]: Reached target machines.target. Feb 9 09:45:47.091283 systemd[1]: Starting ldconfig.service... Feb 9 09:45:47.092182 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 09:45:47.092235 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:45:47.093291 systemd[1]: Starting systemd-boot-update.service... Feb 9 09:45:47.095082 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 09:45:47.097865 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 09:45:47.098730 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:45:47.098798 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:45:47.099914 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 09:45:47.102331 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1075 (bootctl) Feb 9 09:45:47.103614 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 09:45:47.108436 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 09:45:47.110442 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 09:45:47.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:47.114325 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
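The networkd entries above show eth0 matched by /usr/lib/systemd/network/zz-default.network and acquiring 10.0.0.20/16 over DHCPv4. As a rough sketch only (not a copy of the unit Flatcar ships), a DHCP .network unit of the kind systemd-networkd consumes could be dropped in as below; the interface name, file name, and destination path are assumptions for the example.

```python
# Hedged sketch: write a minimal systemd-networkd unit requesting DHCPv4 on
# eth0, similar in spirit to the zz-default.network match seen in the log.
# The file name, path, and interface name are illustrative assumptions.
from pathlib import Path

NETWORK_UNIT = """\
[Match]
Name=eth0

[Network]
DHCP=yes
"""

def write_dhcp_unit(dest: Path = Path("/etc/systemd/network/10-dhcp-eth0.network")) -> None:
    """Write a DHCP .network unit where systemd-networkd looks for local configuration."""
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_text(NETWORK_UNIT)

if __name__ == "__main__":
    write_dhcp_unit()  # restarting systemd-networkd applies the new unit
```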
Feb 9 09:45:47.116120 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 09:45:47.244655 systemd-fsck[1083]: fsck.fat 4.2 (2021-01-31) Feb 9 09:45:47.244655 systemd-fsck[1083]: /dev/vda1: 236 files, 113719/258078 clusters Feb 9 09:45:47.246355 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 09:45:47.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:47.248773 systemd[1]: Mounting boot.mount... Feb 9 09:45:47.268622 systemd[1]: Mounted boot.mount. Feb 9 09:45:47.277682 systemd[1]: Finished systemd-boot-update.service. Feb 9 09:45:47.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:47.285732 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 09:45:47.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:47.325338 ldconfig[1074]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 09:45:47.329182 systemd[1]: Finished ldconfig.service. Feb 9 09:45:47.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:47.336038 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 09:45:47.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:47.337981 systemd[1]: Starting audit-rules.service... Feb 9 09:45:47.339561 systemd[1]: Starting clean-ca-certificates.service... Feb 9 09:45:47.341327 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 09:45:47.342000 audit: BPF prog-id=30 op=LOAD Feb 9 09:45:47.343558 systemd[1]: Starting systemd-resolved.service... Feb 9 09:45:47.344000 audit: BPF prog-id=31 op=LOAD Feb 9 09:45:47.345914 systemd[1]: Starting systemd-timesyncd.service... Feb 9 09:45:47.347677 systemd[1]: Starting systemd-update-utmp.service... Feb 9 09:45:47.348840 systemd[1]: Finished clean-ca-certificates.service. Feb 9 09:45:47.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:47.350022 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 09:45:47.351000 audit[1093]: SYSTEM_BOOT pid=1093 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 09:45:47.356186 systemd[1]: Finished systemd-update-utmp.service. 
Feb 9 09:45:47.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:47.360454 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 09:45:47.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:47.362471 systemd[1]: Starting systemd-update-done.service... Feb 9 09:45:47.368218 systemd[1]: Finished systemd-update-done.service. Feb 9 09:45:47.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:47.385000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:45:47.385000 audit[1108]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffd127210 a2=420 a3=0 items=0 ppid=1087 pid=1108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:45:47.385000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 09:45:47.386473 augenrules[1108]: No rules Feb 9 09:45:47.386987 systemd[1]: Finished audit-rules.service. Feb 9 09:45:47.393696 systemd[1]: Started systemd-timesyncd.service. Feb 9 09:45:47.394562 systemd[1]: Reached target time-set.target. Feb 9 09:45:47.395324 systemd-timesyncd[1092]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 9 09:45:47.395375 systemd-timesyncd[1092]: Initial clock synchronization to Fri 2024-02-09 09:45:47.225079 UTC. Feb 9 09:45:47.396041 systemd-resolved[1091]: Positive Trust Anchors: Feb 9 09:45:47.396249 systemd-resolved[1091]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:45:47.396324 systemd-resolved[1091]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:45:47.407367 systemd-resolved[1091]: Defaulting to hostname 'linux'. Feb 9 09:45:47.408817 systemd[1]: Started systemd-resolved.service. Feb 9 09:45:47.409468 systemd[1]: Reached target network.target. Feb 9 09:45:47.410042 systemd[1]: Reached target nss-lookup.target. Feb 9 09:45:47.410633 systemd[1]: Reached target sysinit.target. Feb 9 09:45:47.411240 systemd[1]: Started motdgen.path. Feb 9 09:45:47.411768 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 09:45:47.412633 systemd[1]: Started logrotate.timer. Feb 9 09:45:47.413344 systemd[1]: Started mdadm.timer. Feb 9 09:45:47.413957 systemd[1]: Started systemd-tmpfiles-clean.timer. 
Feb 9 09:45:47.414753 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 09:45:47.414789 systemd[1]: Reached target paths.target. Feb 9 09:45:47.415436 systemd[1]: Reached target timers.target. Feb 9 09:45:47.416333 systemd[1]: Listening on dbus.socket. Feb 9 09:45:47.417840 systemd[1]: Starting docker.socket... Feb 9 09:45:47.420532 systemd[1]: Listening on sshd.socket. Feb 9 09:45:47.421276 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:45:47.421664 systemd[1]: Listening on docker.socket. Feb 9 09:45:47.422358 systemd[1]: Reached target sockets.target. Feb 9 09:45:47.423038 systemd[1]: Reached target basic.target. Feb 9 09:45:47.423712 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:45:47.423740 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:45:47.424602 systemd[1]: Starting containerd.service... Feb 9 09:45:47.426028 systemd[1]: Starting dbus.service... Feb 9 09:45:47.427785 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 09:45:47.429902 systemd[1]: Starting extend-filesystems.service... Feb 9 09:45:47.430724 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 09:45:47.431891 systemd[1]: Starting motdgen.service... Feb 9 09:45:47.434607 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 09:45:47.437098 systemd[1]: Starting prepare-critools.service... Feb 9 09:45:47.438780 systemd[1]: Starting prepare-helm.service... Feb 9 09:45:47.440376 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 09:45:47.440865 jq[1118]: false Feb 9 09:45:47.442560 systemd[1]: Starting sshd-keygen.service... Feb 9 09:45:47.445736 systemd[1]: Starting systemd-logind.service... Feb 9 09:45:47.446559 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:45:47.446622 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 09:45:47.447021 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 09:45:47.447586 systemd[1]: Starting update-engine.service... Feb 9 09:45:47.449559 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 09:45:47.451708 dbus-daemon[1117]: [system] SELinux support is enabled Feb 9 09:45:47.452538 jq[1138]: true Feb 9 09:45:47.453389 systemd[1]: Started dbus.service. Feb 9 09:45:47.456467 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 09:45:47.456612 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 09:45:47.459977 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 09:45:47.460171 systemd[1]: Finished motdgen.service. Feb 9 09:45:47.462995 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 09:45:47.463787 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 09:45:47.463928 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Feb 9 09:45:47.470343 extend-filesystems[1119]: Found vda Feb 9 09:45:47.470343 extend-filesystems[1119]: Found vda1 Feb 9 09:45:47.470343 extend-filesystems[1119]: Found vda2 Feb 9 09:45:47.470343 extend-filesystems[1119]: Found vda3 Feb 9 09:45:47.473186 extend-filesystems[1119]: Found usr Feb 9 09:45:47.473186 extend-filesystems[1119]: Found vda4 Feb 9 09:45:47.473186 extend-filesystems[1119]: Found vda6 Feb 9 09:45:47.473186 extend-filesystems[1119]: Found vda7 Feb 9 09:45:47.473186 extend-filesystems[1119]: Found vda9 Feb 9 09:45:47.473186 extend-filesystems[1119]: Checking size of /dev/vda9 Feb 9 09:45:47.472772 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 09:45:47.478250 jq[1144]: true Feb 9 09:45:47.472801 systemd[1]: Reached target system-config.target. Feb 9 09:45:47.475952 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 09:45:47.475968 systemd[1]: Reached target user-config.target. Feb 9 09:45:47.479613 tar[1143]: linux-arm64/helm Feb 9 09:45:47.482209 tar[1140]: ./ Feb 9 09:45:47.482209 tar[1140]: ./loopback Feb 9 09:45:47.482607 tar[1142]: crictl Feb 9 09:45:47.513763 extend-filesystems[1119]: Resized partition /dev/vda9 Feb 9 09:45:47.518038 extend-filesystems[1168]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 09:45:47.527123 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 9 09:45:47.550501 systemd-logind[1134]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 09:45:47.553959 systemd-logind[1134]: New seat seat0. Feb 9 09:45:47.556611 systemd[1]: Started systemd-logind.service. Feb 9 09:45:47.560512 tar[1140]: ./bandwidth Feb 9 09:45:47.561665 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 9 09:45:47.566567 update_engine[1137]: I0209 09:45:47.566357 1137 main.cc:92] Flatcar Update Engine starting Feb 9 09:45:47.570466 systemd[1]: Started update-engine.service. Feb 9 09:45:47.571832 update_engine[1137]: I0209 09:45:47.571799 1137 update_check_scheduler.cc:74] Next update check in 9m47s Feb 9 09:45:47.573185 systemd[1]: Started locksmithd.service. Feb 9 09:45:47.574603 extend-filesystems[1168]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 09:45:47.574603 extend-filesystems[1168]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 09:45:47.574603 extend-filesystems[1168]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 9 09:45:47.578582 extend-filesystems[1119]: Resized filesystem in /dev/vda9 Feb 9 09:45:47.575344 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 09:45:47.575511 systemd[1]: Finished extend-filesystems.service. Feb 9 09:45:47.581543 bash[1165]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:45:47.586273 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 09:45:47.595854 env[1147]: time="2024-02-09T09:45:47.595801920Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 09:45:47.614809 env[1147]: time="2024-02-09T09:45:47.614774040Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 09:45:47.614935 env[1147]: time="2024-02-09T09:45:47.614917000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 9 09:45:47.615685 tar[1140]: ./ptp Feb 9 09:45:47.616727 env[1147]: time="2024-02-09T09:45:47.616695920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:45:47.616727 env[1147]: time="2024-02-09T09:45:47.616724200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:45:47.616928 env[1147]: time="2024-02-09T09:45:47.616906440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:45:47.616972 env[1147]: time="2024-02-09T09:45:47.616927560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 09:45:47.616972 env[1147]: time="2024-02-09T09:45:47.616940360Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 09:45:47.616972 env[1147]: time="2024-02-09T09:45:47.616950120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 09:45:47.617035 env[1147]: time="2024-02-09T09:45:47.617020960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:45:47.617293 env[1147]: time="2024-02-09T09:45:47.617275680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:45:47.617421 env[1147]: time="2024-02-09T09:45:47.617400880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:45:47.617421 env[1147]: time="2024-02-09T09:45:47.617419080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 09:45:47.617484 env[1147]: time="2024-02-09T09:45:47.617468360Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 09:45:47.617484 env[1147]: time="2024-02-09T09:45:47.617481760Z" level=info msg="metadata content store policy set" policy=shared Feb 9 09:45:47.625185 env[1147]: time="2024-02-09T09:45:47.625124400Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 09:45:47.625185 env[1147]: time="2024-02-09T09:45:47.625161280Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 09:45:47.625185 env[1147]: time="2024-02-09T09:45:47.625179080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 09:45:47.625275 env[1147]: time="2024-02-09T09:45:47.625208680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 09:45:47.625275 env[1147]: time="2024-02-09T09:45:47.625223240Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Feb 9 09:45:47.625275 env[1147]: time="2024-02-09T09:45:47.625235960Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 09:45:47.625275 env[1147]: time="2024-02-09T09:45:47.625249160Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 09:45:47.625585 env[1147]: time="2024-02-09T09:45:47.625568080Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 09:45:47.625617 env[1147]: time="2024-02-09T09:45:47.625587480Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 09:45:47.625617 env[1147]: time="2024-02-09T09:45:47.625600360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 09:45:47.625617 env[1147]: time="2024-02-09T09:45:47.625612600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 09:45:47.625682 env[1147]: time="2024-02-09T09:45:47.625624160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 09:45:47.625753 env[1147]: time="2024-02-09T09:45:47.625735600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 09:45:47.625833 env[1147]: time="2024-02-09T09:45:47.625811080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 09:45:47.626034 env[1147]: time="2024-02-09T09:45:47.626018160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 09:45:47.626064 env[1147]: time="2024-02-09T09:45:47.626045200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 09:45:47.626064 env[1147]: time="2024-02-09T09:45:47.626059000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 09:45:47.626170 env[1147]: time="2024-02-09T09:45:47.626156280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 09:45:47.626197 env[1147]: time="2024-02-09T09:45:47.626171320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 09:45:47.626197 env[1147]: time="2024-02-09T09:45:47.626183000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 09:45:47.626197 env[1147]: time="2024-02-09T09:45:47.626193720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 09:45:47.626256 env[1147]: time="2024-02-09T09:45:47.626204840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 09:45:47.626256 env[1147]: time="2024-02-09T09:45:47.626216080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 09:45:47.626256 env[1147]: time="2024-02-09T09:45:47.626226440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 09:45:47.626256 env[1147]: time="2024-02-09T09:45:47.626237680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Feb 9 09:45:47.626256 env[1147]: time="2024-02-09T09:45:47.626249560Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 09:45:47.626371 env[1147]: time="2024-02-09T09:45:47.626353800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 09:45:47.626400 env[1147]: time="2024-02-09T09:45:47.626374040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 09:45:47.626400 env[1147]: time="2024-02-09T09:45:47.626387480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 09:45:47.626439 env[1147]: time="2024-02-09T09:45:47.626398440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 09:45:47.626439 env[1147]: time="2024-02-09T09:45:47.626411480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 09:45:47.626439 env[1147]: time="2024-02-09T09:45:47.626422520Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 09:45:47.626496 env[1147]: time="2024-02-09T09:45:47.626437760Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 09:45:47.626496 env[1147]: time="2024-02-09T09:45:47.626473400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 09:45:47.626712 env[1147]: time="2024-02-09T09:45:47.626669280Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false 
EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 09:45:47.627322 env[1147]: time="2024-02-09T09:45:47.626720880Z" level=info msg="Connect containerd service" Feb 9 09:45:47.627322 env[1147]: time="2024-02-09T09:45:47.626752120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 09:45:47.627438 env[1147]: time="2024-02-09T09:45:47.627410800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:45:47.627706 env[1147]: time="2024-02-09T09:45:47.627689320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 09:45:47.627741 env[1147]: time="2024-02-09T09:45:47.627729760Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 09:45:47.627785 env[1147]: time="2024-02-09T09:45:47.627773400Z" level=info msg="containerd successfully booted in 0.032898s" Feb 9 09:45:47.627864 systemd[1]: Started containerd.service. Feb 9 09:45:47.634550 env[1147]: time="2024-02-09T09:45:47.634519560Z" level=info msg="Start subscribing containerd event" Feb 9 09:45:47.634603 env[1147]: time="2024-02-09T09:45:47.634562640Z" level=info msg="Start recovering state" Feb 9 09:45:47.634624 env[1147]: time="2024-02-09T09:45:47.634617320Z" level=info msg="Start event monitor" Feb 9 09:45:47.634643 env[1147]: time="2024-02-09T09:45:47.634633640Z" level=info msg="Start snapshots syncer" Feb 9 09:45:47.634677 env[1147]: time="2024-02-09T09:45:47.634644200Z" level=info msg="Start cni network conf syncer for default" Feb 9 09:45:47.634677 env[1147]: time="2024-02-09T09:45:47.634673160Z" level=info msg="Start streaming server" Feb 9 09:45:47.659411 tar[1140]: ./vlan Feb 9 09:45:47.704817 tar[1140]: ./host-device Feb 9 09:45:47.749815 tar[1140]: ./tuning Feb 9 09:45:47.791313 tar[1140]: ./vrf Feb 9 09:45:47.835203 tar[1140]: ./sbr Feb 9 09:45:47.884853 tar[1140]: ./tap Feb 9 09:45:47.929620 tar[1140]: ./dhcp Feb 9 09:45:47.954021 tar[1143]: linux-arm64/LICENSE Feb 9 09:45:47.954106 tar[1143]: linux-arm64/README.md Feb 9 09:45:47.959086 systemd[1]: Finished prepare-helm.service. Feb 9 09:45:47.965204 systemd[1]: Finished prepare-critools.service. Feb 9 09:45:47.973003 locksmithd[1177]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 09:45:48.017258 tar[1140]: ./static Feb 9 09:45:48.037725 tar[1140]: ./firewall Feb 9 09:45:48.068723 tar[1140]: ./macvlan Feb 9 09:45:48.096927 tar[1140]: ./dummy Feb 9 09:45:48.124655 tar[1140]: ./bridge Feb 9 09:45:48.154914 tar[1140]: ./ipvlan Feb 9 09:45:48.182632 tar[1140]: ./portmap Feb 9 09:45:48.209054 tar[1140]: ./host-local Feb 9 09:45:48.241069 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 09:45:48.918355 sshd_keygen[1141]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 09:45:48.935591 systemd[1]: Finished sshd-keygen.service. Feb 9 09:45:48.937787 systemd[1]: Starting issuegen.service... Feb 9 09:45:48.941770 systemd-networkd[1055]: eth0: Gained IPv6LL Feb 9 09:45:48.941941 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 09:45:48.942076 systemd[1]: Finished issuegen.service. 
Feb 9 09:45:48.944089 systemd[1]: Starting systemd-user-sessions.service... Feb 9 09:45:48.949579 systemd[1]: Finished systemd-user-sessions.service. Feb 9 09:45:48.951727 systemd[1]: Started getty@tty1.service. Feb 9 09:45:48.953520 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 09:45:48.954494 systemd[1]: Reached target getty.target. Feb 9 09:45:48.955278 systemd[1]: Reached target multi-user.target. Feb 9 09:45:48.957084 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 09:45:48.963140 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 09:45:48.963284 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 09:45:48.964270 systemd[1]: Startup finished in 564ms (kernel) + 19.998s (initrd) + 4.387s (userspace) = 24.949s. Feb 9 09:45:56.967855 systemd[1]: Created slice system-sshd.slice. Feb 9 09:45:56.968882 systemd[1]: Started sshd@0-10.0.0.20:22-10.0.0.1:39506.service. Feb 9 09:45:57.011704 sshd[1209]: Accepted publickey for core from 10.0.0.1 port 39506 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:45:57.013431 sshd[1209]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:45:57.021558 systemd-logind[1134]: New session 1 of user core. Feb 9 09:45:57.022363 systemd[1]: Created slice user-500.slice. Feb 9 09:45:57.023341 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 09:45:57.030204 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 09:45:57.031386 systemd[1]: Starting user@500.service... Feb 9 09:45:57.033575 (systemd)[1212]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:45:57.086486 systemd[1212]: Queued start job for default target default.target. Feb 9 09:45:57.086907 systemd[1212]: Reached target paths.target. Feb 9 09:45:57.086925 systemd[1212]: Reached target sockets.target. Feb 9 09:45:57.086939 systemd[1212]: Reached target timers.target. Feb 9 09:45:57.086949 systemd[1212]: Reached target basic.target. Feb 9 09:45:57.086996 systemd[1212]: Reached target default.target. Feb 9 09:45:57.087018 systemd[1212]: Startup finished in 49ms. Feb 9 09:45:57.087050 systemd[1]: Started user@500.service. Feb 9 09:45:57.087883 systemd[1]: Started session-1.scope. Feb 9 09:45:57.136241 systemd[1]: Started sshd@1-10.0.0.20:22-10.0.0.1:39518.service. Feb 9 09:45:57.176731 sshd[1221]: Accepted publickey for core from 10.0.0.1 port 39518 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:45:57.177829 sshd[1221]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:45:57.180877 systemd-logind[1134]: New session 2 of user core. Feb 9 09:45:57.181622 systemd[1]: Started session-2.scope. Feb 9 09:45:57.236230 sshd[1221]: pam_unix(sshd:session): session closed for user core Feb 9 09:45:57.238337 systemd[1]: sshd@1-10.0.0.20:22-10.0.0.1:39518.service: Deactivated successfully. Feb 9 09:45:57.238916 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 09:45:57.239337 systemd-logind[1134]: Session 2 logged out. Waiting for processes to exit. Feb 9 09:45:57.240580 systemd[1]: Started sshd@2-10.0.0.20:22-10.0.0.1:39520.service. Feb 9 09:45:57.241154 systemd-logind[1134]: Removed session 2. 
Feb 9 09:45:57.274293 sshd[1227]: Accepted publickey for core from 10.0.0.1 port 39520 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:45:57.275431 sshd[1227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:45:57.278049 systemd-logind[1134]: New session 3 of user core. Feb 9 09:45:57.278780 systemd[1]: Started session-3.scope. Feb 9 09:45:57.326337 sshd[1227]: pam_unix(sshd:session): session closed for user core Feb 9 09:45:57.329935 systemd[1]: sshd@2-10.0.0.20:22-10.0.0.1:39520.service: Deactivated successfully. Feb 9 09:45:57.330478 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 09:45:57.330986 systemd-logind[1134]: Session 3 logged out. Waiting for processes to exit. Feb 9 09:45:57.332029 systemd[1]: Started sshd@3-10.0.0.20:22-10.0.0.1:39526.service. Feb 9 09:45:57.332585 systemd-logind[1134]: Removed session 3. Feb 9 09:45:57.366532 sshd[1233]: Accepted publickey for core from 10.0.0.1 port 39526 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:45:57.367616 sshd[1233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:45:57.370728 systemd-logind[1134]: New session 4 of user core. Feb 9 09:45:57.371491 systemd[1]: Started session-4.scope. Feb 9 09:45:57.424485 sshd[1233]: pam_unix(sshd:session): session closed for user core Feb 9 09:45:57.427890 systemd[1]: sshd@3-10.0.0.20:22-10.0.0.1:39526.service: Deactivated successfully. Feb 9 09:45:57.428427 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 09:45:57.428897 systemd-logind[1134]: Session 4 logged out. Waiting for processes to exit. Feb 9 09:45:57.429890 systemd[1]: Started sshd@4-10.0.0.20:22-10.0.0.1:39532.service. Feb 9 09:45:57.430448 systemd-logind[1134]: Removed session 4. Feb 9 09:45:57.464042 sshd[1239]: Accepted publickey for core from 10.0.0.1 port 39532 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:45:57.465035 sshd[1239]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:45:57.467927 systemd-logind[1134]: New session 5 of user core. Feb 9 09:45:57.468636 systemd[1]: Started session-5.scope. Feb 9 09:45:57.523844 sudo[1242]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 09:45:57.524026 sudo[1242]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:45:58.343160 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:45:58.348335 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:45:58.348614 systemd[1]: Reached target network-online.target. Feb 9 09:45:58.349876 systemd[1]: Starting docker.service... 
Feb 9 09:45:58.429032 env[1259]: time="2024-02-09T09:45:58.428981254Z" level=info msg="Starting up" Feb 9 09:45:58.430513 env[1259]: time="2024-02-09T09:45:58.430413116Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:45:58.430513 env[1259]: time="2024-02-09T09:45:58.430506908Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:45:58.430582 env[1259]: time="2024-02-09T09:45:58.430530177Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:45:58.430582 env[1259]: time="2024-02-09T09:45:58.430540399Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:45:58.432424 env[1259]: time="2024-02-09T09:45:58.432399258Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:45:58.432424 env[1259]: time="2024-02-09T09:45:58.432421334Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:45:58.432520 env[1259]: time="2024-02-09T09:45:58.432436648Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:45:58.432520 env[1259]: time="2024-02-09T09:45:58.432446433Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:45:58.452201 env[1259]: time="2024-02-09T09:45:58.452141476Z" level=info msg="Loading containers: start." Feb 9 09:45:58.539667 kernel: Initializing XFRM netlink socket Feb 9 09:45:58.560796 env[1259]: time="2024-02-09T09:45:58.560762512Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 09:45:58.615118 systemd-networkd[1055]: docker0: Link UP Feb 9 09:45:58.623156 env[1259]: time="2024-02-09T09:45:58.623132276Z" level=info msg="Loading containers: done." Feb 9 09:45:58.639526 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2886338335-merged.mount: Deactivated successfully. Feb 9 09:45:58.642842 env[1259]: time="2024-02-09T09:45:58.642803572Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 09:45:58.642985 env[1259]: time="2024-02-09T09:45:58.642967768Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 09:45:58.643075 env[1259]: time="2024-02-09T09:45:58.643061998Z" level=info msg="Daemon has completed initialization" Feb 9 09:45:58.656555 systemd[1]: Started docker.service. Feb 9 09:45:58.661387 env[1259]: time="2024-02-09T09:45:58.661331145Z" level=info msg="API listen on /run/docker.sock" Feb 9 09:45:58.677128 systemd[1]: Reloading. Feb 9 09:45:58.717554 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2024-02-09T09:45:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:45:58.717832 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2024-02-09T09:45:58Z" level=info msg="torcx already run" Feb 9 09:45:58.764526 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
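Just above, dockerd notes that docker0 defaults to 172.17.0.0/16 and that the --bip daemon option can select a different bridge address. As a hedged illustration (the address below is an arbitrary example, not a value taken from this host), the same preference can be persisted in /etc/docker/daemon.json rather than passed as a flag:

```python
# Illustrative sketch: persist the docker0 bridge address that the daemon log
# says can be set with --bip, via the "bip" key in /etc/docker/daemon.json.
# 172.18.0.1/16 is an arbitrary example value, not a recommendation.
import json
from pathlib import Path

def set_bridge_ip(bip: str = "172.18.0.1/16",
                  path: Path = Path("/etc/docker/daemon.json")) -> None:
    """Merge a 'bip' entry into daemon.json, keeping any existing settings."""
    config = json.loads(path.read_text()) if path.is_file() else {}
    config["bip"] = bip
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(config, indent=2) + "\n")

if __name__ == "__main__":
    set_bridge_ip()  # dockerd reads the file on its next restart
```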
Feb 9 09:45:58.764544 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:45:58.781070 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:45:58.836488 systemd[1]: Started kubelet.service. Feb 9 09:45:58.947623 kubelet[1437]: E0209 09:45:58.947507 1437 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 09:45:58.950032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:45:58.950166 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:45:59.204002 env[1147]: time="2024-02-09T09:45:59.203901815Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\"" Feb 9 09:45:59.850105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3529479149.mount: Deactivated successfully. Feb 9 09:46:01.531871 env[1147]: time="2024-02-09T09:46:01.531816033Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:01.532961 env[1147]: time="2024-02-09T09:46:01.532931595Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:68142d88471bf00b1317307442bd31edbbc7532061d623e85659df2d417308fb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:01.534537 env[1147]: time="2024-02-09T09:46:01.534479092Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:01.536619 env[1147]: time="2024-02-09T09:46:01.536581859Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:01.537476 env[1147]: time="2024-02-09T09:46:01.537426122Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:68142d88471bf00b1317307442bd31edbbc7532061d623e85659df2d417308fb\"" Feb 9 09:46:01.546539 env[1147]: time="2024-02-09T09:46:01.546505635Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\"" Feb 9 09:46:03.693706 env[1147]: time="2024-02-09T09:46:03.693646213Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:03.694983 env[1147]: time="2024-02-09T09:46:03.694962265Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8dbd4fd1241644100b94eb40a9d284c5cf08fa7f2d15cafdf1ca8cec8443b31f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:03.696717 env[1147]: time="2024-02-09T09:46:03.696689254Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 09:46:03.698567 env[1147]: time="2024-02-09T09:46:03.698539408Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:03.699231 env[1147]: time="2024-02-09T09:46:03.699187422Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:8dbd4fd1241644100b94eb40a9d284c5cf08fa7f2d15cafdf1ca8cec8443b31f\"" Feb 9 09:46:03.708487 env[1147]: time="2024-02-09T09:46:03.708448723Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\"" Feb 9 09:46:04.904965 env[1147]: time="2024-02-09T09:46:04.904908171Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:04.906129 env[1147]: time="2024-02-09T09:46:04.906100808Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:541cddf10a6c9bb71f141eeefea4203714984b67ec3582fb4538058af9e43663,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:04.907617 env[1147]: time="2024-02-09T09:46:04.907591336Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:04.909871 env[1147]: time="2024-02-09T09:46:04.909835525Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:04.910499 env[1147]: time="2024-02-09T09:46:04.910457120Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:541cddf10a6c9bb71f141eeefea4203714984b67ec3582fb4538058af9e43663\"" Feb 9 09:46:04.918808 env[1147]: time="2024-02-09T09:46:04.918774338Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 9 09:46:05.985329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2214968222.mount: Deactivated successfully. 
Feb 9 09:46:06.373498 env[1147]: time="2024-02-09T09:46:06.373396336Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:06.374942 env[1147]: time="2024-02-09T09:46:06.374906227Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d8b4f784b5f439fa536676861ad1144130a981e5ac011d08829ed921477ec74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:06.376287 env[1147]: time="2024-02-09T09:46:06.376261416Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:06.377354 env[1147]: time="2024-02-09T09:46:06.377320656Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:06.377813 env[1147]: time="2024-02-09T09:46:06.377781129Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:2d8b4f784b5f439fa536676861ad1144130a981e5ac011d08829ed921477ec74\"" Feb 9 09:46:06.387083 env[1147]: time="2024-02-09T09:46:06.387053185Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 09:46:06.870814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1863591655.mount: Deactivated successfully. Feb 9 09:46:06.874968 env[1147]: time="2024-02-09T09:46:06.874897584Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:06.877738 env[1147]: time="2024-02-09T09:46:06.877699625Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:06.879259 env[1147]: time="2024-02-09T09:46:06.879225166Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:06.880434 env[1147]: time="2024-02-09T09:46:06.880407488Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:06.881089 env[1147]: time="2024-02-09T09:46:06.881059273Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 09:46:06.889298 env[1147]: time="2024-02-09T09:46:06.889275962Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\"" Feb 9 09:46:07.478153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2724261733.mount: Deactivated successfully. Feb 9 09:46:09.200961 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 09:46:09.201138 systemd[1]: Stopped kubelet.service. Feb 9 09:46:09.202533 systemd[1]: Started kubelet.service. 
Feb 9 09:46:09.244764 kubelet[1493]: E0209 09:46:09.244710 1493 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 09:46:09.247618 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:46:09.247764 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:46:10.565617 env[1147]: time="2024-02-09T09:46:10.565568342Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:10.567304 env[1147]: time="2024-02-09T09:46:10.567271860Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:10.569659 env[1147]: time="2024-02-09T09:46:10.569617613Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:10.571544 env[1147]: time="2024-02-09T09:46:10.571516550Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:10.572424 env[1147]: time="2024-02-09T09:46:10.572394719Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace\"" Feb 9 09:46:10.583309 env[1147]: time="2024-02-09T09:46:10.583281234Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 9 09:46:11.153397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3499610069.mount: Deactivated successfully. Feb 9 09:46:13.297297 env[1147]: time="2024-02-09T09:46:13.297229080Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:13.299682 env[1147]: time="2024-02-09T09:46:13.299641577Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:13.301211 env[1147]: time="2024-02-09T09:46:13.301170062Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:13.302793 env[1147]: time="2024-02-09T09:46:13.302763099Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:13.303352 env[1147]: time="2024-02-09T09:46:13.303308007Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Feb 9 09:46:18.658324 systemd[1]: Stopped kubelet.service. Feb 9 09:46:18.672109 systemd[1]: Reloading. 
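Both kubelet start attempts above exit with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style bootstrap that file only appears once kubeadm init or join has run, so a crash loop at this point in the boot is expected rather than a fault. The sketch below is a hedged illustration of that check; the sample KubeletConfiguration fields are assumptions for the example (cgroupDriver and staticPodPath mirror values the kubelet reports once it does start), not the file kubeadm would actually write for this node.

```python
# Hedged sketch: check for the config file the kubelet error above points at,
# and show the general shape of a KubeletConfiguration when it is missing.
# The sample fields are illustrative, not the real content for this node.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path taken from the log error

SAMPLE_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
"""

def check_kubelet_config(path: Path = KUBELET_CONFIG) -> bool:
    """Return True if the kubelet config exists; otherwise print a hint."""
    if path.is_file():
        return True
    print(f"{path} is missing; kubeadm init/join normally creates it.")
    print("A KubeletConfiguration generally looks like:\n" + SAMPLE_CONFIG)
    return False

if __name__ == "__main__":
    check_kubelet_config()
```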
Feb 9 09:46:18.722147 /usr/lib/systemd/system-generators/torcx-generator[1604]: time="2024-02-09T09:46:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:46:18.722176 /usr/lib/systemd/system-generators/torcx-generator[1604]: time="2024-02-09T09:46:18Z" level=info msg="torcx already run" Feb 9 09:46:18.772173 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:46:18.772193 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:46:18.788564 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:46:18.849435 systemd[1]: Started kubelet.service. Feb 9 09:46:18.888202 kubelet[1642]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:46:18.888202 kubelet[1642]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 09:46:18.888202 kubelet[1642]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:46:18.888531 kubelet[1642]: I0209 09:46:18.888298 1642 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:46:19.985726 kubelet[1642]: I0209 09:46:19.985690 1642 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 09:46:19.985726 kubelet[1642]: I0209 09:46:19.985722 1642 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:46:19.986070 kubelet[1642]: I0209 09:46:19.985923 1642 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 09:46:19.990812 kubelet[1642]: I0209 09:46:19.990788 1642 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:46:19.991532 kubelet[1642]: E0209 09:46:19.991515 1642 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.20:6443: connect: connection refused Feb 9 09:46:19.995520 kubelet[1642]: W0209 09:46:19.995496 1642 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:46:19.996520 kubelet[1642]: I0209 09:46:19.996482 1642 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 09:46:19.996706 kubelet[1642]: I0209 09:46:19.996689 1642 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:46:19.996948 kubelet[1642]: I0209 09:46:19.996924 1642 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 09:46:19.996948 kubelet[1642]: I0209 09:46:19.996950 1642 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 09:46:19.997065 kubelet[1642]: I0209 09:46:19.996960 1642 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 09:46:19.997065 kubelet[1642]: I0209 09:46:19.997057 1642 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:46:19.997330 kubelet[1642]: I0209 09:46:19.997308 1642 kubelet.go:393] "Attempting to sync node with API server" Feb 9 09:46:19.997404 kubelet[1642]: I0209 09:46:19.997394 1642 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:46:19.997466 kubelet[1642]: I0209 09:46:19.997457 1642 kubelet.go:309] "Adding apiserver pod source" Feb 9 09:46:19.997626 kubelet[1642]: I0209 09:46:19.997614 1642 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:46:19.998232 kubelet[1642]: W0209 09:46:19.998182 1642 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Feb 9 09:46:19.998232 kubelet[1642]: E0209 09:46:19.998234 1642 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Feb 9 09:46:19.998326 kubelet[1642]: W0209 09:46:19.998248 1642 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Feb 9 
09:46:19.998326 kubelet[1642]: E0209 09:46:19.998289 1642 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Feb 9 09:46:19.998688 kubelet[1642]: I0209 09:46:19.998643 1642 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:46:19.999068 kubelet[1642]: W0209 09:46:19.999056 1642 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 09:46:19.999753 kubelet[1642]: I0209 09:46:19.999731 1642 server.go:1232] "Started kubelet" Feb 9 09:46:20.000145 kubelet[1642]: I0209 09:46:20.000128 1642 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 09:46:20.000449 kubelet[1642]: E0209 09:46:20.000420 1642 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:46:20.000449 kubelet[1642]: E0209 09:46:20.000447 1642 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:46:20.000638 kubelet[1642]: I0209 09:46:20.000623 1642 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 09:46:20.000769 kubelet[1642]: I0209 09:46:20.000757 1642 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:46:20.000855 kubelet[1642]: E0209 09:46:20.000759 1642 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b228b81fd0c635", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 46, 19, 999708725, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 46, 19, 999708725, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.20:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.20:6443: connect: connection refused'(may retry after sleeping) Feb 9 09:46:20.001563 kubelet[1642]: I0209 09:46:20.001542 1642 server.go:462] "Adding debug handlers to kubelet server" Feb 9 09:46:20.001912 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
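
Note: the recurring "dial tcp 10.0.0.20:6443: connect: connection refused" errors only mean the kubelet came up before the kube-apiserver it is about to launch as a static pod; event writes and informer lists fail until that pod is serving. A hedged, stdlib-only sketch of the same reachability probe (the address comes from the log, everything else is illustrative):

    // probeapiserver.go - illustrative only; mirrors the TCP dial the
    // client-go reflectors are failing in the entries above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "10.0.0.20:6443" // API server endpoint seen in the log
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            // Matches the "connect: connection refused" phase before the
            // kube-apiserver static pod is running.
            fmt.Println("apiserver not reachable yet:", err)
            return
        }
        defer conn.Close()
        fmt.Println("apiserver TCP port is open; reflectors should start syncing")
    }
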
Feb 9 09:46:20.002435 kubelet[1642]: I0209 09:46:20.002402 1642 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:46:20.002752 kubelet[1642]: E0209 09:46:20.002735 1642 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:46:20.002867 kubelet[1642]: I0209 09:46:20.002853 1642 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 09:46:20.003005 kubelet[1642]: I0209 09:46:20.002991 1642 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:46:20.003118 kubelet[1642]: I0209 09:46:20.003107 1642 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 09:46:20.003516 kubelet[1642]: W0209 09:46:20.003481 1642 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Feb 9 09:46:20.003619 kubelet[1642]: E0209 09:46:20.003607 1642 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Feb 9 09:46:20.003782 kubelet[1642]: E0209 09:46:20.003753 1642 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="200ms" Feb 9 09:46:20.014844 kubelet[1642]: I0209 09:46:20.014814 1642 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 09:46:20.015667 kubelet[1642]: I0209 09:46:20.015631 1642 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 9 09:46:20.015716 kubelet[1642]: I0209 09:46:20.015680 1642 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 09:46:20.015716 kubelet[1642]: I0209 09:46:20.015698 1642 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 09:46:20.015790 kubelet[1642]: E0209 09:46:20.015749 1642 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 09:46:20.020502 kubelet[1642]: W0209 09:46:20.020456 1642 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Feb 9 09:46:20.020587 kubelet[1642]: E0209 09:46:20.020508 1642 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Feb 9 09:46:20.021433 kubelet[1642]: I0209 09:46:20.021399 1642 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:46:20.021433 kubelet[1642]: I0209 09:46:20.021435 1642 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:46:20.021522 kubelet[1642]: I0209 09:46:20.021450 1642 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:46:20.023418 kubelet[1642]: I0209 09:46:20.023400 1642 policy_none.go:49] "None policy: Start" Feb 9 09:46:20.024008 kubelet[1642]: I0209 09:46:20.023990 1642 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:46:20.024059 kubelet[1642]: I0209 09:46:20.024018 1642 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:46:20.028377 systemd[1]: Created slice kubepods.slice. Feb 9 09:46:20.031935 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 09:46:20.034143 systemd[1]: Created slice kubepods-besteffort.slice. 
Feb 9 09:46:20.043215 kubelet[1642]: I0209 09:46:20.043196 1642 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:46:20.043832 kubelet[1642]: I0209 09:46:20.043807 1642 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:46:20.043919 kubelet[1642]: E0209 09:46:20.043909 1642 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 9 09:46:20.104593 kubelet[1642]: I0209 09:46:20.104571 1642 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:46:20.104949 kubelet[1642]: E0209 09:46:20.104933 1642 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Feb 9 09:46:20.116171 kubelet[1642]: I0209 09:46:20.116152 1642 topology_manager.go:215] "Topology Admit Handler" podUID="725cfeae22ee4ed85d1a00139308d0b2" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 9 09:46:20.117072 kubelet[1642]: I0209 09:46:20.117052 1642 topology_manager.go:215] "Topology Admit Handler" podUID="212dcc5e2f08bec92c239ac5786b7e2b" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 9 09:46:20.118644 kubelet[1642]: I0209 09:46:20.118624 1642 topology_manager.go:215] "Topology Admit Handler" podUID="d0325d16aab19669b5fea4b6623890e6" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 9 09:46:20.121953 systemd[1]: Created slice kubepods-burstable-pod725cfeae22ee4ed85d1a00139308d0b2.slice. Feb 9 09:46:20.133995 systemd[1]: Created slice kubepods-burstable-pod212dcc5e2f08bec92c239ac5786b7e2b.slice. Feb 9 09:46:20.138703 systemd[1]: Created slice kubepods-burstable-podd0325d16aab19669b5fea4b6623890e6.slice. 
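
Note: the three admitted pods are the static manifests from /etc/kubernetes/manifests; the pod names seen here are the manifest name with the node name appended, and each per-pod slice name embeds the pod UID (dashes become underscores for UIDs that contain them, as visible later in this log). A short sketch of both derivations, inferred from the log rather than from any official API:

    // staticpodnames.go - derives the names visible in the entries above
    // from a manifest name and pod UID; purely illustrative string handling.
    package main

    import (
        "fmt"
        "strings"
    )

    func mirrorPodName(manifest, node string) string {
        return manifest + "-" + node // e.g. "kube-apiserver" + "-" + "localhost"
    }

    func podSliceName(qos, uid string) string {
        // The systemd cgroup driver escapes dashes in the UID when forming
        // the transient slice unit name.
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(mirrorPodName("kube-apiserver", "localhost"))
        fmt.Println(podSliceName("burstable", "725cfeae22ee4ed85d1a00139308d0b2"))
    }
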
Feb 9 09:46:20.204355 kubelet[1642]: E0209 09:46:20.204330 1642 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="400ms" Feb 9 09:46:20.305048 kubelet[1642]: I0209 09:46:20.304959 1642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/725cfeae22ee4ed85d1a00139308d0b2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"725cfeae22ee4ed85d1a00139308d0b2\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:46:20.305048 kubelet[1642]: I0209 09:46:20.305016 1642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:46:20.305139 kubelet[1642]: I0209 09:46:20.305052 1642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:46:20.305139 kubelet[1642]: I0209 09:46:20.305082 1642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0325d16aab19669b5fea4b6623890e6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d0325d16aab19669b5fea4b6623890e6\") " pod="kube-system/kube-scheduler-localhost" Feb 9 09:46:20.305139 kubelet[1642]: I0209 09:46:20.305100 1642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/725cfeae22ee4ed85d1a00139308d0b2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"725cfeae22ee4ed85d1a00139308d0b2\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:46:20.305139 kubelet[1642]: I0209 09:46:20.305117 1642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/725cfeae22ee4ed85d1a00139308d0b2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"725cfeae22ee4ed85d1a00139308d0b2\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:46:20.305139 kubelet[1642]: I0209 09:46:20.305135 1642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:46:20.305248 kubelet[1642]: I0209 09:46:20.305155 1642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:46:20.305248 kubelet[1642]: I0209 09:46:20.305174 1642 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:46:20.307368 kubelet[1642]: I0209 09:46:20.307335 1642 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:46:20.307614 kubelet[1642]: E0209 09:46:20.307596 1642 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Feb 9 09:46:20.434258 kubelet[1642]: E0209 09:46:20.434215 1642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:20.435065 env[1147]: time="2024-02-09T09:46:20.434764521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:725cfeae22ee4ed85d1a00139308d0b2,Namespace:kube-system,Attempt:0,}" Feb 9 09:46:20.437857 kubelet[1642]: E0209 09:46:20.437826 1642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:20.438185 env[1147]: time="2024-02-09T09:46:20.438147839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:212dcc5e2f08bec92c239ac5786b7e2b,Namespace:kube-system,Attempt:0,}" Feb 9 09:46:20.440786 kubelet[1642]: E0209 09:46:20.440769 1642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:20.441268 env[1147]: time="2024-02-09T09:46:20.441114273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d0325d16aab19669b5fea4b6623890e6,Namespace:kube-system,Attempt:0,}" Feb 9 09:46:20.468976 kubelet[1642]: E0209 09:46:20.468889 1642 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b228b81fd0c635", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 46, 19, 999708725, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 46, 19, 999708725, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.20:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.20:6443: connect: connection 
refused'(may retry after sleeping) Feb 9 09:46:20.605078 kubelet[1642]: E0209 09:46:20.604989 1642 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="800ms" Feb 9 09:46:20.709800 kubelet[1642]: I0209 09:46:20.709769 1642 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:46:20.710311 kubelet[1642]: E0209 09:46:20.710276 1642 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Feb 9 09:46:20.901103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3682884354.mount: Deactivated successfully. Feb 9 09:46:20.905315 env[1147]: time="2024-02-09T09:46:20.905275692Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:20.906094 env[1147]: time="2024-02-09T09:46:20.906070981Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:20.906715 env[1147]: time="2024-02-09T09:46:20.906695948Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:20.908509 env[1147]: time="2024-02-09T09:46:20.908475009Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:20.910253 env[1147]: time="2024-02-09T09:46:20.910224349Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:20.911574 kubelet[1642]: W0209 09:46:20.911497 1642 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Feb 9 09:46:20.911574 kubelet[1642]: E0209 09:46:20.911558 1642 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Feb 9 09:46:20.913032 env[1147]: time="2024-02-09T09:46:20.913005940Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:20.915193 env[1147]: time="2024-02-09T09:46:20.915158605Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:20.915921 env[1147]: time="2024-02-09T09:46:20.915888533Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:20.916928 kubelet[1642]: W0209 
09:46:20.916885 1642 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Feb 9 09:46:20.917044 kubelet[1642]: E0209 09:46:20.916936 1642 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Feb 9 09:46:20.921064 env[1147]: time="2024-02-09T09:46:20.920698388Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:20.922955 env[1147]: time="2024-02-09T09:46:20.922924814Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:20.923572 env[1147]: time="2024-02-09T09:46:20.923545381Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:20.924240 env[1147]: time="2024-02-09T09:46:20.924192708Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:20.959020 env[1147]: time="2024-02-09T09:46:20.958938785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:46:20.959020 env[1147]: time="2024-02-09T09:46:20.958995345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:46:20.959245 env[1147]: time="2024-02-09T09:46:20.959203428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:46:20.959283 env[1147]: time="2024-02-09T09:46:20.959189988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:46:20.959309 env[1147]: time="2024-02-09T09:46:20.959275789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:46:20.959331 env[1147]: time="2024-02-09T09:46:20.959308029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:46:20.959533 env[1147]: time="2024-02-09T09:46:20.959500391Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/386cf5073552e8e289d356690b228c4f6ff5d117932b78986ee946b37420b213 pid=1697 runtime=io.containerd.runc.v2 Feb 9 09:46:20.959628 env[1147]: time="2024-02-09T09:46:20.959521511Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/462ba90e72c69456b47c7ace2edd38c12f76394f36bcd237a29bb2a914d4ca62 pid=1696 runtime=io.containerd.runc.v2 Feb 9 09:46:20.961281 env[1147]: time="2024-02-09T09:46:20.961201211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:46:20.961281 env[1147]: time="2024-02-09T09:46:20.961236211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:46:20.961281 env[1147]: time="2024-02-09T09:46:20.961246011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:46:20.961398 env[1147]: time="2024-02-09T09:46:20.961357892Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/265ba829ccea21e8d9321257a7ae38c161b23b777b4727cbfbbb153dc4511493 pid=1707 runtime=io.containerd.runc.v2 Feb 9 09:46:20.971307 systemd[1]: Started cri-containerd-462ba90e72c69456b47c7ace2edd38c12f76394f36bcd237a29bb2a914d4ca62.scope. Feb 9 09:46:20.977209 systemd[1]: Started cri-containerd-265ba829ccea21e8d9321257a7ae38c161b23b777b4727cbfbbb153dc4511493.scope. Feb 9 09:46:20.978163 systemd[1]: Started cri-containerd-386cf5073552e8e289d356690b228c4f6ff5d117932b78986ee946b37420b213.scope. 
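
Note: each RunPodSandbox request ends in a runc v2 shim (the "starting signal loop" entries) plus, with the systemd cgroup driver, a transient cri-containerd-<id>.scope unit. A trivial sketch reconstructing the task path and scope name for one sandbox ID taken from the log:

    // sandboxunits.go - rebuilds the shim task path and systemd scope name
    // for a sandbox ID, matching the strings in the entries above.
    package main

    import "fmt"

    func main() {
        id := "462ba90e72c69456b47c7ace2edd38c12f76394f36bcd237a29bb2a914d4ca62" // kube-scheduler sandbox from the log
        taskDir := "/run/containerd/io.containerd.runtime.v2.task/k8s.io/" + id
        scopeUnit := "cri-containerd-" + id + ".scope"
        fmt.Println(taskDir)   // path= value in the "starting signal loop" entry
        fmt.Println(scopeUnit) // unit systemd reports as "Started"
    }
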
Feb 9 09:46:21.046895 env[1147]: time="2024-02-09T09:46:21.040789855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d0325d16aab19669b5fea4b6623890e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"462ba90e72c69456b47c7ace2edd38c12f76394f36bcd237a29bb2a914d4ca62\"" Feb 9 09:46:21.047803 kubelet[1642]: E0209 09:46:21.047618 1642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:21.050258 env[1147]: time="2024-02-09T09:46:21.050227877Z" level=info msg="CreateContainer within sandbox \"462ba90e72c69456b47c7ace2edd38c12f76394f36bcd237a29bb2a914d4ca62\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 09:46:21.052934 env[1147]: time="2024-02-09T09:46:21.052896626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:725cfeae22ee4ed85d1a00139308d0b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"265ba829ccea21e8d9321257a7ae38c161b23b777b4727cbfbbb153dc4511493\"" Feb 9 09:46:21.053498 kubelet[1642]: E0209 09:46:21.053473 1642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:21.060419 env[1147]: time="2024-02-09T09:46:21.060102743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:212dcc5e2f08bec92c239ac5786b7e2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"386cf5073552e8e289d356690b228c4f6ff5d117932b78986ee946b37420b213\"" Feb 9 09:46:21.060578 kubelet[1642]: E0209 09:46:21.060562 1642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:21.061075 env[1147]: time="2024-02-09T09:46:21.061045914Z" level=info msg="CreateContainer within sandbox \"265ba829ccea21e8d9321257a7ae38c161b23b777b4727cbfbbb153dc4511493\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 09:46:21.066232 env[1147]: time="2024-02-09T09:46:21.066180849Z" level=info msg="CreateContainer within sandbox \"386cf5073552e8e289d356690b228c4f6ff5d117932b78986ee946b37420b213\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 09:46:21.071948 env[1147]: time="2024-02-09T09:46:21.071913111Z" level=info msg="CreateContainer within sandbox \"462ba90e72c69456b47c7ace2edd38c12f76394f36bcd237a29bb2a914d4ca62\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b332c9a28389047c5a3ead2dd88af999c22dcaa444ddbb18ed0b81af8f336739\"" Feb 9 09:46:21.072673 env[1147]: time="2024-02-09T09:46:21.072633798Z" level=info msg="StartContainer for \"b332c9a28389047c5a3ead2dd88af999c22dcaa444ddbb18ed0b81af8f336739\"" Feb 9 09:46:21.078472 env[1147]: time="2024-02-09T09:46:21.078426461Z" level=info msg="CreateContainer within sandbox \"265ba829ccea21e8d9321257a7ae38c161b23b777b4727cbfbbb153dc4511493\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8416280234433c22e28dffe42eef56096b06fe4cdcf474aa0e425726cfe27b5e\"" Feb 9 09:46:21.078948 env[1147]: time="2024-02-09T09:46:21.078918946Z" level=info msg="StartContainer for \"8416280234433c22e28dffe42eef56096b06fe4cdcf474aa0e425726cfe27b5e\"" Feb 9 09:46:21.080890 env[1147]: time="2024-02-09T09:46:21.080832647Z" level=info msg="CreateContainer 
within sandbox \"386cf5073552e8e289d356690b228c4f6ff5d117932b78986ee946b37420b213\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e32f0dfa9c1da67f906e969897719ebf402230b480f68005255ab0e7bf13607d\"" Feb 9 09:46:21.081226 env[1147]: time="2024-02-09T09:46:21.081195131Z" level=info msg="StartContainer for \"e32f0dfa9c1da67f906e969897719ebf402230b480f68005255ab0e7bf13607d\"" Feb 9 09:46:21.091810 systemd[1]: Started cri-containerd-b332c9a28389047c5a3ead2dd88af999c22dcaa444ddbb18ed0b81af8f336739.scope. Feb 9 09:46:21.096693 systemd[1]: Started cri-containerd-e32f0dfa9c1da67f906e969897719ebf402230b480f68005255ab0e7bf13607d.scope. Feb 9 09:46:21.118525 systemd[1]: Started cri-containerd-8416280234433c22e28dffe42eef56096b06fe4cdcf474aa0e425726cfe27b5e.scope. Feb 9 09:46:21.163047 env[1147]: time="2024-02-09T09:46:21.162947252Z" level=info msg="StartContainer for \"b332c9a28389047c5a3ead2dd88af999c22dcaa444ddbb18ed0b81af8f336739\" returns successfully" Feb 9 09:46:21.190592 env[1147]: time="2024-02-09T09:46:21.190542350Z" level=info msg="StartContainer for \"e32f0dfa9c1da67f906e969897719ebf402230b480f68005255ab0e7bf13607d\" returns successfully" Feb 9 09:46:21.192927 env[1147]: time="2024-02-09T09:46:21.192866175Z" level=info msg="StartContainer for \"8416280234433c22e28dffe42eef56096b06fe4cdcf474aa0e425726cfe27b5e\" returns successfully" Feb 9 09:46:21.197242 kubelet[1642]: W0209 09:46:21.197178 1642 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Feb 9 09:46:21.197242 kubelet[1642]: E0209 09:46:21.197248 1642 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Feb 9 09:46:21.210628 kubelet[1642]: W0209 09:46:21.210576 1642 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Feb 9 09:46:21.210628 kubelet[1642]: E0209 09:46:21.210630 1642 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Feb 9 09:46:21.511919 kubelet[1642]: I0209 09:46:21.511889 1642 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:46:22.027337 kubelet[1642]: E0209 09:46:22.027307 1642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:22.030178 kubelet[1642]: E0209 09:46:22.030118 1642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:22.031983 kubelet[1642]: E0209 09:46:22.031954 1642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Feb 9 09:46:22.664755 kubelet[1642]: E0209 09:46:22.664689 1642 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 9 09:46:22.747463 kubelet[1642]: I0209 09:46:22.747415 1642 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 09:46:22.756163 kubelet[1642]: E0209 09:46:22.756135 1642 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:46:22.856306 kubelet[1642]: E0209 09:46:22.856260 1642 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:46:22.956672 kubelet[1642]: E0209 09:46:22.956630 1642 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:46:23.033978 kubelet[1642]: E0209 09:46:23.033950 1642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:23.057023 kubelet[1642]: E0209 09:46:23.056995 1642 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:46:23.157415 kubelet[1642]: E0209 09:46:23.157388 1642 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:46:23.257963 kubelet[1642]: E0209 09:46:23.257858 1642 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:46:23.358436 kubelet[1642]: E0209 09:46:23.358410 1642 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:46:23.458974 kubelet[1642]: E0209 09:46:23.458929 1642 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:46:23.559699 kubelet[1642]: E0209 09:46:23.559564 1642 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:46:23.660127 kubelet[1642]: E0209 09:46:23.660093 1642 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:46:23.760642 kubelet[1642]: E0209 09:46:23.760602 1642 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:46:23.861142 kubelet[1642]: E0209 09:46:23.861057 1642 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 09:46:24.000229 kubelet[1642]: I0209 09:46:24.000173 1642 apiserver.go:52] "Watching apiserver" Feb 9 09:46:24.004160 kubelet[1642]: I0209 09:46:24.004127 1642 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:46:25.425505 systemd[1]: Reloading. 
Feb 9 09:46:25.461805 /usr/lib/systemd/system-generators/torcx-generator[1936]: time="2024-02-09T09:46:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:46:25.462186 /usr/lib/systemd/system-generators/torcx-generator[1936]: time="2024-02-09T09:46:25Z" level=info msg="torcx already run" Feb 9 09:46:25.525791 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:46:25.525812 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:46:25.542827 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:46:25.619877 systemd[1]: Stopping kubelet.service... Feb 9 09:46:25.639015 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 09:46:25.639217 systemd[1]: Stopped kubelet.service. Feb 9 09:46:25.639263 systemd[1]: kubelet.service: Consumed 1.382s CPU time. Feb 9 09:46:25.640840 systemd[1]: Started kubelet.service. Feb 9 09:46:25.710592 kubelet[1974]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:46:25.711135 kubelet[1974]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 09:46:25.711196 kubelet[1974]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:46:25.711319 kubelet[1974]: I0209 09:46:25.711288 1974 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:46:25.715804 kubelet[1974]: I0209 09:46:25.715772 1974 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 09:46:25.715804 kubelet[1974]: I0209 09:46:25.715798 1974 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:46:25.715983 kubelet[1974]: I0209 09:46:25.715966 1974 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 09:46:25.717486 kubelet[1974]: I0209 09:46:25.717456 1974 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 09:46:25.718564 kubelet[1974]: I0209 09:46:25.718541 1974 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:46:25.722906 kubelet[1974]: W0209 09:46:25.722879 1974 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:46:25.723553 kubelet[1974]: I0209 09:46:25.723537 1974 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 09:46:25.723758 kubelet[1974]: I0209 09:46:25.723736 1974 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:46:25.723932 kubelet[1974]: I0209 09:46:25.723877 1974 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 09:46:25.724007 kubelet[1974]: I0209 09:46:25.723953 1974 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 09:46:25.724007 kubelet[1974]: I0209 09:46:25.723963 1974 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 09:46:25.724007 kubelet[1974]: I0209 09:46:25.723988 1974 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:46:25.724077 kubelet[1974]: I0209 09:46:25.724072 1974 kubelet.go:393] "Attempting to sync node with API server" Feb 9 09:46:25.724098 kubelet[1974]: I0209 09:46:25.724085 1974 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:46:25.724120 kubelet[1974]: I0209 09:46:25.724103 1974 kubelet.go:309] "Adding apiserver pod source" Feb 9 09:46:25.724120 kubelet[1974]: I0209 09:46:25.724116 1974 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:46:25.726996 kubelet[1974]: I0209 09:46:25.726969 1974 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:46:25.727735 kubelet[1974]: I0209 09:46:25.727714 1974 server.go:1232] "Started kubelet" Feb 9 09:46:25.728209 kubelet[1974]: I0209 09:46:25.728186 1974 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:46:25.728489 kubelet[1974]: I0209 09:46:25.728471 1974 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 09:46:25.728962 kubelet[1974]: I0209 09:46:25.728941 1974 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 09:46:25.729927 kubelet[1974]: I0209 09:46:25.729906 1974 server.go:462] "Adding debug handlers to kubelet server" Feb 9 09:46:25.730378 kubelet[1974]: I0209 09:46:25.730361 1974 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Feb 9 09:46:25.731345 kubelet[1974]: I0209 09:46:25.731327 1974 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 09:46:25.731560 kubelet[1974]: I0209 09:46:25.731547 1974 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 09:46:25.731799 kubelet[1974]: I0209 09:46:25.731782 1974 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:46:25.736657 kubelet[1974]: E0209 09:46:25.736608 1974 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:46:25.736734 kubelet[1974]: E0209 09:46:25.736688 1974 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:46:25.764307 kubelet[1974]: I0209 09:46:25.764280 1974 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 09:46:25.765277 kubelet[1974]: I0209 09:46:25.765247 1974 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 9 09:46:25.765277 kubelet[1974]: I0209 09:46:25.765280 1974 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 09:46:25.765372 kubelet[1974]: I0209 09:46:25.765294 1974 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 09:46:25.765372 kubelet[1974]: E0209 09:46:25.765338 1974 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 09:46:25.789263 kubelet[1974]: I0209 09:46:25.789225 1974 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:46:25.789263 kubelet[1974]: I0209 09:46:25.789262 1974 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:46:25.789367 kubelet[1974]: I0209 09:46:25.789280 1974 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:46:25.789418 kubelet[1974]: I0209 09:46:25.789405 1974 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 09:46:25.789450 kubelet[1974]: I0209 09:46:25.789427 1974 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 9 09:46:25.789450 kubelet[1974]: I0209 09:46:25.789435 1974 policy_none.go:49] "None policy: Start" Feb 9 09:46:25.790100 kubelet[1974]: I0209 09:46:25.790054 1974 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:46:25.790172 kubelet[1974]: I0209 09:46:25.790108 1974 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:46:25.790283 kubelet[1974]: I0209 09:46:25.790267 1974 state_mem.go:75] "Updated machine memory state" Feb 9 09:46:25.793384 kubelet[1974]: I0209 09:46:25.793353 1974 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:46:25.793593 kubelet[1974]: I0209 09:46:25.793567 1974 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:46:25.834703 kubelet[1974]: I0209 09:46:25.834680 1974 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:46:25.841573 kubelet[1974]: I0209 09:46:25.841553 1974 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 9 09:46:25.841747 kubelet[1974]: I0209 09:46:25.841735 1974 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 09:46:25.865827 kubelet[1974]: I0209 09:46:25.865792 1974 topology_manager.go:215] "Topology Admit Handler" 
podUID="212dcc5e2f08bec92c239ac5786b7e2b" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 9 09:46:25.865922 kubelet[1974]: I0209 09:46:25.865907 1974 topology_manager.go:215] "Topology Admit Handler" podUID="d0325d16aab19669b5fea4b6623890e6" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 9 09:46:25.865972 kubelet[1974]: I0209 09:46:25.865945 1974 topology_manager.go:215] "Topology Admit Handler" podUID="725cfeae22ee4ed85d1a00139308d0b2" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 9 09:46:25.933160 kubelet[1974]: I0209 09:46:25.933127 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:46:25.933305 kubelet[1974]: I0209 09:46:25.933180 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:46:25.933305 kubelet[1974]: I0209 09:46:25.933242 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/725cfeae22ee4ed85d1a00139308d0b2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"725cfeae22ee4ed85d1a00139308d0b2\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:46:25.933305 kubelet[1974]: I0209 09:46:25.933282 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:46:25.933384 kubelet[1974]: I0209 09:46:25.933309 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:46:25.933384 kubelet[1974]: I0209 09:46:25.933357 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:46:25.933443 kubelet[1974]: I0209 09:46:25.933408 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0325d16aab19669b5fea4b6623890e6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d0325d16aab19669b5fea4b6623890e6\") " pod="kube-system/kube-scheduler-localhost" Feb 9 09:46:25.933443 kubelet[1974]: I0209 09:46:25.933440 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/725cfeae22ee4ed85d1a00139308d0b2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"725cfeae22ee4ed85d1a00139308d0b2\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:46:25.933494 kubelet[1974]: I0209 09:46:25.933474 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/725cfeae22ee4ed85d1a00139308d0b2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"725cfeae22ee4ed85d1a00139308d0b2\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:46:26.176294 kubelet[1974]: E0209 09:46:26.176262 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:26.181196 kubelet[1974]: E0209 09:46:26.181170 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:26.181800 kubelet[1974]: E0209 09:46:26.181785 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:26.724708 kubelet[1974]: I0209 09:46:26.724675 1974 apiserver.go:52] "Watching apiserver" Feb 9 09:46:26.733318 kubelet[1974]: I0209 09:46:26.733285 1974 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:46:26.772419 kubelet[1974]: E0209 09:46:26.772391 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:26.772521 kubelet[1974]: E0209 09:46:26.772426 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:26.777589 kubelet[1974]: E0209 09:46:26.777570 1974 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 09:46:26.778177 kubelet[1974]: E0209 09:46:26.778156 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:26.800454 kubelet[1974]: I0209 09:46:26.800417 1974 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.800368271 podCreationTimestamp="2024-02-09 09:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:46:26.793222853 +0000 UTC m=+1.148633735" watchObservedRunningTime="2024-02-09 09:46:26.800368271 +0000 UTC m=+1.155779153" Feb 9 09:46:26.809340 kubelet[1974]: I0209 09:46:26.809314 1974 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.809273744 podCreationTimestamp="2024-02-09 09:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:46:26.80884422 +0000 UTC m=+1.164255102" watchObservedRunningTime="2024-02-09 09:46:26.809273744 +0000 UTC m=+1.164684626" Feb 9 09:46:26.809571 
kubelet[1974]: I0209 09:46:26.809548 1974 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.809529546 podCreationTimestamp="2024-02-09 09:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:46:26.800833835 +0000 UTC m=+1.156244717" watchObservedRunningTime="2024-02-09 09:46:26.809529546 +0000 UTC m=+1.164940428" Feb 9 09:46:27.028633 sudo[1242]: pam_unix(sudo:session): session closed for user root Feb 9 09:46:27.030851 sshd[1239]: pam_unix(sshd:session): session closed for user core Feb 9 09:46:27.033180 systemd[1]: sshd@4-10.0.0.20:22-10.0.0.1:39532.service: Deactivated successfully. Feb 9 09:46:27.034026 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 09:46:27.034205 systemd[1]: session-5.scope: Consumed 6.284s CPU time. Feb 9 09:46:27.034593 systemd-logind[1134]: Session 5 logged out. Waiting for processes to exit. Feb 9 09:46:27.035338 systemd-logind[1134]: Removed session 5. Feb 9 09:46:27.773676 kubelet[1974]: E0209 09:46:27.773630 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:27.919578 kubelet[1974]: E0209 09:46:27.919533 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:27.995212 kubelet[1974]: E0209 09:46:27.995188 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:33.018125 update_engine[1137]: I0209 09:46:33.017704 1137 update_attempter.cc:509] Updating boot flags... Feb 9 09:46:34.874417 kubelet[1974]: E0209 09:46:34.874225 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:35.786625 kubelet[1974]: E0209 09:46:35.786549 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:37.928238 kubelet[1974]: E0209 09:46:37.928202 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:38.002305 kubelet[1974]: E0209 09:46:38.002274 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:38.804812 kubelet[1974]: I0209 09:46:38.804779 1974 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 09:46:38.805133 env[1147]: time="2024-02-09T09:46:38.805089686Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
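
Note: the repeated "Nameserver limits exceeded" warnings come from the kubelet keeping at most three nameservers from the host's resolv.conf and dropping the rest; the applied line in the log shows the three that survived. A simplified stdlib sketch of that trimming (the parsing here is intentionally naive):

    // trimnameservers.go - hedged illustration of the three-nameserver limit
    // behind the dns.go:153 warnings above; not the kubelet's actual code.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // the limit the kubelet warns about

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("Nameserver limits exceeded; keeping %v, omitting %v\n",
                servers[:maxNameservers], servers[maxNameservers:])
        } else {
            fmt.Println("nameservers:", servers)
        }
    }
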
Feb 9 09:46:38.805376 kubelet[1974]: I0209 09:46:38.805250 1974 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 09:46:39.609810 kubelet[1974]: I0209 09:46:39.609726 1974 topology_manager.go:215] "Topology Admit Handler" podUID="66b93822-531c-4a44-8408-e31e04ff077c" podNamespace="kube-system" podName="kube-proxy-qf7bh" Feb 9 09:46:39.615558 systemd[1]: Created slice kubepods-besteffort-pod66b93822_531c_4a44_8408_e31e04ff077c.slice. Feb 9 09:46:39.618490 kubelet[1974]: I0209 09:46:39.618239 1974 topology_manager.go:215] "Topology Admit Handler" podUID="702a4982-c3f9-482e-a2d6-627317f283fe" podNamespace="kube-flannel" podName="kube-flannel-ds-7p8gz" Feb 9 09:46:39.626716 kubelet[1974]: I0209 09:46:39.626678 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/702a4982-c3f9-482e-a2d6-627317f283fe-flannel-cfg\") pod \"kube-flannel-ds-7p8gz\" (UID: \"702a4982-c3f9-482e-a2d6-627317f283fe\") " pod="kube-flannel/kube-flannel-ds-7p8gz" Feb 9 09:46:39.626817 kubelet[1974]: I0209 09:46:39.626729 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vkg8\" (UniqueName: \"kubernetes.io/projected/66b93822-531c-4a44-8408-e31e04ff077c-kube-api-access-5vkg8\") pod \"kube-proxy-qf7bh\" (UID: \"66b93822-531c-4a44-8408-e31e04ff077c\") " pod="kube-system/kube-proxy-qf7bh" Feb 9 09:46:39.626817 kubelet[1974]: I0209 09:46:39.626757 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/702a4982-c3f9-482e-a2d6-627317f283fe-run\") pod \"kube-flannel-ds-7p8gz\" (UID: \"702a4982-c3f9-482e-a2d6-627317f283fe\") " pod="kube-flannel/kube-flannel-ds-7p8gz" Feb 9 09:46:39.626817 kubelet[1974]: I0209 09:46:39.626777 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbw9n\" (UniqueName: \"kubernetes.io/projected/702a4982-c3f9-482e-a2d6-627317f283fe-kube-api-access-mbw9n\") pod \"kube-flannel-ds-7p8gz\" (UID: \"702a4982-c3f9-482e-a2d6-627317f283fe\") " pod="kube-flannel/kube-flannel-ds-7p8gz" Feb 9 09:46:39.626817 kubelet[1974]: I0209 09:46:39.626804 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/66b93822-531c-4a44-8408-e31e04ff077c-kube-proxy\") pod \"kube-proxy-qf7bh\" (UID: \"66b93822-531c-4a44-8408-e31e04ff077c\") " pod="kube-system/kube-proxy-qf7bh" Feb 9 09:46:39.626940 kubelet[1974]: I0209 09:46:39.626826 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/702a4982-c3f9-482e-a2d6-627317f283fe-cni-plugin\") pod \"kube-flannel-ds-7p8gz\" (UID: \"702a4982-c3f9-482e-a2d6-627317f283fe\") " pod="kube-flannel/kube-flannel-ds-7p8gz" Feb 9 09:46:39.626940 kubelet[1974]: I0209 09:46:39.626845 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/702a4982-c3f9-482e-a2d6-627317f283fe-cni\") pod \"kube-flannel-ds-7p8gz\" (UID: \"702a4982-c3f9-482e-a2d6-627317f283fe\") " pod="kube-flannel/kube-flannel-ds-7p8gz" Feb 9 09:46:39.626940 kubelet[1974]: I0209 09:46:39.626863 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66b93822-531c-4a44-8408-e31e04ff077c-lib-modules\") pod \"kube-proxy-qf7bh\" (UID: \"66b93822-531c-4a44-8408-e31e04ff077c\") " pod="kube-system/kube-proxy-qf7bh" Feb 9 09:46:39.626940 kubelet[1974]: I0209 09:46:39.626893 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/702a4982-c3f9-482e-a2d6-627317f283fe-xtables-lock\") pod \"kube-flannel-ds-7p8gz\" (UID: \"702a4982-c3f9-482e-a2d6-627317f283fe\") " pod="kube-flannel/kube-flannel-ds-7p8gz" Feb 9 09:46:39.626940 kubelet[1974]: I0209 09:46:39.626918 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66b93822-531c-4a44-8408-e31e04ff077c-xtables-lock\") pod \"kube-proxy-qf7bh\" (UID: \"66b93822-531c-4a44-8408-e31e04ff077c\") " pod="kube-system/kube-proxy-qf7bh" Feb 9 09:46:39.632569 systemd[1]: Created slice kubepods-burstable-pod702a4982_c3f9_482e_a2d6_627317f283fe.slice. Feb 9 09:46:39.933820 kubelet[1974]: E0209 09:46:39.933789 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:39.934154 kubelet[1974]: E0209 09:46:39.934113 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:39.934413 env[1147]: time="2024-02-09T09:46:39.934348760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qf7bh,Uid:66b93822-531c-4a44-8408-e31e04ff077c,Namespace:kube-system,Attempt:0,}" Feb 9 09:46:39.934882 env[1147]: time="2024-02-09T09:46:39.934845882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-7p8gz,Uid:702a4982-c3f9-482e-a2d6-627317f283fe,Namespace:kube-flannel,Attempt:0,}" Feb 9 09:46:39.955368 env[1147]: time="2024-02-09T09:46:39.955297209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:46:39.955368 env[1147]: time="2024-02-09T09:46:39.955340249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:46:39.955368 env[1147]: time="2024-02-09T09:46:39.955351129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:46:39.955538 env[1147]: time="2024-02-09T09:46:39.955465450Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/505dcc16bb965d38cda7201154f99e04bddd580b5fd28dbccddecd9f26be47cc pid=2068 runtime=io.containerd.runc.v2 Feb 9 09:46:39.956088 env[1147]: time="2024-02-09T09:46:39.955938732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:46:39.956088 env[1147]: time="2024-02-09T09:46:39.955970892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:46:39.956088 env[1147]: time="2024-02-09T09:46:39.955980732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:46:39.958892 env[1147]: time="2024-02-09T09:46:39.958669943Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d2c6ff028b72643d47db5d02868c8ff1c362bbbe251493372f49b07d24b21ca pid=2067 runtime=io.containerd.runc.v2 Feb 9 09:46:39.969074 systemd[1]: Started cri-containerd-505dcc16bb965d38cda7201154f99e04bddd580b5fd28dbccddecd9f26be47cc.scope. Feb 9 09:46:39.972946 systemd[1]: Started cri-containerd-4d2c6ff028b72643d47db5d02868c8ff1c362bbbe251493372f49b07d24b21ca.scope. Feb 9 09:46:40.048723 env[1147]: time="2024-02-09T09:46:40.048674436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qf7bh,Uid:66b93822-531c-4a44-8408-e31e04ff077c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d2c6ff028b72643d47db5d02868c8ff1c362bbbe251493372f49b07d24b21ca\"" Feb 9 09:46:40.049528 kubelet[1974]: E0209 09:46:40.049329 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:40.052050 env[1147]: time="2024-02-09T09:46:40.052010810Z" level=info msg="CreateContainer within sandbox \"4d2c6ff028b72643d47db5d02868c8ff1c362bbbe251493372f49b07d24b21ca\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 09:46:40.067295 env[1147]: time="2024-02-09T09:46:40.067241952Z" level=info msg="CreateContainer within sandbox \"4d2c6ff028b72643d47db5d02868c8ff1c362bbbe251493372f49b07d24b21ca\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fd407fd7103cec5fbd57c92507c116f6ba576d89fc7342e690f55403d00dc3e9\"" Feb 9 09:46:40.068120 env[1147]: time="2024-02-09T09:46:40.068091435Z" level=info msg="StartContainer for \"fd407fd7103cec5fbd57c92507c116f6ba576d89fc7342e690f55403d00dc3e9\"" Feb 9 09:46:40.068514 env[1147]: time="2024-02-09T09:46:40.068486837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-7p8gz,Uid:702a4982-c3f9-482e-a2d6-627317f283fe,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"505dcc16bb965d38cda7201154f99e04bddd580b5fd28dbccddecd9f26be47cc\"" Feb 9 09:46:40.069339 kubelet[1974]: E0209 09:46:40.069147 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:40.070042 env[1147]: time="2024-02-09T09:46:40.070015083Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 9 09:46:40.085297 systemd[1]: Started cri-containerd-fd407fd7103cec5fbd57c92507c116f6ba576d89fc7342e690f55403d00dc3e9.scope. 
Feb 9 09:46:40.131366 env[1147]: time="2024-02-09T09:46:40.131326212Z" level=info msg="StartContainer for \"fd407fd7103cec5fbd57c92507c116f6ba576d89fc7342e690f55403d00dc3e9\" returns successfully" Feb 9 09:46:40.794275 kubelet[1974]: E0209 09:46:40.794249 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:40.802463 kubelet[1974]: I0209 09:46:40.802436 1974 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qf7bh" podStartSLOduration=1.802406856 podCreationTimestamp="2024-02-09 09:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:46:40.801968375 +0000 UTC m=+15.157379217" watchObservedRunningTime="2024-02-09 09:46:40.802406856 +0000 UTC m=+15.157817698" Feb 9 09:46:41.086322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3734228448.mount: Deactivated successfully. Feb 9 09:46:41.123427 env[1147]: time="2024-02-09T09:46:41.123374778Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:41.125320 env[1147]: time="2024-02-09T09:46:41.125284825Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:41.130473 env[1147]: time="2024-02-09T09:46:41.130439205Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:41.131553 env[1147]: time="2024-02-09T09:46:41.131521770Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:41.132296 env[1147]: time="2024-02-09T09:46:41.132264732Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 9 09:46:41.135729 env[1147]: time="2024-02-09T09:46:41.135698586Z" level=info msg="CreateContainer within sandbox \"505dcc16bb965d38cda7201154f99e04bddd580b5fd28dbccddecd9f26be47cc\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 9 09:46:41.143965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2978129714.mount: Deactivated successfully. Feb 9 09:46:41.147086 env[1147]: time="2024-02-09T09:46:41.147055030Z" level=info msg="CreateContainer within sandbox \"505dcc16bb965d38cda7201154f99e04bddd580b5fd28dbccddecd9f26be47cc\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"55e81b9670fa2d0b6574e05e6b3405f8296c7e883bd9ab0bdd0b88ef561d713c\"" Feb 9 09:46:41.147634 env[1147]: time="2024-02-09T09:46:41.147608152Z" level=info msg="StartContainer for \"55e81b9670fa2d0b6574e05e6b3405f8296c7e883bd9ab0bdd0b88ef561d713c\"" Feb 9 09:46:41.161191 systemd[1]: Started cri-containerd-55e81b9670fa2d0b6574e05e6b3405f8296c7e883bd9ab0bdd0b88ef561d713c.scope. 
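[Editor's note] The pod_startup_latency_tracker entries above report podStartSLOduration as roughly the gap between podCreationTimestamp and observedRunningTime, since no image pull happened (firstStartedPulling is the zero time). A small Go sketch of that arithmetic using the kube-proxy-qf7bh timestamps from the log; it only illustrates the reported numbers, not the tracker's implementation:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kube-proxy-qf7bh entry above.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2024-02-09 09:46:39 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2024-02-09 09:46:40.801968375 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// With no pulling, startup time is essentially pod creation to the first
	// observation of the running pod (~1.80s here, close to the logged
	// podStartSLOduration=1.802406856).
	fmt.Println(running.Sub(created))
}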
Feb 9 09:46:41.196063 systemd[1]: cri-containerd-55e81b9670fa2d0b6574e05e6b3405f8296c7e883bd9ab0bdd0b88ef561d713c.scope: Deactivated successfully. Feb 9 09:46:41.196735 env[1147]: time="2024-02-09T09:46:41.196690983Z" level=info msg="StartContainer for \"55e81b9670fa2d0b6574e05e6b3405f8296c7e883bd9ab0bdd0b88ef561d713c\" returns successfully" Feb 9 09:46:41.234836 env[1147]: time="2024-02-09T09:46:41.234790011Z" level=info msg="shim disconnected" id=55e81b9670fa2d0b6574e05e6b3405f8296c7e883bd9ab0bdd0b88ef561d713c Feb 9 09:46:41.235025 env[1147]: time="2024-02-09T09:46:41.234842131Z" level=warning msg="cleaning up after shim disconnected" id=55e81b9670fa2d0b6574e05e6b3405f8296c7e883bd9ab0bdd0b88ef561d713c namespace=k8s.io Feb 9 09:46:41.235025 env[1147]: time="2024-02-09T09:46:41.234852051Z" level=info msg="cleaning up dead shim" Feb 9 09:46:41.241484 env[1147]: time="2024-02-09T09:46:41.241449717Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:46:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2333 runtime=io.containerd.runc.v2\n" Feb 9 09:46:41.797217 kubelet[1974]: E0209 09:46:41.797191 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:41.800318 env[1147]: time="2024-02-09T09:46:41.799838206Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 9 09:46:42.861332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2791359218.mount: Deactivated successfully. Feb 9 09:46:43.539075 env[1147]: time="2024-02-09T09:46:43.539032229Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:43.540521 env[1147]: time="2024-02-09T09:46:43.540486194Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:43.542370 env[1147]: time="2024-02-09T09:46:43.542337201Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:43.544060 env[1147]: time="2024-02-09T09:46:43.544031967Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:43.544990 env[1147]: time="2024-02-09T09:46:43.544955130Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Feb 9 09:46:43.546971 env[1147]: time="2024-02-09T09:46:43.546923417Z" level=info msg="CreateContainer within sandbox \"505dcc16bb965d38cda7201154f99e04bddd580b5fd28dbccddecd9f26be47cc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 09:46:43.556820 env[1147]: time="2024-02-09T09:46:43.556781893Z" level=info msg="CreateContainer within sandbox \"505dcc16bb965d38cda7201154f99e04bddd580b5fd28dbccddecd9f26be47cc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ef61e196a1e2f955fd3595bf78e5e739008e7e493a8eda442ff3a0e59070108c\"" Feb 9 09:46:43.558061 env[1147]: time="2024-02-09T09:46:43.557208494Z" level=info msg="StartContainer 
for \"ef61e196a1e2f955fd3595bf78e5e739008e7e493a8eda442ff3a0e59070108c\"" Feb 9 09:46:43.571254 systemd[1]: Started cri-containerd-ef61e196a1e2f955fd3595bf78e5e739008e7e493a8eda442ff3a0e59070108c.scope. Feb 9 09:46:43.609129 env[1147]: time="2024-02-09T09:46:43.608397677Z" level=info msg="StartContainer for \"ef61e196a1e2f955fd3595bf78e5e739008e7e493a8eda442ff3a0e59070108c\" returns successfully" Feb 9 09:46:43.613943 systemd[1]: cri-containerd-ef61e196a1e2f955fd3595bf78e5e739008e7e493a8eda442ff3a0e59070108c.scope: Deactivated successfully. Feb 9 09:46:43.701218 kubelet[1974]: I0209 09:46:43.701185 1974 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 09:46:43.718986 kubelet[1974]: I0209 09:46:43.718921 1974 topology_manager.go:215] "Topology Admit Handler" podUID="8be7a93f-6611-4aef-bbb8-f1d9baf57ca7" podNamespace="kube-system" podName="coredns-5dd5756b68-h74nj" Feb 9 09:46:43.719313 kubelet[1974]: I0209 09:46:43.719241 1974 topology_manager.go:215] "Topology Admit Handler" podUID="1a9569fa-92f9-4b00-88ed-6e7c46be712b" podNamespace="kube-system" podName="coredns-5dd5756b68-zdccn" Feb 9 09:46:43.729360 systemd[1]: Created slice kubepods-burstable-pod8be7a93f_6611_4aef_bbb8_f1d9baf57ca7.slice. Feb 9 09:46:43.733497 systemd[1]: Created slice kubepods-burstable-pod1a9569fa_92f9_4b00_88ed_6e7c46be712b.slice. Feb 9 09:46:43.747717 env[1147]: time="2024-02-09T09:46:43.747665934Z" level=info msg="shim disconnected" id=ef61e196a1e2f955fd3595bf78e5e739008e7e493a8eda442ff3a0e59070108c Feb 9 09:46:43.747717 env[1147]: time="2024-02-09T09:46:43.747715414Z" level=warning msg="cleaning up after shim disconnected" id=ef61e196a1e2f955fd3595bf78e5e739008e7e493a8eda442ff3a0e59070108c namespace=k8s.io Feb 9 09:46:43.747844 env[1147]: time="2024-02-09T09:46:43.747725134Z" level=info msg="cleaning up dead shim" Feb 9 09:46:43.755192 kubelet[1974]: I0209 09:46:43.755158 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk86z\" (UniqueName: \"kubernetes.io/projected/8be7a93f-6611-4aef-bbb8-f1d9baf57ca7-kube-api-access-rk86z\") pod \"coredns-5dd5756b68-h74nj\" (UID: \"8be7a93f-6611-4aef-bbb8-f1d9baf57ca7\") " pod="kube-system/coredns-5dd5756b68-h74nj" Feb 9 09:46:43.755293 kubelet[1974]: I0209 09:46:43.755205 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a9569fa-92f9-4b00-88ed-6e7c46be712b-config-volume\") pod \"coredns-5dd5756b68-zdccn\" (UID: \"1a9569fa-92f9-4b00-88ed-6e7c46be712b\") " pod="kube-system/coredns-5dd5756b68-zdccn" Feb 9 09:46:43.755293 kubelet[1974]: I0209 09:46:43.755227 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8be7a93f-6611-4aef-bbb8-f1d9baf57ca7-config-volume\") pod \"coredns-5dd5756b68-h74nj\" (UID: \"8be7a93f-6611-4aef-bbb8-f1d9baf57ca7\") " pod="kube-system/coredns-5dd5756b68-h74nj" Feb 9 09:46:43.755293 kubelet[1974]: I0209 09:46:43.755249 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxbqk\" (UniqueName: \"kubernetes.io/projected/1a9569fa-92f9-4b00-88ed-6e7c46be712b-kube-api-access-jxbqk\") pod \"coredns-5dd5756b68-zdccn\" (UID: \"1a9569fa-92f9-4b00-88ed-6e7c46be712b\") " pod="kube-system/coredns-5dd5756b68-zdccn" Feb 9 09:46:43.756020 env[1147]: time="2024-02-09T09:46:43.755969803Z" 
level=warning msg="cleanup warnings time=\"2024-02-09T09:46:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2386 runtime=io.containerd.runc.v2\n" Feb 9 09:46:43.765559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef61e196a1e2f955fd3595bf78e5e739008e7e493a8eda442ff3a0e59070108c-rootfs.mount: Deactivated successfully. Feb 9 09:46:43.805570 kubelet[1974]: E0209 09:46:43.805484 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:43.808117 env[1147]: time="2024-02-09T09:46:43.807940509Z" level=info msg="CreateContainer within sandbox \"505dcc16bb965d38cda7201154f99e04bddd580b5fd28dbccddecd9f26be47cc\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 9 09:46:43.822716 env[1147]: time="2024-02-09T09:46:43.822668481Z" level=info msg="CreateContainer within sandbox \"505dcc16bb965d38cda7201154f99e04bddd580b5fd28dbccddecd9f26be47cc\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"90792d2b1fd1837b6fc32c8c42acc55b70463a7ec3703d41045478806f862fc3\"" Feb 9 09:46:43.823628 env[1147]: time="2024-02-09T09:46:43.823234243Z" level=info msg="StartContainer for \"90792d2b1fd1837b6fc32c8c42acc55b70463a7ec3703d41045478806f862fc3\"" Feb 9 09:46:43.838156 systemd[1]: Started cri-containerd-90792d2b1fd1837b6fc32c8c42acc55b70463a7ec3703d41045478806f862fc3.scope. Feb 9 09:46:43.886535 env[1147]: time="2024-02-09T09:46:43.886473309Z" level=info msg="StartContainer for \"90792d2b1fd1837b6fc32c8c42acc55b70463a7ec3703d41045478806f862fc3\" returns successfully" Feb 9 09:46:44.033312 kubelet[1974]: E0209 09:46:44.033254 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:44.033860 env[1147]: time="2024-02-09T09:46:44.033822830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-h74nj,Uid:8be7a93f-6611-4aef-bbb8-f1d9baf57ca7,Namespace:kube-system,Attempt:0,}" Feb 9 09:46:44.035987 kubelet[1974]: E0209 09:46:44.035968 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:44.036388 env[1147]: time="2024-02-09T09:46:44.036353479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-zdccn,Uid:1a9569fa-92f9-4b00-88ed-6e7c46be712b,Namespace:kube-system,Attempt:0,}" Feb 9 09:46:44.063448 env[1147]: time="2024-02-09T09:46:44.063341411Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-zdccn,Uid:1a9569fa-92f9-4b00-88ed-6e7c46be712b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d3eee57e4cb30665d276e0d085d85af9146337c988ec398c341dcf06701c07af\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 9 09:46:44.065179 kubelet[1974]: E0209 09:46:44.064012 1974 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3eee57e4cb30665d276e0d085d85af9146337c988ec398c341dcf06701c07af\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 9 09:46:44.065179 kubelet[1974]: E0209 09:46:44.064066 
1974 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3eee57e4cb30665d276e0d085d85af9146337c988ec398c341dcf06701c07af\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5dd5756b68-zdccn" Feb 9 09:46:44.065179 kubelet[1974]: E0209 09:46:44.064087 1974 kuberuntime_manager.go:1119] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3eee57e4cb30665d276e0d085d85af9146337c988ec398c341dcf06701c07af\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5dd5756b68-zdccn" Feb 9 09:46:44.065179 kubelet[1974]: E0209 09:46:44.064141 1974 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-zdccn_kube-system(1a9569fa-92f9-4b00-88ed-6e7c46be712b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-zdccn_kube-system(1a9569fa-92f9-4b00-88ed-6e7c46be712b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3eee57e4cb30665d276e0d085d85af9146337c988ec398c341dcf06701c07af\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-5dd5756b68-zdccn" podUID="1a9569fa-92f9-4b00-88ed-6e7c46be712b" Feb 9 09:46:44.067168 env[1147]: time="2024-02-09T09:46:44.067126504Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-h74nj,Uid:8be7a93f-6611-4aef-bbb8-f1d9baf57ca7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7dcfeaa627d2f0dabc51864f4ace7bb44d1c977b0aeebfffa491f28c4a1e8411\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 9 09:46:44.067439 kubelet[1974]: E0209 09:46:44.067410 1974 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dcfeaa627d2f0dabc51864f4ace7bb44d1c977b0aeebfffa491f28c4a1e8411\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 9 09:46:44.067512 kubelet[1974]: E0209 09:46:44.067457 1974 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dcfeaa627d2f0dabc51864f4ace7bb44d1c977b0aeebfffa491f28c4a1e8411\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5dd5756b68-h74nj" Feb 9 09:46:44.067512 kubelet[1974]: E0209 09:46:44.067473 1974 kuberuntime_manager.go:1119] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dcfeaa627d2f0dabc51864f4ace7bb44d1c977b0aeebfffa491f28c4a1e8411\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5dd5756b68-h74nj" Feb 9 09:46:44.067568 kubelet[1974]: E0209 09:46:44.067514 1974 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-h74nj_kube-system(8be7a93f-6611-4aef-bbb8-f1d9baf57ca7)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-h74nj_kube-system(8be7a93f-6611-4aef-bbb8-f1d9baf57ca7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7dcfeaa627d2f0dabc51864f4ace7bb44d1c977b0aeebfffa491f28c4a1e8411\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-5dd5756b68-h74nj" podUID="8be7a93f-6611-4aef-bbb8-f1d9baf57ca7" Feb 9 09:46:44.808892 kubelet[1974]: E0209 09:46:44.808842 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:44.820310 kubelet[1974]: I0209 09:46:44.820113 1974 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-7p8gz" podStartSLOduration=2.344452833 podCreationTimestamp="2024-02-09 09:46:39 +0000 UTC" firstStartedPulling="2024-02-09 09:46:40.069545361 +0000 UTC m=+14.424956203" lastFinishedPulling="2024-02-09 09:46:43.545170531 +0000 UTC m=+17.900581413" observedRunningTime="2024-02-09 09:46:44.818577118 +0000 UTC m=+19.173988000" watchObservedRunningTime="2024-02-09 09:46:44.820078043 +0000 UTC m=+19.175488925" Feb 9 09:46:44.957121 systemd-networkd[1055]: flannel.1: Link UP Feb 9 09:46:44.957126 systemd-networkd[1055]: flannel.1: Gained carrier Feb 9 09:46:45.810244 kubelet[1974]: E0209 09:46:45.810216 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:46.285784 systemd-networkd[1055]: flannel.1: Gained IPv6LL Feb 9 09:46:52.449857 systemd[1]: Started sshd@5-10.0.0.20:22-10.0.0.1:53158.service. Feb 9 09:46:52.484564 sshd[2605]: Accepted publickey for core from 10.0.0.1 port 53158 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:46:52.485715 sshd[2605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:46:52.488898 systemd-logind[1134]: New session 6 of user core. Feb 9 09:46:52.489674 systemd[1]: Started session-6.scope. Feb 9 09:46:52.602977 sshd[2605]: pam_unix(sshd:session): session closed for user core Feb 9 09:46:52.605337 systemd[1]: sshd@5-10.0.0.20:22-10.0.0.1:53158.service: Deactivated successfully. Feb 9 09:46:52.606072 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 09:46:52.606577 systemd-logind[1134]: Session 6 logged out. Waiting for processes to exit. Feb 9 09:46:52.607141 systemd-logind[1134]: Removed session 6. 
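[Editor's note] The CreatePodSandbox failures for both coredns pods above come from the flannel CNI plugin being invoked before kube-flannel has written /run/flannel/subnet.env, so loadFlannelSubnetEnv cannot open the file and the sandboxes are retried until the file appears. A rough Go sketch of reading that env-style file, assuming the FLANNEL_* keys flannel conventionally writes there (an illustration under those assumptions, not flannel's code):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// subnetEnv holds the values kube-flannel writes to /run/flannel/subnet.env
// once it is running; until that file exists the CNI plugin fails exactly as
// the sandbox errors above show.
type subnetEnv struct {
	network string // e.g. FLANNEL_NETWORK=192.168.0.0/16 (illustrative)
	subnet  string // e.g. FLANNEL_SUBNET=192.168.0.1/24 (illustrative)
	mtu     string
	ipmasq  string
}

func loadSubnetEnv(path string) (*subnetEnv, error) {
	f, err := os.Open(path)
	if err != nil {
		// Matches the logged failure:
		// "open /run/flannel/subnet.env: no such file or directory".
		return nil, fmt.Errorf("loadFlannelSubnetEnv failed: %w", err)
	}
	defer f.Close()

	env := &subnetEnv{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		key, value, ok := strings.Cut(sc.Text(), "=")
		if !ok {
			continue
		}
		switch key {
		case "FLANNEL_NETWORK":
			env.network = value
		case "FLANNEL_SUBNET":
			env.subnet = value
		case "FLANNEL_MTU":
			env.mtu = value
		case "FLANNEL_IPMASQ":
			env.ipmasq = value
		}
	}
	return env, sc.Err()
}

func main() {
	env, err := loadSubnetEnv("/run/flannel/subnet.env")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("network=%s subnet=%s mtu=%s ipmasq=%s\n",
		env.network, env.subnet, env.mtu, env.ipmasq)
}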
Feb 9 09:46:55.766188 kubelet[1974]: E0209 09:46:55.766151 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:55.766595 env[1147]: time="2024-02-09T09:46:55.766516950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-zdccn,Uid:1a9569fa-92f9-4b00-88ed-6e7c46be712b,Namespace:kube-system,Attempt:0,}" Feb 9 09:46:55.784835 systemd-networkd[1055]: cni0: Link UP Feb 9 09:46:55.784840 systemd-networkd[1055]: cni0: Gained carrier Feb 9 09:46:55.785062 systemd-networkd[1055]: cni0: Lost carrier Feb 9 09:46:55.794159 systemd-networkd[1055]: vethc616f115: Link UP Feb 9 09:46:55.796802 kernel: cni0: port 1(vethc616f115) entered blocking state Feb 9 09:46:55.796875 kernel: cni0: port 1(vethc616f115) entered disabled state Feb 9 09:46:55.796897 kernel: device vethc616f115 entered promiscuous mode Feb 9 09:46:55.800127 kernel: cni0: port 1(vethc616f115) entered blocking state Feb 9 09:46:55.800185 kernel: cni0: port 1(vethc616f115) entered forwarding state Feb 9 09:46:55.800215 kernel: cni0: port 1(vethc616f115) entered disabled state Feb 9 09:46:55.809559 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethc616f115: link becomes ready Feb 9 09:46:55.809635 kernel: cni0: port 1(vethc616f115) entered blocking state Feb 9 09:46:55.809671 kernel: cni0: port 1(vethc616f115) entered forwarding state Feb 9 09:46:55.809550 systemd-networkd[1055]: vethc616f115: Gained carrier Feb 9 09:46:55.809754 systemd-networkd[1055]: cni0: Gained carrier Feb 9 09:46:55.812447 env[1147]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400001a928), "name":"cbr0", "type":"bridge"} Feb 9 09:46:55.812447 env[1147]: delegateAdd: netconf sent to delegate plugin: Feb 9 09:46:55.822046 env[1147]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-09T09:46:55.821969600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:46:55.822178 env[1147]: time="2024-02-09T09:46:55.822016440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:46:55.822268 env[1147]: time="2024-02-09T09:46:55.822239320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:46:55.822548 env[1147]: time="2024-02-09T09:46:55.822515161Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c86e7a5574a2e174e1ce7dc65f270a8f77fcaea196f9c87c10626213a4327e1 pid=2686 runtime=io.containerd.runc.v2 Feb 9 09:46:55.839579 systemd[1]: Started cri-containerd-0c86e7a5574a2e174e1ce7dc65f270a8f77fcaea196f9c87c10626213a4327e1.scope. 
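[Editor's note] Just before the first coredns sandbox above is created, the log prints the delegate netconf that flannel hands to the bridge plugin for cbr0. A short Go sketch that unmarshals that exact JSON to make its structure easier to read; the struct and field names below are mine, chosen to match the JSON keys, not flannel's internal types:

package main

import (
	"encoding/json"
	"fmt"
)

// netConf captures only the fields visible in the delegate config printed in
// the log before the bridge plugin is invoked.
type netConf struct {
	CNIVersion       string `json:"cniVersion"`
	Name             string `json:"name"`
	Type             string `json:"type"`
	HairpinMode      bool   `json:"hairpinMode"`
	IsDefaultGateway bool   `json:"isDefaultGateway"`
	IsGateway        bool   `json:"isGateway"`
	IPMasq           bool   `json:"ipMasq"`
	MTU              int    `json:"mtu"`
	IPAM             struct {
		Type   string                `json:"type"`
		Ranges [][]map[string]string `json:"ranges"`
		Routes []map[string]string   `json:"routes"`
	} `json:"ipam"`
}

func main() {
	// Verbatim from the "delegateAdd: netconf sent to delegate plugin" entry above.
	raw := `{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}`

	var conf netConf
	if err := json.Unmarshal([]byte(raw), &conf); err != nil {
		panic(err)
	}
	fmt.Printf("bridge %q: pod subnet %s, mtu %d, default gw %v\n",
		conf.Name, conf.IPAM.Ranges[0][0]["subnet"], conf.MTU, conf.IsDefaultGateway)
}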
Feb 9 09:46:55.867006 systemd-resolved[1091]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 09:46:55.883914 env[1147]: time="2024-02-09T09:46:55.883862104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-zdccn,Uid:1a9569fa-92f9-4b00-88ed-6e7c46be712b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c86e7a5574a2e174e1ce7dc65f270a8f77fcaea196f9c87c10626213a4327e1\"" Feb 9 09:46:55.884575 kubelet[1974]: E0209 09:46:55.884553 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:55.886548 env[1147]: time="2024-02-09T09:46:55.886511750Z" level=info msg="CreateContainer within sandbox \"0c86e7a5574a2e174e1ce7dc65f270a8f77fcaea196f9c87c10626213a4327e1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:46:55.899026 env[1147]: time="2024-02-09T09:46:55.898962979Z" level=info msg="CreateContainer within sandbox \"0c86e7a5574a2e174e1ce7dc65f270a8f77fcaea196f9c87c10626213a4327e1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4aa96cc1417501acbbcb3ceaeee063f1fda4f7f1810f655c595266bc857244dd\"" Feb 9 09:46:55.899811 env[1147]: time="2024-02-09T09:46:55.899786341Z" level=info msg="StartContainer for \"4aa96cc1417501acbbcb3ceaeee063f1fda4f7f1810f655c595266bc857244dd\"" Feb 9 09:46:55.914572 systemd[1]: Started cri-containerd-4aa96cc1417501acbbcb3ceaeee063f1fda4f7f1810f655c595266bc857244dd.scope. Feb 9 09:46:55.970789 env[1147]: time="2024-02-09T09:46:55.970744866Z" level=info msg="StartContainer for \"4aa96cc1417501acbbcb3ceaeee063f1fda4f7f1810f655c595266bc857244dd\" returns successfully" Feb 9 09:46:56.832141 kubelet[1974]: E0209 09:46:56.832102 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:56.842347 kubelet[1974]: I0209 09:46:56.842317 1974 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-zdccn" podStartSLOduration=17.84228524 podCreationTimestamp="2024-02-09 09:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:46:56.84217072 +0000 UTC m=+31.197581602" watchObservedRunningTime="2024-02-09 09:46:56.84228524 +0000 UTC m=+31.197696122" Feb 9 09:46:57.101802 systemd-networkd[1055]: cni0: Gained IPv6LL Feb 9 09:46:57.549802 systemd-networkd[1055]: vethc616f115: Gained IPv6LL Feb 9 09:46:57.607560 systemd[1]: Started sshd@6-10.0.0.20:22-10.0.0.1:37782.service. Feb 9 09:46:57.642353 sshd[2763]: Accepted publickey for core from 10.0.0.1 port 37782 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:46:57.643502 sshd[2763]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:46:57.647102 systemd-logind[1134]: New session 7 of user core. Feb 9 09:46:57.647811 systemd[1]: Started session-7.scope. Feb 9 09:46:57.754038 sshd[2763]: pam_unix(sshd:session): session closed for user core Feb 9 09:46:57.756504 systemd-logind[1134]: Session 7 logged out. Waiting for processes to exit. Feb 9 09:46:57.756677 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 09:46:57.757194 systemd[1]: sshd@6-10.0.0.20:22-10.0.0.1:37782.service: Deactivated successfully. 
Feb 9 09:46:57.758115 systemd-logind[1134]: Removed session 7. Feb 9 09:46:57.833891 kubelet[1974]: E0209 09:46:57.833800 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:58.835563 kubelet[1974]: E0209 09:46:58.835537 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:59.766138 kubelet[1974]: E0209 09:46:59.766102 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:59.766623 env[1147]: time="2024-02-09T09:46:59.766584499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-h74nj,Uid:8be7a93f-6611-4aef-bbb8-f1d9baf57ca7,Namespace:kube-system,Attempt:0,}" Feb 9 09:46:59.794938 systemd-networkd[1055]: vethe55b3c77: Link UP Feb 9 09:46:59.797133 kernel: cni0: port 2(vethe55b3c77) entered blocking state Feb 9 09:46:59.797208 kernel: cni0: port 2(vethe55b3c77) entered disabled state Feb 9 09:46:59.797740 kernel: device vethe55b3c77 entered promiscuous mode Feb 9 09:46:59.805165 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:46:59.805244 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethe55b3c77: link becomes ready Feb 9 09:46:59.805269 kernel: cni0: port 2(vethe55b3c77) entered blocking state Feb 9 09:46:59.805282 kernel: cni0: port 2(vethe55b3c77) entered forwarding state Feb 9 09:46:59.804602 systemd-networkd[1055]: vethe55b3c77: Gained carrier Feb 9 09:46:59.808100 env[1147]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40001148e8), "name":"cbr0", "type":"bridge"} Feb 9 09:46:59.808100 env[1147]: delegateAdd: netconf sent to delegate plugin: Feb 9 09:46:59.826839 env[1147]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-09T09:46:59.826775305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:46:59.826839 env[1147]: time="2024-02-09T09:46:59.826815065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:46:59.826839 env[1147]: time="2024-02-09T09:46:59.826825345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:46:59.827006 env[1147]: time="2024-02-09T09:46:59.826959505Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8497dcfed8825fdf0bb352c20231335b562880fb5093585354a152f49bef626a pid=2824 runtime=io.containerd.runc.v2 Feb 9 09:46:59.842124 systemd[1]: Started cri-containerd-8497dcfed8825fdf0bb352c20231335b562880fb5093585354a152f49bef626a.scope. Feb 9 09:46:59.867575 systemd-resolved[1091]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 09:46:59.885476 env[1147]: time="2024-02-09T09:46:59.885437027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-h74nj,Uid:8be7a93f-6611-4aef-bbb8-f1d9baf57ca7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8497dcfed8825fdf0bb352c20231335b562880fb5093585354a152f49bef626a\"" Feb 9 09:46:59.886294 kubelet[1974]: E0209 09:46:59.886274 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:46:59.888439 env[1147]: time="2024-02-09T09:46:59.888409833Z" level=info msg="CreateContainer within sandbox \"8497dcfed8825fdf0bb352c20231335b562880fb5093585354a152f49bef626a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:46:59.952010 env[1147]: time="2024-02-09T09:46:59.951957846Z" level=info msg="CreateContainer within sandbox \"8497dcfed8825fdf0bb352c20231335b562880fb5093585354a152f49bef626a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3b993570e260a3cf9e7cff440d56250b132f899c4db05d4244e020823ea6fd46\"" Feb 9 09:46:59.952572 env[1147]: time="2024-02-09T09:46:59.952522287Z" level=info msg="StartContainer for \"3b993570e260a3cf9e7cff440d56250b132f899c4db05d4244e020823ea6fd46\"" Feb 9 09:46:59.966919 systemd[1]: Started cri-containerd-3b993570e260a3cf9e7cff440d56250b132f899c4db05d4244e020823ea6fd46.scope. Feb 9 09:47:00.016334 env[1147]: time="2024-02-09T09:47:00.016235460Z" level=info msg="StartContainer for \"3b993570e260a3cf9e7cff440d56250b132f899c4db05d4244e020823ea6fd46\" returns successfully" Feb 9 09:47:00.840622 kubelet[1974]: E0209 09:47:00.840595 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:00.858098 kubelet[1974]: I0209 09:47:00.858054 1974 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-h74nj" podStartSLOduration=21.858018334 podCreationTimestamp="2024-02-09 09:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:47:00.849397196 +0000 UTC m=+35.204808118" watchObservedRunningTime="2024-02-09 09:47:00.858018334 +0000 UTC m=+35.213429216" Feb 9 09:47:01.773771 systemd-networkd[1055]: vethe55b3c77: Gained IPv6LL Feb 9 09:47:01.842295 kubelet[1974]: E0209 09:47:01.842265 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:02.758988 systemd[1]: Started sshd@7-10.0.0.20:22-10.0.0.1:57990.service. 
Feb 9 09:47:02.794064 sshd[2920]: Accepted publickey for core from 10.0.0.1 port 57990 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:47:02.795622 sshd[2920]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:02.799405 systemd-logind[1134]: New session 8 of user core. Feb 9 09:47:02.799827 systemd[1]: Started session-8.scope. Feb 9 09:47:02.844601 kubelet[1974]: E0209 09:47:02.844578 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:47:02.907277 sshd[2920]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:02.910914 systemd[1]: Started sshd@8-10.0.0.20:22-10.0.0.1:57996.service. Feb 9 09:47:02.912279 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 09:47:02.912881 systemd-logind[1134]: Session 8 logged out. Waiting for processes to exit. Feb 9 09:47:02.913018 systemd[1]: sshd@7-10.0.0.20:22-10.0.0.1:57990.service: Deactivated successfully. Feb 9 09:47:02.914082 systemd-logind[1134]: Removed session 8. Feb 9 09:47:02.946911 sshd[2933]: Accepted publickey for core from 10.0.0.1 port 57996 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:47:02.948226 sshd[2933]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:02.951240 systemd-logind[1134]: New session 9 of user core. Feb 9 09:47:02.952151 systemd[1]: Started session-9.scope. Feb 9 09:47:03.158534 sshd[2933]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:03.161870 systemd[1]: Started sshd@9-10.0.0.20:22-10.0.0.1:58006.service. Feb 9 09:47:03.176570 systemd-logind[1134]: Session 9 logged out. Waiting for processes to exit. Feb 9 09:47:03.176974 systemd[1]: sshd@8-10.0.0.20:22-10.0.0.1:57996.service: Deactivated successfully. Feb 9 09:47:03.177669 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 09:47:03.181751 systemd-logind[1134]: Removed session 9. Feb 9 09:47:03.203808 sshd[2944]: Accepted publickey for core from 10.0.0.1 port 58006 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:47:03.204875 sshd[2944]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:03.207870 systemd-logind[1134]: New session 10 of user core. Feb 9 09:47:03.208729 systemd[1]: Started session-10.scope. Feb 9 09:47:03.315779 sshd[2944]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:03.318387 systemd[1]: sshd@9-10.0.0.20:22-10.0.0.1:58006.service: Deactivated successfully. Feb 9 09:47:03.319096 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 09:47:03.319575 systemd-logind[1134]: Session 10 logged out. Waiting for processes to exit. Feb 9 09:47:03.320256 systemd-logind[1134]: Removed session 10. Feb 9 09:47:08.321005 systemd[1]: Started sshd@10-10.0.0.20:22-10.0.0.1:58008.service. Feb 9 09:47:08.355871 sshd[2979]: Accepted publickey for core from 10.0.0.1 port 58008 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:47:08.357423 sshd[2979]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:08.360529 systemd-logind[1134]: New session 11 of user core. Feb 9 09:47:08.361394 systemd[1]: Started session-11.scope. Feb 9 09:47:08.470450 sshd[2979]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:08.473399 systemd[1]: sshd@10-10.0.0.20:22-10.0.0.1:58008.service: Deactivated successfully. 
Feb 9 09:47:08.474031 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 09:47:08.474601 systemd-logind[1134]: Session 11 logged out. Waiting for processes to exit. Feb 9 09:47:08.475585 systemd[1]: Started sshd@11-10.0.0.20:22-10.0.0.1:58010.service. Feb 9 09:47:08.476285 systemd-logind[1134]: Removed session 11. Feb 9 09:47:08.510574 sshd[2993]: Accepted publickey for core from 10.0.0.1 port 58010 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:47:08.512002 sshd[2993]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:08.515027 systemd-logind[1134]: New session 12 of user core. Feb 9 09:47:08.515736 systemd[1]: Started session-12.scope. Feb 9 09:47:08.691245 sshd[2993]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:08.695223 systemd[1]: Started sshd@12-10.0.0.20:22-10.0.0.1:58026.service. Feb 9 09:47:08.695734 systemd[1]: sshd@11-10.0.0.20:22-10.0.0.1:58010.service: Deactivated successfully. Feb 9 09:47:08.696543 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 09:47:08.697097 systemd-logind[1134]: Session 12 logged out. Waiting for processes to exit. Feb 9 09:47:08.697902 systemd-logind[1134]: Removed session 12. Feb 9 09:47:08.730336 sshd[3004]: Accepted publickey for core from 10.0.0.1 port 58026 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:47:08.731536 sshd[3004]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:08.734809 systemd-logind[1134]: New session 13 of user core. Feb 9 09:47:08.735690 systemd[1]: Started session-13.scope. Feb 9 09:47:09.462784 sshd[3004]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:09.465577 systemd[1]: sshd@12-10.0.0.20:22-10.0.0.1:58026.service: Deactivated successfully. Feb 9 09:47:09.466321 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 09:47:09.466970 systemd-logind[1134]: Session 13 logged out. Waiting for processes to exit. Feb 9 09:47:09.468301 systemd[1]: Started sshd@13-10.0.0.20:22-10.0.0.1:58040.service. Feb 9 09:47:09.472008 systemd-logind[1134]: Removed session 13. Feb 9 09:47:09.511562 sshd[3024]: Accepted publickey for core from 10.0.0.1 port 58040 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:47:09.512795 sshd[3024]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:09.516214 systemd-logind[1134]: New session 14 of user core. Feb 9 09:47:09.517127 systemd[1]: Started session-14.scope. Feb 9 09:47:09.763496 sshd[3024]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:09.766607 systemd[1]: Started sshd@14-10.0.0.20:22-10.0.0.1:58054.service. Feb 9 09:47:09.770993 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 09:47:09.771864 systemd-logind[1134]: Session 14 logged out. Waiting for processes to exit. Feb 9 09:47:09.771997 systemd[1]: sshd@13-10.0.0.20:22-10.0.0.1:58040.service: Deactivated successfully. Feb 9 09:47:09.773312 systemd-logind[1134]: Removed session 14. Feb 9 09:47:09.802678 sshd[3036]: Accepted publickey for core from 10.0.0.1 port 58054 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:47:09.803783 sshd[3036]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:09.807517 systemd-logind[1134]: New session 15 of user core. Feb 9 09:47:09.808046 systemd[1]: Started session-15.scope. 
Feb 9 09:47:09.916598 sshd[3036]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:09.919285 systemd[1]: sshd@14-10.0.0.20:22-10.0.0.1:58054.service: Deactivated successfully. Feb 9 09:47:09.920136 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 09:47:09.920724 systemd-logind[1134]: Session 15 logged out. Waiting for processes to exit. Feb 9 09:47:09.921513 systemd-logind[1134]: Removed session 15. Feb 9 09:47:14.921381 systemd[1]: Started sshd@15-10.0.0.20:22-10.0.0.1:40990.service. Feb 9 09:47:14.959996 sshd[3077]: Accepted publickey for core from 10.0.0.1 port 40990 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:47:14.961486 sshd[3077]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:14.965321 systemd-logind[1134]: New session 16 of user core. Feb 9 09:47:14.965683 systemd[1]: Started session-16.scope. Feb 9 09:47:15.093177 sshd[3077]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:15.095561 systemd[1]: sshd@15-10.0.0.20:22-10.0.0.1:40990.service: Deactivated successfully. Feb 9 09:47:15.096419 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 09:47:15.097101 systemd-logind[1134]: Session 16 logged out. Waiting for processes to exit. Feb 9 09:47:15.097896 systemd-logind[1134]: Removed session 16. Feb 9 09:47:20.097319 systemd[1]: Started sshd@16-10.0.0.20:22-10.0.0.1:40994.service. Feb 9 09:47:20.133118 sshd[3120]: Accepted publickey for core from 10.0.0.1 port 40994 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:47:20.134381 sshd[3120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:20.139367 systemd-logind[1134]: New session 17 of user core. Feb 9 09:47:20.139949 systemd[1]: Started session-17.scope. Feb 9 09:47:20.266950 sshd[3120]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:20.269375 systemd[1]: sshd@16-10.0.0.20:22-10.0.0.1:40994.service: Deactivated successfully. Feb 9 09:47:20.270238 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 09:47:20.270917 systemd-logind[1134]: Session 17 logged out. Waiting for processes to exit. Feb 9 09:47:20.272032 systemd-logind[1134]: Removed session 17. Feb 9 09:47:25.271682 systemd[1]: Started sshd@17-10.0.0.20:22-10.0.0.1:41474.service. Feb 9 09:47:25.310241 sshd[3169]: Accepted publickey for core from 10.0.0.1 port 41474 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:47:25.311817 sshd[3169]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:25.315925 systemd-logind[1134]: New session 18 of user core. Feb 9 09:47:25.316337 systemd[1]: Started session-18.scope. Feb 9 09:47:25.440195 sshd[3169]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:25.444328 systemd[1]: sshd@17-10.0.0.20:22-10.0.0.1:41474.service: Deactivated successfully. Feb 9 09:47:25.445247 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 09:47:25.446074 systemd-logind[1134]: Session 18 logged out. Waiting for processes to exit. Feb 9 09:47:25.447140 systemd-logind[1134]: Removed session 18. Feb 9 09:47:30.444409 systemd[1]: Started sshd@18-10.0.0.20:22-10.0.0.1:41482.service. 
Feb 9 09:47:30.504253 sshd[3207]: Accepted publickey for core from 10.0.0.1 port 41482 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:47:30.505423 sshd[3207]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:30.508859 systemd-logind[1134]: New session 19 of user core. Feb 9 09:47:30.509777 systemd[1]: Started session-19.scope. Feb 9 09:47:30.634635 sshd[3207]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:30.637388 systemd[1]: sshd@18-10.0.0.20:22-10.0.0.1:41482.service: Deactivated successfully. Feb 9 09:47:30.638251 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 09:47:30.638796 systemd-logind[1134]: Session 19 logged out. Waiting for processes to exit. Feb 9 09:47:30.639406 systemd-logind[1134]: Removed session 19.