Feb 9 18:40:11.721660 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 9 18:40:11.721752 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 17:24:35 -00 2024 Feb 9 18:40:11.721761 kernel: efi: EFI v2.70 by EDK II Feb 9 18:40:11.721767 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Feb 9 18:40:11.721772 kernel: random: crng init done Feb 9 18:40:11.721778 kernel: ACPI: Early table checksum verification disabled Feb 9 18:40:11.721784 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Feb 9 18:40:11.721792 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Feb 9 18:40:11.721798 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:40:11.721803 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:40:11.721809 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:40:11.721815 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:40:11.721820 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:40:11.721826 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:40:11.721835 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:40:11.721841 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:40:11.721847 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:40:11.721853 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Feb 9 18:40:11.721859 kernel: NUMA: Failed to initialise from firmware Feb 9 18:40:11.721865 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Feb 9 18:40:11.721871 kernel: NUMA: NODE_DATA [mem 0xdcb0a900-0xdcb0ffff] Feb 9 18:40:11.721877 kernel: Zone ranges: Feb 9 18:40:11.721883 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Feb 9 18:40:11.721890 kernel: DMA32 empty Feb 9 18:40:11.721896 kernel: Normal empty Feb 9 18:40:11.721901 kernel: Movable zone start for each node Feb 9 18:40:11.721907 kernel: Early memory node ranges Feb 9 18:40:11.721913 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Feb 9 18:40:11.721924 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Feb 9 18:40:11.721931 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Feb 9 18:40:11.721937 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Feb 9 18:40:11.721943 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Feb 9 18:40:11.721949 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Feb 9 18:40:11.721955 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Feb 9 18:40:11.721961 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Feb 9 18:40:11.721968 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Feb 9 18:40:11.721974 kernel: psci: probing for conduit method from ACPI. Feb 9 18:40:11.721980 kernel: psci: PSCIv1.1 detected in firmware. 
Feb 9 18:40:11.721986 kernel: psci: Using standard PSCI v0.2 function IDs Feb 9 18:40:11.721992 kernel: psci: Trusted OS migration not required Feb 9 18:40:11.722001 kernel: psci: SMC Calling Convention v1.1 Feb 9 18:40:11.722008 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 9 18:40:11.722016 kernel: ACPI: SRAT not present Feb 9 18:40:11.722022 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Feb 9 18:40:11.722029 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Feb 9 18:40:11.722036 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Feb 9 18:40:11.722042 kernel: Detected PIPT I-cache on CPU0 Feb 9 18:40:11.722048 kernel: CPU features: detected: GIC system register CPU interface Feb 9 18:40:11.722054 kernel: CPU features: detected: Hardware dirty bit management Feb 9 18:40:11.722061 kernel: CPU features: detected: Spectre-v4 Feb 9 18:40:11.722067 kernel: CPU features: detected: Spectre-BHB Feb 9 18:40:11.722074 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 9 18:40:11.722081 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 9 18:40:11.722088 kernel: CPU features: detected: ARM erratum 1418040 Feb 9 18:40:11.722094 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Feb 9 18:40:11.722100 kernel: Policy zone: DMA Feb 9 18:40:11.722108 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4 Feb 9 18:40:11.722114 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 18:40:11.722121 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 9 18:40:11.722127 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 18:40:11.722134 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 18:40:11.722140 kernel: Memory: 2459148K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113140K reserved, 0K cma-reserved) Feb 9 18:40:11.722148 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 9 18:40:11.722155 kernel: trace event string verifier disabled Feb 9 18:40:11.722161 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 9 18:40:11.722168 kernel: rcu: RCU event tracing is enabled. Feb 9 18:40:11.722174 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 9 18:40:11.722181 kernel: Trampoline variant of Tasks RCU enabled. Feb 9 18:40:11.722187 kernel: Tracing variant of Tasks RCU enabled. Feb 9 18:40:11.722194 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 9 18:40:11.722200 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 9 18:40:11.722207 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 9 18:40:11.722213 kernel: GICv3: 256 SPIs implemented Feb 9 18:40:11.722221 kernel: GICv3: 0 Extended SPIs implemented Feb 9 18:40:11.722239 kernel: GICv3: Distributor has no Range Selector support Feb 9 18:40:11.722245 kernel: Root IRQ handler: gic_handle_irq Feb 9 18:40:11.722251 kernel: GICv3: 16 PPIs implemented Feb 9 18:40:11.722258 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 9 18:40:11.722264 kernel: ACPI: SRAT not present Feb 9 18:40:11.722270 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 9 18:40:11.722277 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Feb 9 18:40:11.722283 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Feb 9 18:40:11.722290 kernel: GICv3: using LPI property table @0x00000000400d0000 Feb 9 18:40:11.722296 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Feb 9 18:40:11.722302 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 18:40:11.722311 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 9 18:40:11.722317 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 9 18:40:11.722324 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 9 18:40:11.722330 kernel: arm-pv: using stolen time PV Feb 9 18:40:11.722337 kernel: Console: colour dummy device 80x25 Feb 9 18:40:11.722343 kernel: ACPI: Core revision 20210730 Feb 9 18:40:11.722350 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 9 18:40:11.722357 kernel: pid_max: default: 32768 minimum: 301 Feb 9 18:40:11.722363 kernel: LSM: Security Framework initializing Feb 9 18:40:11.722370 kernel: SELinux: Initializing. Feb 9 18:40:11.722378 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 18:40:11.722384 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 18:40:11.722391 kernel: rcu: Hierarchical SRCU implementation. Feb 9 18:40:11.722397 kernel: Platform MSI: ITS@0x8080000 domain created Feb 9 18:40:11.722404 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 9 18:40:11.722410 kernel: Remapping and enabling EFI services. Feb 9 18:40:11.722417 kernel: smp: Bringing up secondary CPUs ... 
Feb 9 18:40:11.722423 kernel: Detected PIPT I-cache on CPU1 Feb 9 18:40:11.722430 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 9 18:40:11.722439 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Feb 9 18:40:11.722445 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 18:40:11.722452 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 9 18:40:11.722458 kernel: Detected PIPT I-cache on CPU2 Feb 9 18:40:11.722465 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Feb 9 18:40:11.722472 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Feb 9 18:40:11.722478 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 18:40:11.722485 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Feb 9 18:40:11.722491 kernel: Detected PIPT I-cache on CPU3 Feb 9 18:40:11.722498 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Feb 9 18:40:11.722506 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Feb 9 18:40:11.722512 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 18:40:11.722519 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Feb 9 18:40:11.722525 kernel: smp: Brought up 1 node, 4 CPUs Feb 9 18:40:11.722536 kernel: SMP: Total of 4 processors activated. Feb 9 18:40:11.722544 kernel: CPU features: detected: 32-bit EL0 Support Feb 9 18:40:11.722551 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 9 18:40:11.722558 kernel: CPU features: detected: Common not Private translations Feb 9 18:40:11.722565 kernel: CPU features: detected: CRC32 instructions Feb 9 18:40:11.722572 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 9 18:40:11.722579 kernel: CPU features: detected: LSE atomic instructions Feb 9 18:40:11.722586 kernel: CPU features: detected: Privileged Access Never Feb 9 18:40:11.722595 kernel: CPU features: detected: RAS Extension Support Feb 9 18:40:11.722602 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 9 18:40:11.722609 kernel: CPU: All CPU(s) started at EL1 Feb 9 18:40:11.722616 kernel: alternatives: patching kernel code Feb 9 18:40:11.722624 kernel: devtmpfs: initialized Feb 9 18:40:11.722631 kernel: KASLR enabled Feb 9 18:40:11.722638 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 18:40:11.722645 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 9 18:40:11.722652 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 18:40:11.722659 kernel: SMBIOS 3.0.0 present. 
Feb 9 18:40:11.722666 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Feb 9 18:40:11.722673 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 18:40:11.722680 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 9 18:40:11.722687 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 9 18:40:11.722695 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 9 18:40:11.722702 kernel: audit: initializing netlink subsys (disabled) Feb 9 18:40:11.722709 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1 Feb 9 18:40:11.722716 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 18:40:11.722723 kernel: cpuidle: using governor menu Feb 9 18:40:11.722730 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 9 18:40:11.722737 kernel: ASID allocator initialised with 32768 entries Feb 9 18:40:11.722744 kernel: ACPI: bus type PCI registered Feb 9 18:40:11.722751 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 18:40:11.722759 kernel: Serial: AMBA PL011 UART driver Feb 9 18:40:11.722766 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 18:40:11.722773 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Feb 9 18:40:11.722780 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 18:40:11.722787 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Feb 9 18:40:11.722793 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 18:40:11.722800 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 9 18:40:11.722808 kernel: ACPI: Added _OSI(Module Device) Feb 9 18:40:11.722815 kernel: ACPI: Added _OSI(Processor Device) Feb 9 18:40:11.722823 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 18:40:11.722830 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 18:40:11.722837 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 18:40:11.722844 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 18:40:11.722851 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 18:40:11.722858 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 9 18:40:11.722865 kernel: ACPI: Interpreter enabled Feb 9 18:40:11.722871 kernel: ACPI: Using GIC for interrupt routing Feb 9 18:40:11.722878 kernel: ACPI: MCFG table detected, 1 entries Feb 9 18:40:11.722886 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 9 18:40:11.722893 kernel: printk: console [ttyAMA0] enabled Feb 9 18:40:11.722900 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 9 18:40:11.723089 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 9 18:40:11.723161 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 9 18:40:11.723235 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 9 18:40:11.723300 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 9 18:40:11.723363 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 9 18:40:11.723373 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 9 18:40:11.723380 kernel: PCI host bridge to bus 0000:00 Feb 9 18:40:11.723450 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 9 18:40:11.723506 kernel: pci_bus 0000:00: root bus resource [io 
0x0000-0xffff window] Feb 9 18:40:11.723561 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 9 18:40:11.723615 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 9 18:40:11.723693 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 9 18:40:11.723773 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Feb 9 18:40:11.723836 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Feb 9 18:40:11.723898 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Feb 9 18:40:11.723970 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 9 18:40:11.724033 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 9 18:40:11.724095 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Feb 9 18:40:11.724160 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Feb 9 18:40:11.724215 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 9 18:40:11.724281 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 9 18:40:11.724338 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 9 18:40:11.724347 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 9 18:40:11.724354 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 9 18:40:11.724361 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 9 18:40:11.724371 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 9 18:40:11.724378 kernel: iommu: Default domain type: Translated Feb 9 18:40:11.724385 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 9 18:40:11.724392 kernel: vgaarb: loaded Feb 9 18:40:11.724398 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 18:40:11.724405 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 18:40:11.724413 kernel: PTP clock support registered Feb 9 18:40:11.724419 kernel: Registered efivars operations Feb 9 18:40:11.724427 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 9 18:40:11.724434 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 18:40:11.724442 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 18:40:11.724449 kernel: pnp: PnP ACPI init Feb 9 18:40:11.724521 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 9 18:40:11.724531 kernel: pnp: PnP ACPI: found 1 devices Feb 9 18:40:11.724538 kernel: NET: Registered PF_INET protocol family Feb 9 18:40:11.724545 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 18:40:11.724552 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 9 18:40:11.724559 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 18:40:11.724568 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 9 18:40:11.724575 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Feb 9 18:40:11.724582 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 9 18:40:11.724589 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 18:40:11.724596 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 18:40:11.724603 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 18:40:11.724610 kernel: PCI: CLS 0 bytes, default 64 Feb 9 18:40:11.724617 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 9 18:40:11.724625 kernel: kvm [1]: HYP mode not available Feb 9 18:40:11.724632 kernel: Initialise system trusted keyrings Feb 9 18:40:11.724639 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 9 18:40:11.724646 kernel: Key type asymmetric registered Feb 9 18:40:11.724653 kernel: Asymmetric key parser 'x509' registered Feb 9 18:40:11.724660 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 18:40:11.724667 kernel: io scheduler mq-deadline registered Feb 9 18:40:11.724674 kernel: io scheduler kyber registered Feb 9 18:40:11.724680 kernel: io scheduler bfq registered Feb 9 18:40:11.724687 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 9 18:40:11.724696 kernel: ACPI: button: Power Button [PWRB] Feb 9 18:40:11.724703 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 9 18:40:11.724765 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Feb 9 18:40:11.724774 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 18:40:11.724781 kernel: thunder_xcv, ver 1.0 Feb 9 18:40:11.724788 kernel: thunder_bgx, ver 1.0 Feb 9 18:40:11.724794 kernel: nicpf, ver 1.0 Feb 9 18:40:11.724801 kernel: nicvf, ver 1.0 Feb 9 18:40:11.724900 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 9 18:40:11.724970 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T18:40:11 UTC (1707504011) Feb 9 18:40:11.724981 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 18:40:11.724989 kernel: NET: Registered PF_INET6 protocol family Feb 9 18:40:11.724995 kernel: Segment Routing with IPv6 Feb 9 18:40:11.725002 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 18:40:11.725009 kernel: NET: Registered PF_PACKET protocol family Feb 9 18:40:11.725016 kernel: Key type dns_resolver registered Feb 9 18:40:11.725023 
kernel: registered taskstats version 1 Feb 9 18:40:11.725032 kernel: Loading compiled-in X.509 certificates Feb 9 18:40:11.725039 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 947a80114e81e2815f6db72a0d388260762488f9' Feb 9 18:40:11.725046 kernel: Key type .fscrypt registered Feb 9 18:40:11.725053 kernel: Key type fscrypt-provisioning registered Feb 9 18:40:11.725060 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 9 18:40:11.725067 kernel: ima: Allocated hash algorithm: sha1 Feb 9 18:40:11.725074 kernel: ima: No architecture policies found Feb 9 18:40:11.725081 kernel: Freeing unused kernel memory: 34688K Feb 9 18:40:11.725088 kernel: Run /init as init process Feb 9 18:40:11.725096 kernel: with arguments: Feb 9 18:40:11.725103 kernel: /init Feb 9 18:40:11.725109 kernel: with environment: Feb 9 18:40:11.725116 kernel: HOME=/ Feb 9 18:40:11.725123 kernel: TERM=linux Feb 9 18:40:11.725129 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 18:40:11.725138 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 18:40:11.725147 systemd[1]: Detected virtualization kvm. Feb 9 18:40:11.725157 systemd[1]: Detected architecture arm64. Feb 9 18:40:11.725164 systemd[1]: Running in initrd. Feb 9 18:40:11.725171 systemd[1]: No hostname configured, using default hostname. Feb 9 18:40:11.725178 systemd[1]: Hostname set to . Feb 9 18:40:11.725186 systemd[1]: Initializing machine ID from VM UUID. Feb 9 18:40:11.725193 systemd[1]: Queued start job for default target initrd.target. Feb 9 18:40:11.725201 systemd[1]: Started systemd-ask-password-console.path. Feb 9 18:40:11.725208 systemd[1]: Reached target cryptsetup.target. Feb 9 18:40:11.725217 systemd[1]: Reached target paths.target. Feb 9 18:40:11.725272 systemd[1]: Reached target slices.target. Feb 9 18:40:11.725281 systemd[1]: Reached target swap.target. Feb 9 18:40:11.725288 systemd[1]: Reached target timers.target. Feb 9 18:40:11.725296 systemd[1]: Listening on iscsid.socket. Feb 9 18:40:11.725304 systemd[1]: Listening on iscsiuio.socket. Feb 9 18:40:11.725311 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 18:40:11.725321 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 18:40:11.725329 systemd[1]: Listening on systemd-journald.socket. Feb 9 18:40:11.725336 systemd[1]: Listening on systemd-networkd.socket. Feb 9 18:40:11.725344 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 18:40:11.725351 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 18:40:11.725359 systemd[1]: Reached target sockets.target. Feb 9 18:40:11.725366 systemd[1]: Starting kmod-static-nodes.service... Feb 9 18:40:11.725374 systemd[1]: Finished network-cleanup.service. Feb 9 18:40:11.725381 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 18:40:11.725390 systemd[1]: Starting systemd-journald.service... Feb 9 18:40:11.725397 systemd[1]: Starting systemd-modules-load.service... Feb 9 18:40:11.725405 systemd[1]: Starting systemd-resolved.service... Feb 9 18:40:11.725412 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 18:40:11.725420 systemd[1]: Finished kmod-static-nodes.service. Feb 9 18:40:11.725427 systemd[1]: Finished systemd-fsck-usr.service. 
Feb 9 18:40:11.725434 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 18:40:11.725442 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 18:40:11.725450 kernel: audit: type=1130 audit(1707504011.722:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:11.725459 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 18:40:11.725470 systemd-journald[290]: Journal started Feb 9 18:40:11.725515 systemd-journald[290]: Runtime Journal (/run/log/journal/0130407c83ab480990ea691c641d67fa) is 6.0M, max 48.7M, 42.6M free. Feb 9 18:40:11.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:11.717325 systemd-modules-load[291]: Inserted module 'overlay' Feb 9 18:40:11.728999 kernel: audit: type=1130 audit(1707504011.726:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:11.729024 systemd[1]: Started systemd-journald.service. Feb 9 18:40:11.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:11.730238 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 18:40:11.730264 kernel: audit: type=1130 audit(1707504011.729:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:11.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:11.733234 kernel: Bridge firewalling registered Feb 9 18:40:11.733607 systemd-modules-load[291]: Inserted module 'br_netfilter' Feb 9 18:40:11.734106 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 18:40:11.749503 kernel: SCSI subsystem initialized Feb 9 18:40:11.750060 systemd-resolved[292]: Positive Trust Anchors: Feb 9 18:40:11.750075 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 18:40:11.750102 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 18:40:11.760008 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 9 18:40:11.760038 kernel: device-mapper: uevent: version 1.0.3 Feb 9 18:40:11.760048 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 18:40:11.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:11.754292 systemd-resolved[292]: Defaulting to hostname 'linux'. Feb 9 18:40:11.762373 kernel: audit: type=1130 audit(1707504011.759:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:11.755079 systemd[1]: Started systemd-resolved.service. Feb 9 18:40:11.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:11.765259 kernel: audit: type=1130 audit(1707504011.762:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:11.761970 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 18:40:11.763572 systemd[1]: Reached target nss-lookup.target. Feb 9 18:40:11.765183 systemd-modules-load[291]: Inserted module 'dm_multipath' Feb 9 18:40:11.766692 systemd[1]: Starting dracut-cmdline.service... Feb 9 18:40:11.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:11.767854 systemd[1]: Finished systemd-modules-load.service. Feb 9 18:40:11.773279 kernel: audit: type=1130 audit(1707504011.768:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:11.771329 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:40:11.778126 dracut-cmdline[311]: dracut-dracut-053 Feb 9 18:40:11.778537 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:40:11.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:11.781988 dracut-cmdline[311]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4 Feb 9 18:40:11.785389 kernel: audit: type=1130 audit(1707504011.779:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:11.836250 kernel: Loading iSCSI transport class v2.0-870. Feb 9 18:40:11.844253 kernel: iscsi: registered transport (tcp) Feb 9 18:40:11.857493 kernel: iscsi: registered transport (qla4xxx) Feb 9 18:40:11.857540 kernel: QLogic iSCSI HBA Driver Feb 9 18:40:11.892625 systemd[1]: Finished dracut-cmdline.service. 
Feb 9 18:40:11.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:11.894168 systemd[1]: Starting dracut-pre-udev.service... Feb 9 18:40:11.897052 kernel: audit: type=1130 audit(1707504011.893:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:11.941261 kernel: raid6: neonx8 gen() 13766 MB/s Feb 9 18:40:11.958246 kernel: raid6: neonx8 xor() 10786 MB/s Feb 9 18:40:11.975236 kernel: raid6: neonx4 gen() 13561 MB/s Feb 9 18:40:11.992235 kernel: raid6: neonx4 xor() 10981 MB/s Feb 9 18:40:12.009237 kernel: raid6: neonx2 gen() 13086 MB/s Feb 9 18:40:12.026243 kernel: raid6: neonx2 xor() 10250 MB/s Feb 9 18:40:12.043253 kernel: raid6: neonx1 gen() 10489 MB/s Feb 9 18:40:12.060243 kernel: raid6: neonx1 xor() 8758 MB/s Feb 9 18:40:12.077271 kernel: raid6: int64x8 gen() 6287 MB/s Feb 9 18:40:12.094245 kernel: raid6: int64x8 xor() 3544 MB/s Feb 9 18:40:12.111246 kernel: raid6: int64x4 gen() 7233 MB/s Feb 9 18:40:12.128253 kernel: raid6: int64x4 xor() 3849 MB/s Feb 9 18:40:12.145252 kernel: raid6: int64x2 gen() 6149 MB/s Feb 9 18:40:12.162251 kernel: raid6: int64x2 xor() 3320 MB/s Feb 9 18:40:12.179270 kernel: raid6: int64x1 gen() 5039 MB/s Feb 9 18:40:12.196543 kernel: raid6: int64x1 xor() 2644 MB/s Feb 9 18:40:12.196584 kernel: raid6: using algorithm neonx8 gen() 13766 MB/s Feb 9 18:40:12.196594 kernel: raid6: .... xor() 10786 MB/s, rmw enabled Feb 9 18:40:12.196606 kernel: raid6: using neon recovery algorithm Feb 9 18:40:12.207252 kernel: xor: measuring software checksum speed Feb 9 18:40:12.208241 kernel: 8regs : 17286 MB/sec Feb 9 18:40:12.210642 kernel: 32regs : 20760 MB/sec Feb 9 18:40:12.212249 kernel: arm64_neon : 27892 MB/sec Feb 9 18:40:12.212269 kernel: xor: using function: arm64_neon (27892 MB/sec) Feb 9 18:40:12.267257 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Feb 9 18:40:12.277349 systemd[1]: Finished dracut-pre-udev.service. Feb 9 18:40:12.280340 kernel: audit: type=1130 audit(1707504012.277:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:12.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:12.278000 audit: BPF prog-id=7 op=LOAD Feb 9 18:40:12.279000 audit: BPF prog-id=8 op=LOAD Feb 9 18:40:12.280651 systemd[1]: Starting systemd-udevd.service... Feb 9 18:40:12.296081 systemd-udevd[491]: Using default interface naming scheme 'v252'. Feb 9 18:40:12.299508 systemd[1]: Started systemd-udevd.service. Feb 9 18:40:12.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:12.302800 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 18:40:12.314093 dracut-pre-trigger[502]: rd.md=0: removing MD RAID activation Feb 9 18:40:12.340602 systemd[1]: Finished dracut-pre-trigger.service. 
Feb 9 18:40:12.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:12.342120 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:40:12.377316 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 18:40:12.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:12.406249 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 9 18:40:12.412283 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 18:40:12.412314 kernel: GPT:9289727 != 19775487 Feb 9 18:40:12.412324 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 18:40:12.413473 kernel: GPT:9289727 != 19775487 Feb 9 18:40:12.413488 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 18:40:12.413497 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:40:12.426001 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 18:40:12.429065 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (536) Feb 9 18:40:12.427833 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 18:40:12.434129 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 18:40:12.440908 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 18:40:12.444058 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 18:40:12.445595 systemd[1]: Starting disk-uuid.service... Feb 9 18:40:12.451128 disk-uuid[559]: Primary Header is updated. Feb 9 18:40:12.451128 disk-uuid[559]: Secondary Entries is updated. Feb 9 18:40:12.451128 disk-uuid[559]: Secondary Header is updated. Feb 9 18:40:12.454254 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:40:13.465251 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:40:13.465302 disk-uuid[560]: The operation has completed successfully. Feb 9 18:40:13.491611 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 18:40:13.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:13.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:13.491707 systemd[1]: Finished disk-uuid.service. Feb 9 18:40:13.493347 systemd[1]: Starting verity-setup.service... Feb 9 18:40:13.506954 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 9 18:40:13.527838 systemd[1]: Found device dev-mapper-usr.device. Feb 9 18:40:13.530048 systemd[1]: Mounting sysusr-usr.mount... Feb 9 18:40:13.531764 systemd[1]: Finished verity-setup.service. Feb 9 18:40:13.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:13.579261 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 18:40:13.580760 systemd[1]: Mounted sysusr-usr.mount. 
Feb 9 18:40:13.581554 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 18:40:13.582301 systemd[1]: Starting ignition-setup.service... Feb 9 18:40:13.584046 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 18:40:13.592382 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 18:40:13.592425 kernel: BTRFS info (device vda6): using free space tree Feb 9 18:40:13.592436 kernel: BTRFS info (device vda6): has skinny extents Feb 9 18:40:13.600481 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 18:40:13.606529 systemd[1]: Finished ignition-setup.service. Feb 9 18:40:13.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:13.607985 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 18:40:13.677685 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 18:40:13.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:13.679000 audit: BPF prog-id=9 op=LOAD Feb 9 18:40:13.679877 systemd[1]: Starting systemd-networkd.service... Feb 9 18:40:13.685819 ignition[651]: Ignition 2.14.0 Feb 9 18:40:13.685830 ignition[651]: Stage: fetch-offline Feb 9 18:40:13.685870 ignition[651]: no configs at "/usr/lib/ignition/base.d" Feb 9 18:40:13.685879 ignition[651]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:40:13.686032 ignition[651]: parsed url from cmdline: "" Feb 9 18:40:13.686036 ignition[651]: no config URL provided Feb 9 18:40:13.686041 ignition[651]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 18:40:13.686048 ignition[651]: no config at "/usr/lib/ignition/user.ign" Feb 9 18:40:13.686067 ignition[651]: op(1): [started] loading QEMU firmware config module Feb 9 18:40:13.686072 ignition[651]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 9 18:40:13.695185 ignition[651]: op(1): [finished] loading QEMU firmware config module Feb 9 18:40:13.707207 systemd-networkd[737]: lo: Link UP Feb 9 18:40:13.707217 systemd-networkd[737]: lo: Gained carrier Feb 9 18:40:13.707813 systemd-networkd[737]: Enumeration completed Feb 9 18:40:13.707900 systemd[1]: Started systemd-networkd.service. Feb 9 18:40:13.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:13.708189 systemd-networkd[737]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:40:13.709320 systemd[1]: Reached target network.target. Feb 9 18:40:13.709549 systemd-networkd[737]: eth0: Link UP Feb 9 18:40:13.709553 systemd-networkd[737]: eth0: Gained carrier Feb 9 18:40:13.711155 systemd[1]: Starting iscsiuio.service... Feb 9 18:40:13.720102 systemd[1]: Started iscsiuio.service. Feb 9 18:40:13.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:13.721644 systemd[1]: Starting iscsid.service... 
Feb 9 18:40:13.725305 iscsid[744]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:40:13.725305 iscsid[744]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 18:40:13.725305 iscsid[744]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 18:40:13.725305 iscsid[744]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 18:40:13.725305 iscsid[744]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:40:13.725305 iscsid[744]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 18:40:13.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:13.728216 systemd[1]: Started iscsid.service. Feb 9 18:40:13.732265 systemd[1]: Starting dracut-initqueue.service... Feb 9 18:40:13.736476 systemd-networkd[737]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 18:40:13.742686 systemd[1]: Finished dracut-initqueue.service. Feb 9 18:40:13.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:13.743546 systemd[1]: Reached target remote-fs-pre.target. Feb 9 18:40:13.744727 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:40:13.746050 systemd[1]: Reached target remote-fs.target. Feb 9 18:40:13.748111 systemd[1]: Starting dracut-pre-mount.service... Feb 9 18:40:13.756052 systemd[1]: Finished dracut-pre-mount.service. Feb 9 18:40:13.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:13.761057 ignition[651]: parsing config with SHA512: 6f88729f26b3115c494bf39a28a7e16b9248e4a5dfc7bb14adaa69b322153110bbe7e23b83f0385f13be20d86e1ee2a86546dac00b98cd9d1b0bf4286d7a85e5 Feb 9 18:40:13.793753 unknown[651]: fetched base config from "system" Feb 9 18:40:13.794009 unknown[651]: fetched user config from "qemu" Feb 9 18:40:13.794579 ignition[651]: fetch-offline: fetch-offline passed Feb 9 18:40:13.794644 ignition[651]: Ignition finished successfully Feb 9 18:40:13.795705 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 18:40:13.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:13.796896 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 18:40:13.797672 systemd[1]: Starting ignition-kargs.service... 
Feb 9 18:40:13.806380 ignition[758]: Ignition 2.14.0 Feb 9 18:40:13.806389 ignition[758]: Stage: kargs Feb 9 18:40:13.806478 ignition[758]: no configs at "/usr/lib/ignition/base.d" Feb 9 18:40:13.806488 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:40:13.807536 ignition[758]: kargs: kargs passed Feb 9 18:40:13.807580 ignition[758]: Ignition finished successfully Feb 9 18:40:13.810424 systemd[1]: Finished ignition-kargs.service. Feb 9 18:40:13.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:13.811843 systemd[1]: Starting ignition-disks.service... Feb 9 18:40:13.818368 ignition[764]: Ignition 2.14.0 Feb 9 18:40:13.818383 ignition[764]: Stage: disks Feb 9 18:40:13.818472 ignition[764]: no configs at "/usr/lib/ignition/base.d" Feb 9 18:40:13.818482 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:40:13.820767 systemd[1]: Finished ignition-disks.service. Feb 9 18:40:13.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:13.819528 ignition[764]: disks: disks passed Feb 9 18:40:13.821986 systemd[1]: Reached target initrd-root-device.target. Feb 9 18:40:13.819570 ignition[764]: Ignition finished successfully Feb 9 18:40:13.822876 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:40:13.823779 systemd[1]: Reached target local-fs.target. Feb 9 18:40:13.824769 systemd[1]: Reached target sysinit.target. Feb 9 18:40:13.825725 systemd[1]: Reached target basic.target. Feb 9 18:40:13.827603 systemd[1]: Starting systemd-fsck-root.service... Feb 9 18:40:13.838009 systemd-fsck[772]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 9 18:40:13.840929 systemd[1]: Finished systemd-fsck-root.service. Feb 9 18:40:13.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:13.842548 systemd[1]: Mounting sysroot.mount... Feb 9 18:40:13.847239 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 18:40:13.847452 systemd[1]: Mounted sysroot.mount. Feb 9 18:40:13.848017 systemd[1]: Reached target initrd-root-fs.target. Feb 9 18:40:13.850162 systemd[1]: Mounting sysroot-usr.mount... Feb 9 18:40:13.851013 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 18:40:13.851050 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 18:40:13.851073 systemd[1]: Reached target ignition-diskful.target. Feb 9 18:40:13.852755 systemd[1]: Mounted sysroot-usr.mount. Feb 9 18:40:13.854069 systemd[1]: Starting initrd-setup-root.service... Feb 9 18:40:13.858363 initrd-setup-root[782]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 18:40:13.862929 initrd-setup-root[790]: cut: /sysroot/etc/group: No such file or directory Feb 9 18:40:13.866820 initrd-setup-root[798]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 18:40:13.870363 initrd-setup-root[806]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 18:40:13.894993 systemd[1]: Finished initrd-setup-root.service. 
Feb 9 18:40:13.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:13.896508 systemd[1]: Starting ignition-mount.service... Feb 9 18:40:13.897762 systemd[1]: Starting sysroot-boot.service... Feb 9 18:40:13.902468 bash[823]: umount: /sysroot/usr/share/oem: not mounted. Feb 9 18:40:13.910510 ignition[825]: INFO : Ignition 2.14.0 Feb 9 18:40:13.910510 ignition[825]: INFO : Stage: mount Feb 9 18:40:13.911643 ignition[825]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:40:13.911643 ignition[825]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:40:13.913320 ignition[825]: INFO : mount: mount passed Feb 9 18:40:13.913320 ignition[825]: INFO : Ignition finished successfully Feb 9 18:40:13.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:13.913177 systemd[1]: Finished ignition-mount.service. Feb 9 18:40:13.915648 systemd[1]: Finished sysroot-boot.service. Feb 9 18:40:13.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:14.538352 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 18:40:14.544248 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (833) Feb 9 18:40:14.545393 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 18:40:14.545410 kernel: BTRFS info (device vda6): using free space tree Feb 9 18:40:14.545425 kernel: BTRFS info (device vda6): has skinny extents Feb 9 18:40:14.548560 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 18:40:14.550017 systemd[1]: Starting ignition-files.service... 
Feb 9 18:40:14.563606 ignition[853]: INFO : Ignition 2.14.0 Feb 9 18:40:14.563606 ignition[853]: INFO : Stage: files Feb 9 18:40:14.564735 ignition[853]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:40:14.564735 ignition[853]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:40:14.566299 ignition[853]: DEBUG : files: compiled without relabeling support, skipping Feb 9 18:40:14.569274 ignition[853]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 18:40:14.569274 ignition[853]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 18:40:14.572693 ignition[853]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 18:40:14.573679 ignition[853]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 18:40:14.573679 ignition[853]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 18:40:14.573359 unknown[853]: wrote ssh authorized keys file for user: core Feb 9 18:40:14.576444 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 18:40:14.576444 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Feb 9 18:40:14.906654 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 18:40:15.095662 ignition[853]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Feb 9 18:40:15.097799 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 18:40:15.097799 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 18:40:15.097799 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Feb 9 18:40:15.323096 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 18:40:15.444254 ignition[853]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Feb 9 18:40:15.446269 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 18:40:15.446269 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 18:40:15.446269 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 9 18:40:15.587611 systemd-networkd[737]: eth0: Gained IPv6LL Feb 9 18:40:15.687588 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 18:40:15.725944 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 18:40:15.727532 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:40:15.727532 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 9 18:40:15.774610 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 18:40:16.028830 ignition[853]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Feb 9 18:40:16.028830 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:40:16.032497 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:40:16.032497 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Feb 9 18:40:16.051400 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 18:40:16.739628 ignition[853]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Feb 9 18:40:16.741952 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:40:16.741952 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 18:40:16.741952 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1 Feb 9 18:40:16.763400 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 18:40:17.027518 ignition[853]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a Feb 9 18:40:17.027518 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 18:40:17.027518 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:40:17.033237 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:40:17.033237 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/home/core/install.sh" Feb 9 18:40:17.033237 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 18:40:17.033237 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 18:40:17.033237 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 18:40:17.033237 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file 
"/sysroot/home/core/nfs-pod.yaml" Feb 9 18:40:17.033237 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 18:40:17.033237 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 18:40:17.033237 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 18:40:17.033237 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:40:17.033237 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:40:17.033237 ignition[853]: INFO : files: op(f): [started] processing unit "prepare-cni-plugins.service" Feb 9 18:40:17.033237 ignition[853]: INFO : files: op(f): op(10): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:40:17.033237 ignition[853]: INFO : files: op(f): op(10): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:40:17.033237 ignition[853]: INFO : files: op(f): [finished] processing unit "prepare-cni-plugins.service" Feb 9 18:40:17.033237 ignition[853]: INFO : files: op(11): [started] processing unit "prepare-critools.service" Feb 9 18:40:17.033237 ignition[853]: INFO : files: op(11): op(12): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:40:17.056601 ignition[853]: INFO : files: op(11): op(12): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:40:17.056601 ignition[853]: INFO : files: op(11): [finished] processing unit "prepare-critools.service" Feb 9 18:40:17.056601 ignition[853]: INFO : files: op(13): [started] processing unit "prepare-helm.service" Feb 9 18:40:17.056601 ignition[853]: INFO : files: op(13): op(14): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 18:40:17.056601 ignition[853]: INFO : files: op(13): op(14): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 18:40:17.056601 ignition[853]: INFO : files: op(13): [finished] processing unit "prepare-helm.service" Feb 9 18:40:17.056601 ignition[853]: INFO : files: op(15): [started] processing unit "coreos-metadata.service" Feb 9 18:40:17.056601 ignition[853]: INFO : files: op(15): op(16): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 18:40:17.056601 ignition[853]: INFO : files: op(15): op(16): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 18:40:17.056601 ignition[853]: INFO : files: op(15): [finished] processing unit "coreos-metadata.service" Feb 9 18:40:17.056601 ignition[853]: INFO : files: op(17): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:40:17.056601 ignition[853]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:40:17.056601 ignition[853]: INFO : files: op(18): [started] setting preset to enabled for "prepare-critools.service" Feb 9 18:40:17.056601 ignition[853]: INFO : files: op(18): [finished] setting 
preset to enabled for "prepare-critools.service" Feb 9 18:40:17.056601 ignition[853]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Feb 9 18:40:17.056601 ignition[853]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 18:40:17.056601 ignition[853]: INFO : files: op(1a): [started] setting preset to disabled for "coreos-metadata.service" Feb 9 18:40:17.056601 ignition[853]: INFO : files: op(1a): op(1b): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 18:40:17.086776 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 9 18:40:17.086803 kernel: audit: type=1130 audit(1707504017.072:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.086815 kernel: audit: type=1130 audit(1707504017.081:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.086825 kernel: audit: type=1131 audit(1707504017.081:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.070916 systemd[1]: Finished ignition-files.service. Feb 9 18:40:17.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.090253 ignition[853]: INFO : files: op(1a): op(1b): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 18:40:17.090253 ignition[853]: INFO : files: op(1a): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 18:40:17.090253 ignition[853]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:40:17.090253 ignition[853]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:40:17.090253 ignition[853]: INFO : files: files passed Feb 9 18:40:17.090253 ignition[853]: INFO : Ignition finished successfully Feb 9 18:40:17.097055 kernel: audit: type=1130 audit(1707504017.087:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.073729 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 18:40:17.074552 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
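The file stage that finishes above fetched each artifact with a GET and compared it to a pinned sha512 digest before writing it under /sysroot (the "file matches expected sum of" entries). A minimal illustrative sketch of that kind of fetch-then-verify step, not Ignition's own code: the URL and digest are copied from the op(3) entries above, while the local filename and the use of urllib are assumptions for the example. The crictl, kubeadm, kubelet and kubectl downloads in ops (4) and (6)-(8) above follow the same pattern with their own digests.

import hashlib
import urllib.request

# URL and expected digest as recorded for op(3) in the journal above.
URL = "https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz"
EXPECTED_SHA512 = "6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742"

def sha512_of(path, chunk=1 << 20):
    # Stream the file so large tarballs do not have to fit in memory.
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

local = "cni-plugins-linux-arm64-v1.1.1.tgz"   # assumed local name for the sketch
urllib.request.urlretrieve(URL, local)          # "attempt #1" in the log above
assert sha512_of(local) == EXPECTED_SHA512, "checksum mismatch"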
Feb 9 18:40:17.099306 initrd-setup-root-after-ignition[879]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 9 18:40:17.075178 systemd[1]: Starting ignition-quench.service... Feb 9 18:40:17.101015 initrd-setup-root-after-ignition[881]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 18:40:17.081073 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 18:40:17.081163 systemd[1]: Finished ignition-quench.service. Feb 9 18:40:17.083681 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 18:40:17.087652 systemd[1]: Reached target ignition-complete.target. Feb 9 18:40:17.109263 kernel: audit: type=1130 audit(1707504017.104:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.109282 kernel: audit: type=1131 audit(1707504017.104:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.091571 systemd[1]: Starting initrd-parse-etc.service... Feb 9 18:40:17.103799 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 18:40:17.103888 systemd[1]: Finished initrd-parse-etc.service. Feb 9 18:40:17.105425 systemd[1]: Reached target initrd-fs.target. Feb 9 18:40:17.109880 systemd[1]: Reached target initrd.target. Feb 9 18:40:17.111158 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 18:40:17.111860 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 18:40:17.122024 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 18:40:17.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.123481 systemd[1]: Starting initrd-cleanup.service... Feb 9 18:40:17.126141 kernel: audit: type=1130 audit(1707504017.122:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.131559 systemd[1]: Stopped target nss-lookup.target. Feb 9 18:40:17.132375 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 18:40:17.133567 systemd[1]: Stopped target timers.target. Feb 9 18:40:17.134702 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 18:40:17.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.138234 kernel: audit: type=1131 audit(1707504017.135:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.134827 systemd[1]: Stopped dracut-pre-pivot.service. 
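Ignition ops (f) through (1a) above wrote the prepare-* units into /sysroot/etc/systemd/system and set their presets (enabled for the three prepare-* units, disabled for coreos-metadata.service). A small sketch that re-checks those states with systemctl on the booted host; the unit names and expected presets come from the log, running it on the target host is an assumption.

import subprocess

# Unit names and the preset state recorded by Ignition ops (f)-(1a) above.
UNITS = {
    "prepare-cni-plugins.service": "enabled",
    "prepare-critools.service": "enabled",
    "prepare-helm.service": "enabled",
    "coreos-metadata.service": "disabled",
}

for unit, expected in UNITS.items():
    out = subprocess.run(["systemctl", "is-enabled", unit],
                         capture_output=True, text=True)
    state = out.stdout.strip() or out.stderr.strip()
    print(f"{unit}: {state} (preset per log: {expected})")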
Feb 9 18:40:17.135992 systemd[1]: Stopped target initrd.target. Feb 9 18:40:17.138944 systemd[1]: Stopped target basic.target. Feb 9 18:40:17.140609 systemd[1]: Stopped target ignition-complete.target. Feb 9 18:40:17.141884 systemd[1]: Stopped target ignition-diskful.target. Feb 9 18:40:17.143007 systemd[1]: Stopped target initrd-root-device.target. Feb 9 18:40:17.144260 systemd[1]: Stopped target remote-fs.target. Feb 9 18:40:17.145396 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 18:40:17.146561 systemd[1]: Stopped target sysinit.target. Feb 9 18:40:17.147623 systemd[1]: Stopped target local-fs.target. Feb 9 18:40:17.148666 systemd[1]: Stopped target local-fs-pre.target. Feb 9 18:40:17.149781 systemd[1]: Stopped target swap.target. Feb 9 18:40:17.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.150920 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 18:40:17.155634 kernel: audit: type=1131 audit(1707504017.152:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.151033 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 18:40:17.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.152483 systemd[1]: Stopped target cryptsetup.target. Feb 9 18:40:17.159495 kernel: audit: type=1131 audit(1707504017.155:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.155109 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 18:40:17.155208 systemd[1]: Stopped dracut-initqueue.service. Feb 9 18:40:17.156296 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 18:40:17.156390 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 18:40:17.159201 systemd[1]: Stopped target paths.target. Feb 9 18:40:17.160169 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 18:40:17.165271 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 18:40:17.166717 systemd[1]: Stopped target slices.target. Feb 9 18:40:17.167461 systemd[1]: Stopped target sockets.target. Feb 9 18:40:17.168504 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 18:40:17.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.168615 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 18:40:17.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.169751 systemd[1]: ignition-files.service: Deactivated successfully. 
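The audit records interleaved above (SERVICE_START/SERVICE_STOP, mirrored by the kernel's type=1130/1131 lines) carry the unit name and result inside the msg field. A small parsing sketch, not an auditd API, using one record from the log above as sample input; in practice the input would come from a saved log file or a journalctl stream.

import re

# Pulls the event type, unit name and result out of an audit service record.
AUDIT_RE = re.compile(r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?unit=(\S+).*?res=(\w+)")

# Sample record copied from the log above.
sample = ("Feb 9 18:40:17.072000 audit[1]: SERVICE_START pid=1 uid=0 "
          "auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files "
          "comm=\"systemd\" exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? "
          "terminal=? res=success'")

m = AUDIT_RE.search(sample)
if m:
    event, unit, result = m.groups()
    print(event, unit, result)   # SERVICE_START ignition-files success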
Feb 9 18:40:17.169840 systemd[1]: Stopped ignition-files.service. Feb 9 18:40:17.173083 iscsid[744]: iscsid shutting down. Feb 9 18:40:17.171726 systemd[1]: Stopping ignition-mount.service... Feb 9 18:40:17.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.172789 systemd[1]: Stopping iscsid.service... Feb 9 18:40:17.173467 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 18:40:17.173566 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 18:40:17.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.179402 ignition[894]: INFO : Ignition 2.14.0 Feb 9 18:40:17.179402 ignition[894]: INFO : Stage: umount Feb 9 18:40:17.179402 ignition[894]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:40:17.179402 ignition[894]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:40:17.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.175316 systemd[1]: Stopping sysroot-boot.service... Feb 9 18:40:17.184250 ignition[894]: INFO : umount: umount passed Feb 9 18:40:17.184250 ignition[894]: INFO : Ignition finished successfully Feb 9 18:40:17.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.175859 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 18:40:17.175991 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 18:40:17.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.177083 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 18:40:17.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.177172 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 18:40:17.180348 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 18:40:17.180443 systemd[1]: Stopped iscsid.service. Feb 9 18:40:17.181577 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 18:40:17.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 18:40:17.181642 systemd[1]: Closed iscsid.socket. Feb 9 18:40:17.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.183016 systemd[1]: Stopping iscsiuio.service... Feb 9 18:40:17.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.183912 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 18:40:17.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.183998 systemd[1]: Finished initrd-cleanup.service. Feb 9 18:40:17.185675 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 18:40:17.186052 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 18:40:17.186140 systemd[1]: Stopped iscsiuio.service. Feb 9 18:40:17.186959 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 18:40:17.187038 systemd[1]: Stopped ignition-mount.service. Feb 9 18:40:17.188086 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 18:40:17.188164 systemd[1]: Stopped sysroot-boot.service. Feb 9 18:40:17.189675 systemd[1]: Stopped target network.target. Feb 9 18:40:17.190687 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 18:40:17.190721 systemd[1]: Closed iscsiuio.socket. Feb 9 18:40:17.191616 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 18:40:17.191655 systemd[1]: Stopped ignition-disks.service. Feb 9 18:40:17.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.192691 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 18:40:17.192728 systemd[1]: Stopped ignition-kargs.service. Feb 9 18:40:17.193774 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 18:40:17.193809 systemd[1]: Stopped ignition-setup.service. Feb 9 18:40:17.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.194766 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 18:40:17.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.194802 systemd[1]: Stopped initrd-setup-root.service. Feb 9 18:40:17.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.196004 systemd[1]: Stopping systemd-networkd.service... Feb 9 18:40:17.196863 systemd[1]: Stopping systemd-resolved.service... Feb 9 18:40:17.202281 systemd-networkd[737]: eth0: DHCPv6 lease lost Feb 9 18:40:17.203475 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Feb 9 18:40:17.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.203559 systemd[1]: Stopped systemd-networkd.service. Feb 9 18:40:17.204677 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 18:40:17.204706 systemd[1]: Closed systemd-networkd.socket. Feb 9 18:40:17.206145 systemd[1]: Stopping network-cleanup.service... Feb 9 18:40:17.207240 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 18:40:17.220000 audit: BPF prog-id=9 op=UNLOAD Feb 9 18:40:17.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.207292 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 18:40:17.209197 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 18:40:17.209253 systemd[1]: Stopped systemd-sysctl.service. Feb 9 18:40:17.210810 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 18:40:17.210847 systemd[1]: Stopped systemd-modules-load.service. Feb 9 18:40:17.224000 audit: BPF prog-id=6 op=UNLOAD Feb 9 18:40:17.212563 systemd[1]: Stopping systemd-udevd.service... Feb 9 18:40:17.215574 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 18:40:17.215984 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 18:40:17.216070 systemd[1]: Stopped systemd-resolved.service. Feb 9 18:40:17.219708 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 18:40:17.219795 systemd[1]: Stopped network-cleanup.service. Feb 9 18:40:17.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.229542 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 18:40:17.229652 systemd[1]: Stopped systemd-udevd.service. Feb 9 18:40:17.230354 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 18:40:17.230390 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 18:40:17.231533 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 18:40:17.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.231560 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 18:40:17.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.232472 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 18:40:17.232508 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 18:40:17.233614 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 18:40:17.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:40:17.233647 systemd[1]: Stopped dracut-cmdline.service. Feb 9 18:40:17.234716 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 18:40:17.234750 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 18:40:17.236664 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 18:40:17.237625 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 18:40:17.237670 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 18:40:17.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.241912 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 18:40:17.241996 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 18:40:17.243379 systemd[1]: Reached target initrd-switch-root.target. Feb 9 18:40:17.245181 systemd[1]: Starting initrd-switch-root.service... Feb 9 18:40:17.251305 systemd[1]: Switching root. Feb 9 18:40:17.269492 systemd-journald[290]: Journal stopped Feb 9 18:40:19.357854 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Feb 9 18:40:19.357930 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 18:40:19.357943 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 18:40:19.357953 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 18:40:19.357963 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 18:40:19.357973 kernel: SELinux: policy capability open_perms=1 Feb 9 18:40:19.357986 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 18:40:19.357997 kernel: SELinux: policy capability always_check_network=0 Feb 9 18:40:19.358006 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 18:40:19.358016 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 18:40:19.358025 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 18:40:19.358035 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 18:40:19.358045 systemd[1]: Successfully loaded SELinux policy in 32.722ms. Feb 9 18:40:19.358066 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.138ms. Feb 9 18:40:19.358078 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 18:40:19.358090 systemd[1]: Detected virtualization kvm. Feb 9 18:40:19.358101 systemd[1]: Detected architecture arm64. Feb 9 18:40:19.358111 systemd[1]: Detected first boot. Feb 9 18:40:19.358123 systemd[1]: Initializing machine ID from VM UUID. Feb 9 18:40:19.358137 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 18:40:19.358148 systemd[1]: Populated /etc with preset unit settings. Feb 9 18:40:19.358159 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
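The systemd 252 banner above lists compile-time features as +NAME/-NAME tokens. A trivial sketch that splits that string into built-in and omitted features; the string is copied from the log (the trailing default-hierarchy=unified token is left out), and the parsing is only an example, not a systemd interface.

features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
            "+OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
            "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 "
            "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT")

tokens = features.split()
enabled = [t[1:] for t in tokens if t.startswith("+")]
disabled = [t[1:] for t in tokens if t.startswith("-")]
print(len(enabled), "built in;", len(disabled), "omitted")
print("SELINUX" in enabled, "APPARMOR" in disabled)   # True True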
Feb 9 18:40:19.358172 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:40:19.358184 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:40:19.358195 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 18:40:19.358206 systemd[1]: Stopped initrd-switch-root.service. Feb 9 18:40:19.358219 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 18:40:19.358266 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 18:40:19.358278 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 18:40:19.358289 systemd[1]: Created slice system-getty.slice. Feb 9 18:40:19.358300 systemd[1]: Created slice system-modprobe.slice. Feb 9 18:40:19.358311 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 18:40:19.358322 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 18:40:19.358332 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 18:40:19.358343 systemd[1]: Created slice user.slice. Feb 9 18:40:19.358354 systemd[1]: Started systemd-ask-password-console.path. Feb 9 18:40:19.358365 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 18:40:19.358375 systemd[1]: Set up automount boot.automount. Feb 9 18:40:19.358388 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 18:40:19.358399 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 18:40:19.358410 systemd[1]: Stopped target initrd-fs.target. Feb 9 18:40:19.358421 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 18:40:19.358431 systemd[1]: Reached target integritysetup.target. Feb 9 18:40:19.358441 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:40:19.358452 systemd[1]: Reached target remote-fs.target. Feb 9 18:40:19.358462 systemd[1]: Reached target slices.target. Feb 9 18:40:19.358474 systemd[1]: Reached target swap.target. Feb 9 18:40:19.358484 systemd[1]: Reached target torcx.target. Feb 9 18:40:19.358495 systemd[1]: Reached target veritysetup.target. Feb 9 18:40:19.358505 systemd[1]: Listening on systemd-coredump.socket. Feb 9 18:40:19.358516 systemd[1]: Listening on systemd-initctl.socket. Feb 9 18:40:19.358526 systemd[1]: Listening on systemd-networkd.socket. Feb 9 18:40:19.358536 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 18:40:19.358547 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 18:40:19.358559 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 18:40:19.358571 systemd[1]: Mounting dev-hugepages.mount... Feb 9 18:40:19.358581 systemd[1]: Mounting dev-mqueue.mount... Feb 9 18:40:19.358592 systemd[1]: Mounting media.mount... Feb 9 18:40:19.358602 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 18:40:19.358614 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 18:40:19.358625 systemd[1]: Mounting tmp.mount... Feb 9 18:40:19.358636 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 18:40:19.358646 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 18:40:19.358657 systemd[1]: Starting kmod-static-nodes.service... Feb 9 18:40:19.358668 systemd[1]: Starting modprobe@configfs.service... Feb 9 18:40:19.358680 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 18:40:19.358691 systemd[1]: Starting modprobe@drm.service... 
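In the unit names above (system-addon\x2dconfig.slice, system-serial\x2dgetty.slice, and so on), \x2d is systemd's escape for a literal "-" inside a name component, so it is not read as the separator used in slice hierarchies and path-derived unit names. A string-handling sketch only, not a systemd API, to make the logged names readable:

# Escaped unit names copied from the "Created slice" entries above.
names = [r"system-addon\x2dconfig.slice", r"system-addon\x2drun.slice",
         r"system-serial\x2dgetty.slice", r"system-system\x2dcloudinit.slice",
         r"system-systemd\x2dfsck.slice"]
for n in names:
    print(n, "->", n.replace(r"\x2d", "-"))   # e.g. system-addon-config.slice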
Feb 9 18:40:19.358702 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 18:40:19.358713 systemd[1]: Starting modprobe@fuse.service... Feb 9 18:40:19.358724 systemd[1]: Starting modprobe@loop.service... Feb 9 18:40:19.358735 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 18:40:19.358746 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 18:40:19.358756 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 18:40:19.358768 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 18:40:19.358778 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 18:40:19.358789 systemd[1]: Stopped systemd-journald.service. Feb 9 18:40:19.358799 kernel: fuse: init (API version 7.34) Feb 9 18:40:19.358809 systemd[1]: Starting systemd-journald.service... Feb 9 18:40:19.358820 kernel: loop: module loaded Feb 9 18:40:19.358832 systemd[1]: Starting systemd-modules-load.service... Feb 9 18:40:19.358844 systemd[1]: Starting systemd-network-generator.service... Feb 9 18:40:19.358856 systemd[1]: Starting systemd-remount-fs.service... Feb 9 18:40:19.358866 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:40:19.358877 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 18:40:19.358887 systemd[1]: Stopped verity-setup.service. Feb 9 18:40:19.358904 systemd[1]: Mounted dev-hugepages.mount. Feb 9 18:40:19.358915 systemd[1]: Mounted dev-mqueue.mount. Feb 9 18:40:19.358926 systemd[1]: Mounted media.mount. Feb 9 18:40:19.358938 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 18:40:19.358948 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 18:40:19.358960 systemd-journald[985]: Journal started Feb 9 18:40:19.359001 systemd-journald[985]: Runtime Journal (/run/log/journal/0130407c83ab480990ea691c641d67fa) is 6.0M, max 48.7M, 42.6M free. 
Feb 9 18:40:17.332000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 18:40:17.512000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:40:17.512000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:40:17.512000 audit: BPF prog-id=10 op=LOAD Feb 9 18:40:17.512000 audit: BPF prog-id=10 op=UNLOAD Feb 9 18:40:17.512000 audit: BPF prog-id=11 op=LOAD Feb 9 18:40:17.512000 audit: BPF prog-id=11 op=UNLOAD Feb 9 18:40:17.552000 audit[926]: AVC avc: denied { associate } for pid=926 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 18:40:17.552000 audit[926]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001cd8d4 a1=4000150de0 a2=40001570c0 a3=32 items=0 ppid=909 pid=926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:40:17.552000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 18:40:17.553000 audit[926]: AVC avc: denied { associate } for pid=926 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 18:40:17.553000 audit[926]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001cd9b9 a2=1ed a3=0 items=2 ppid=909 pid=926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:40:17.553000 audit: CWD cwd="/" Feb 9 18:40:17.553000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:40:17.553000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:40:17.553000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 18:40:19.240000 audit: BPF prog-id=12 op=LOAD Feb 9 18:40:19.240000 audit: BPF prog-id=3 op=UNLOAD Feb 9 18:40:19.240000 audit: BPF prog-id=13 op=LOAD Feb 9 18:40:19.240000 audit: BPF prog-id=14 op=LOAD Feb 9 18:40:19.240000 audit: BPF prog-id=4 op=UNLOAD Feb 9 18:40:19.240000 audit: BPF prog-id=5 op=UNLOAD Feb 9 18:40:19.241000 audit: BPF prog-id=15 op=LOAD Feb 9 18:40:19.241000 audit: BPF prog-id=12 op=UNLOAD Feb 9 18:40:19.241000 audit: BPF prog-id=16 
op=LOAD Feb 9 18:40:19.241000 audit: BPF prog-id=17 op=LOAD Feb 9 18:40:19.241000 audit: BPF prog-id=13 op=UNLOAD Feb 9 18:40:19.241000 audit: BPF prog-id=14 op=UNLOAD Feb 9 18:40:19.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.251000 audit: BPF prog-id=15 op=UNLOAD Feb 9 18:40:19.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.328000 audit: BPF prog-id=18 op=LOAD Feb 9 18:40:19.332000 audit: BPF prog-id=19 op=LOAD Feb 9 18:40:19.332000 audit: BPF prog-id=20 op=LOAD Feb 9 18:40:19.332000 audit: BPF prog-id=16 op=UNLOAD Feb 9 18:40:19.332000 audit: BPF prog-id=17 op=UNLOAD Feb 9 18:40:19.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.356000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 18:40:19.356000 audit[985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=fffff26b80f0 a2=4000 a3=1 items=0 ppid=1 pid=985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:40:19.356000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 18:40:19.239528 systemd[1]: Queued start job for default target multi-user.target. Feb 9 18:40:17.551351 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-09T18:40:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:40:19.239540 systemd[1]: Unnecessary job was removed for dev-vda6.device. 
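The PROCTITLE value in the torcx-generator audit record above is the process's argv, hex-encoded because the arguments are separated by NUL bytes. A decoding sketch with the hex payload copied verbatim from that record; the final argument comes out cut short because auditd truncates the field (presumably /run/systemd/generator.late).

# PROCTITLE payload copied verbatim from the torcx-generator audit record above.
hex_proctitle = "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61"
argv = [a.decode() for a in bytes.fromhex(hex_proctitle).split(b"\x00")]
print(argv)
# ['/usr/lib/systemd/system-generators/torcx-generator', '/run/systemd/generator',
#  '/run/systemd/generator.early', '/run/systemd/generator.la']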
Feb 9 18:40:17.551600 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-09T18:40:17Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 18:40:19.242874 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 18:40:17.551626 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-09T18:40:17Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 18:40:17.551657 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-09T18:40:17Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 18:40:17.551668 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-09T18:40:17Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 18:40:19.360578 systemd[1]: Started systemd-journald.service. Feb 9 18:40:17.551695 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-09T18:40:17Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 18:40:17.551706 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-09T18:40:17Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 18:40:19.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:17.551901 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-09T18:40:17Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 18:40:17.551947 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-09T18:40:17Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 18:40:17.551959 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-09T18:40:17Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 18:40:17.552371 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-09T18:40:17Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 18:40:17.552403 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-09T18:40:17Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 18:40:19.361035 systemd[1]: Mounted tmp.mount. 
Feb 9 18:40:17.552421 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-09T18:40:17Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 18:40:17.552435 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-09T18:40:17Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 18:40:17.552452 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-09T18:40:17Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 18:40:17.552465 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-09T18:40:17Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 18:40:18.970734 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-09T18:40:18Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:40:18.970998 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-09T18:40:18Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:40:18.971090 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-09T18:40:18Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:40:18.971261 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-09T18:40:18Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:40:18.971312 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-09T18:40:18Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 18:40:18.971367 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-02-09T18:40:18Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 18:40:19.362135 systemd[1]: Finished kmod-static-nodes.service. Feb 9 18:40:19.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.363077 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 18:40:19.363258 systemd[1]: Finished modprobe@configfs.service. Feb 9 18:40:19.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:40:19.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.364291 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 18:40:19.364465 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 18:40:19.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.365470 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 18:40:19.365629 systemd[1]: Finished modprobe@drm.service. Feb 9 18:40:19.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.366573 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 18:40:19.366734 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 18:40:19.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.367932 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 18:40:19.368086 systemd[1]: Finished modprobe@fuse.service. Feb 9 18:40:19.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.368987 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 18:40:19.369137 systemd[1]: Finished modprobe@loop.service. Feb 9 18:40:19.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.370171 systemd[1]: Finished systemd-modules-load.service. 
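The torcx-generator "system state sealed" entry above records the key/value pairs it wrote to /run/metadata/torcx. A minimal parse of that data; the sample values are copied from the log, and the env-file layout (one KEY="value" pair per line) is an assumption of this sketch rather than something the log itself shows.

# Sample values copied from the "system state sealed" entry above.
sample = '''TORCX_LOWER_PROFILES="vendor"
TORCX_UPPER_PROFILE=""
TORCX_PROFILE_PATH="/run/torcx/profile.json"
TORCX_BINDIR="/run/torcx/bin"
TORCX_UNPACKDIR="/run/torcx/unpack"'''

state = dict(line.split("=", 1) for line in sample.splitlines())
state = {k: v.strip('"') for k, v in state.items()}
print(state["TORCX_PROFILE_PATH"], state["TORCX_BINDIR"])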
Feb 9 18:40:19.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.371259 systemd[1]: Finished systemd-network-generator.service. Feb 9 18:40:19.372418 systemd[1]: Finished systemd-remount-fs.service. Feb 9 18:40:19.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.373603 systemd[1]: Reached target network-pre.target. Feb 9 18:40:19.375592 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 18:40:19.377481 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 18:40:19.378135 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 18:40:19.380431 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 18:40:19.382576 systemd[1]: Starting systemd-journal-flush.service... Feb 9 18:40:19.383293 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 18:40:19.384363 systemd[1]: Starting systemd-random-seed.service... Feb 9 18:40:19.385116 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 18:40:19.389520 systemd-journald[985]: Time spent on flushing to /var/log/journal/0130407c83ab480990ea691c641d67fa is 12.309ms for 1020 entries. Feb 9 18:40:19.389520 systemd-journald[985]: System Journal (/var/log/journal/0130407c83ab480990ea691c641d67fa) is 8.0M, max 195.6M, 187.6M free. Feb 9 18:40:19.414947 systemd-journald[985]: Received client request to flush runtime journal. Feb 9 18:40:19.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.386394 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:40:19.388520 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 18:40:19.389598 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 18:40:19.400354 systemd[1]: Finished systemd-random-seed.service. Feb 9 18:40:19.401274 systemd[1]: Reached target first-boot-complete.target. Feb 9 18:40:19.406870 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:40:19.414942 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 18:40:19.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.416045 systemd[1]: Finished systemd-journal-flush.service. 
Feb 9 18:40:19.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.417939 systemd[1]: Starting systemd-sysusers.service... Feb 9 18:40:19.418833 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 18:40:19.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.422023 systemd[1]: Starting systemd-udev-settle.service... Feb 9 18:40:19.429311 udevadm[1029]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 18:40:19.433083 systemd[1]: Finished systemd-sysusers.service. Feb 9 18:40:19.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.741271 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 18:40:19.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.742000 audit: BPF prog-id=21 op=LOAD Feb 9 18:40:19.742000 audit: BPF prog-id=22 op=LOAD Feb 9 18:40:19.742000 audit: BPF prog-id=7 op=UNLOAD Feb 9 18:40:19.742000 audit: BPF prog-id=8 op=UNLOAD Feb 9 18:40:19.743428 systemd[1]: Starting systemd-udevd.service... Feb 9 18:40:19.758809 systemd-udevd[1030]: Using default interface naming scheme 'v252'. Feb 9 18:40:19.770198 systemd[1]: Started systemd-udevd.service. Feb 9 18:40:19.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.771000 audit: BPF prog-id=23 op=LOAD Feb 9 18:40:19.773556 systemd[1]: Starting systemd-networkd.service... Feb 9 18:40:19.792000 audit: BPF prog-id=24 op=LOAD Feb 9 18:40:19.792000 audit: BPF prog-id=25 op=LOAD Feb 9 18:40:19.792000 audit: BPF prog-id=26 op=LOAD Feb 9 18:40:19.794079 systemd[1]: Starting systemd-userdbd.service... Feb 9 18:40:19.798157 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Feb 9 18:40:19.818359 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 18:40:19.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.828210 systemd[1]: Started systemd-userdbd.service. Feb 9 18:40:19.873286 systemd-networkd[1038]: lo: Link UP Feb 9 18:40:19.873296 systemd-networkd[1038]: lo: Gained carrier Feb 9 18:40:19.873648 systemd-networkd[1038]: Enumeration completed Feb 9 18:40:19.873742 systemd[1]: Started systemd-networkd.service. Feb 9 18:40:19.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:40:19.874613 systemd-networkd[1038]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:40:19.875695 systemd-networkd[1038]: eth0: Link UP Feb 9 18:40:19.875705 systemd-networkd[1038]: eth0: Gained carrier Feb 9 18:40:19.885585 systemd[1]: Finished systemd-udev-settle.service. Feb 9 18:40:19.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.887611 systemd[1]: Starting lvm2-activation-early.service... Feb 9 18:40:19.900858 lvm[1063]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:40:19.904366 systemd-networkd[1038]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 18:40:19.923021 systemd[1]: Finished lvm2-activation-early.service. Feb 9 18:40:19.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.923831 systemd[1]: Reached target cryptsetup.target. Feb 9 18:40:19.925532 systemd[1]: Starting lvm2-activation.service... Feb 9 18:40:19.929147 lvm[1064]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:40:19.961079 systemd[1]: Finished lvm2-activation.service. Feb 9 18:40:19.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.961826 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:40:19.962462 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 18:40:19.962488 systemd[1]: Reached target local-fs.target. Feb 9 18:40:19.963039 systemd[1]: Reached target machines.target. Feb 9 18:40:19.964789 systemd[1]: Starting ldconfig.service... Feb 9 18:40:19.965647 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 18:40:19.965722 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:40:19.966890 systemd[1]: Starting systemd-boot-update.service... Feb 9 18:40:19.968747 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 18:40:19.970808 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 18:40:19.971665 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:40:19.971742 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:40:19.972910 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 18:40:19.981001 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1066 (bootctl) Feb 9 18:40:19.982237 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 18:40:19.990300 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
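For reference, eth0 above is matched by the stock catch-all unit /usr/lib/systemd/network/zz-default.network before the DHCPv4 lease for 10.0.0.114/16 is acquired; a minimal sketch of such a DHCP-everything .network file is shown below (the exact contents Flatcar ships are an assumption, not read from this log).

    # illustrative catch-all .network file (assumed, not the shipped zz-default.network)
    [Match]
    Name=*

    [Network]
    DHCP=yes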
Feb 9 18:40:19.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:19.996694 systemd-tmpfiles[1069]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 18:40:19.998534 systemd-tmpfiles[1069]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 18:40:20.001589 systemd-tmpfiles[1069]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 18:40:20.083251 systemd-fsck[1076]: fsck.fat 4.2 (2021-01-31) Feb 9 18:40:20.083251 systemd-fsck[1076]: /dev/vda1: 236 files, 113719/258078 clusters Feb 9 18:40:20.086309 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 18:40:20.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:20.089067 systemd[1]: Mounting boot.mount... Feb 9 18:40:20.126206 systemd[1]: Mounted boot.mount. Feb 9 18:40:20.136606 systemd[1]: Finished systemd-boot-update.service. Feb 9 18:40:20.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:20.137834 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 18:40:20.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:20.170748 ldconfig[1065]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 18:40:20.174864 systemd[1]: Finished ldconfig.service. Feb 9 18:40:20.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:20.205124 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 18:40:20.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:20.207414 systemd[1]: Starting audit-rules.service... Feb 9 18:40:20.209113 systemd[1]: Starting clean-ca-certificates.service... Feb 9 18:40:20.211028 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 18:40:20.212000 audit: BPF prog-id=27 op=LOAD Feb 9 18:40:20.213658 systemd[1]: Starting systemd-resolved.service... Feb 9 18:40:20.216000 audit: BPF prog-id=28 op=LOAD Feb 9 18:40:20.217692 systemd[1]: Starting systemd-timesyncd.service... Feb 9 18:40:20.220892 systemd[1]: Starting systemd-update-utmp.service... Feb 9 18:40:20.222202 systemd[1]: Finished clean-ca-certificates.service. Feb 9 18:40:20.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:40:20.223461 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 18:40:20.227000 audit[1090]: SYSTEM_BOOT pid=1090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 18:40:20.231301 systemd[1]: Finished systemd-update-utmp.service. Feb 9 18:40:20.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:20.235742 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 18:40:20.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:20.238080 systemd[1]: Starting systemd-update-done.service... Feb 9 18:40:20.249089 systemd[1]: Finished systemd-update-done.service. Feb 9 18:40:20.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:40:20.257000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 18:40:20.257000 audit[1101]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffceb9e860 a2=420 a3=0 items=0 ppid=1079 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:40:20.257000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 18:40:20.258340 augenrules[1101]: No rules Feb 9 18:40:20.259279 systemd[1]: Finished audit-rules.service. Feb 9 18:40:20.265168 systemd-resolved[1083]: Positive Trust Anchors: Feb 9 18:40:20.265182 systemd-resolved[1083]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 18:40:20.265209 systemd-resolved[1083]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 18:40:20.277039 systemd-resolved[1083]: Defaulting to hostname 'linux'. Feb 9 18:40:20.278509 systemd[1]: Started systemd-resolved.service. Feb 9 18:40:20.279203 systemd[1]: Reached target network.target. Feb 9 18:40:20.279760 systemd[1]: Reached target nss-lookup.target. Feb 9 18:40:20.280800 systemd[1]: Started systemd-timesyncd.service. Feb 9 18:40:20.281478 systemd[1]: Reached target sysinit.target. Feb 9 18:40:20.282471 systemd[1]: Started motdgen.path. Feb 9 18:40:20.282509 systemd-timesyncd[1089]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
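The audit PROCTITLE hex recorded above decodes to the rule-loading command shown below, consistent with the empty rule set that augenrules[1101] reports immediately afterwards:

    /sbin/auditctl -R /etc/audit/audit.rules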
Feb 9 18:40:20.282556 systemd-timesyncd[1089]: Initial clock synchronization to Fri 2024-02-09 18:40:20.358827 UTC. Feb 9 18:40:20.283100 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 18:40:20.283983 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 18:40:20.284672 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 18:40:20.284710 systemd[1]: Reached target paths.target. Feb 9 18:40:20.285363 systemd[1]: Reached target time-set.target. Feb 9 18:40:20.286219 systemd[1]: Started logrotate.timer. Feb 9 18:40:20.287028 systemd[1]: Started mdadm.timer. Feb 9 18:40:20.287634 systemd[1]: Reached target timers.target. Feb 9 18:40:20.288646 systemd[1]: Listening on dbus.socket. Feb 9 18:40:20.290463 systemd[1]: Starting docker.socket... Feb 9 18:40:20.293486 systemd[1]: Listening on sshd.socket. Feb 9 18:40:20.294257 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:40:20.294745 systemd[1]: Listening on docker.socket. Feb 9 18:40:20.295494 systemd[1]: Reached target sockets.target. Feb 9 18:40:20.296189 systemd[1]: Reached target basic.target. Feb 9 18:40:20.296811 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:40:20.296838 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:40:20.297756 systemd[1]: Starting containerd.service... Feb 9 18:40:20.299408 systemd[1]: Starting dbus.service... Feb 9 18:40:20.301104 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 18:40:20.303098 systemd[1]: Starting extend-filesystems.service... Feb 9 18:40:20.303973 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 18:40:20.305194 systemd[1]: Starting motdgen.service... Feb 9 18:40:20.307065 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 18:40:20.308376 jq[1111]: false Feb 9 18:40:20.310881 systemd[1]: Starting prepare-critools.service... Feb 9 18:40:20.313785 systemd[1]: Starting prepare-helm.service... Feb 9 18:40:20.315469 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 18:40:20.317149 systemd[1]: Starting sshd-keygen.service... Feb 9 18:40:20.319784 systemd[1]: Starting systemd-logind.service... Feb 9 18:40:20.320620 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:40:20.320695 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 18:40:20.321117 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 18:40:20.321799 systemd[1]: Starting update-engine.service... Feb 9 18:40:20.324268 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 18:40:20.326428 jq[1132]: true Feb 9 18:40:20.327635 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 18:40:20.327813 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 18:40:20.328122 systemd[1]: motdgen.service: Deactivated successfully. 
Feb 9 18:40:20.328299 systemd[1]: Finished motdgen.service. Feb 9 18:40:20.331141 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 18:40:20.331338 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 18:40:20.341604 extend-filesystems[1112]: Found vda Feb 9 18:40:20.342659 jq[1137]: true Feb 9 18:40:20.346273 extend-filesystems[1112]: Found vda1 Feb 9 18:40:20.346273 extend-filesystems[1112]: Found vda2 Feb 9 18:40:20.346273 extend-filesystems[1112]: Found vda3 Feb 9 18:40:20.346273 extend-filesystems[1112]: Found usr Feb 9 18:40:20.346273 extend-filesystems[1112]: Found vda4 Feb 9 18:40:20.346273 extend-filesystems[1112]: Found vda6 Feb 9 18:40:20.346273 extend-filesystems[1112]: Found vda7 Feb 9 18:40:20.346273 extend-filesystems[1112]: Found vda9 Feb 9 18:40:20.353995 extend-filesystems[1112]: Checking size of /dev/vda9 Feb 9 18:40:20.351220 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 18:40:20.354991 tar[1134]: ./ Feb 9 18:40:20.354991 tar[1134]: ./macvlan Feb 9 18:40:20.355197 tar[1136]: linux-arm64/helm Feb 9 18:40:20.356888 tar[1135]: crictl Feb 9 18:40:20.362267 dbus-daemon[1110]: [system] SELinux support is enabled Feb 9 18:40:20.362582 systemd[1]: Started dbus.service. Feb 9 18:40:20.364771 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 18:40:20.364795 systemd[1]: Reached target system-config.target. Feb 9 18:40:20.365454 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 18:40:20.365475 systemd[1]: Reached target user-config.target. Feb 9 18:40:20.385131 extend-filesystems[1112]: Resized partition /dev/vda9 Feb 9 18:40:20.394349 extend-filesystems[1161]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 18:40:20.400801 systemd-logind[1127]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 18:40:20.403327 systemd-logind[1127]: New seat seat0. Feb 9 18:40:20.405873 systemd[1]: Started systemd-logind.service. Feb 9 18:40:20.406449 tar[1134]: ./static Feb 9 18:40:20.411238 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 9 18:40:20.438255 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 9 18:40:20.439246 update_engine[1130]: I0209 18:40:20.439020 1130 main.cc:92] Flatcar Update Engine starting Feb 9 18:40:20.441393 systemd[1]: Started update-engine.service. Feb 9 18:40:20.444036 systemd[1]: Started locksmithd.service. Feb 9 18:40:20.452677 update_engine[1130]: I0209 18:40:20.443352 1130 update_check_scheduler.cc:74] Next update check in 8m59s Feb 9 18:40:20.453275 extend-filesystems[1161]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 18:40:20.453275 extend-filesystems[1161]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 18:40:20.453275 extend-filesystems[1161]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 9 18:40:20.457172 bash[1158]: Updated "/home/core/.ssh/authorized_keys" Feb 9 18:40:20.454705 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 18:40:20.457325 extend-filesystems[1112]: Resized filesystem in /dev/vda9 Feb 9 18:40:20.454865 systemd[1]: Finished extend-filesystems.service. Feb 9 18:40:20.457964 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
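The on-line grow of /dev/vda9 from 553472 to 1864699 blocks reported here is a standard ext4 on-line resize; the exact command line run by extend-filesystems.service is not logged, but it amounts to something like:

    resize2fs /dev/vda9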
Feb 9 18:40:20.476442 env[1138]: time="2024-02-09T18:40:20.476391160Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 18:40:20.488340 tar[1134]: ./vlan Feb 9 18:40:20.521248 tar[1134]: ./portmap Feb 9 18:40:20.530524 env[1138]: time="2024-02-09T18:40:20.530472040Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 18:40:20.530738 env[1138]: time="2024-02-09T18:40:20.530648320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:40:20.533472 env[1138]: time="2024-02-09T18:40:20.533425120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:40:20.533472 env[1138]: time="2024-02-09T18:40:20.533467000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:40:20.533847 env[1138]: time="2024-02-09T18:40:20.533817720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:40:20.533847 env[1138]: time="2024-02-09T18:40:20.533843680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 18:40:20.533915 env[1138]: time="2024-02-09T18:40:20.533858280Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 18:40:20.533915 env[1138]: time="2024-02-09T18:40:20.533869480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 18:40:20.534163 env[1138]: time="2024-02-09T18:40:20.534138760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:40:20.534629 env[1138]: time="2024-02-09T18:40:20.534604920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:40:20.534848 env[1138]: time="2024-02-09T18:40:20.534820400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:40:20.534883 env[1138]: time="2024-02-09T18:40:20.534848560Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 18:40:20.534940 env[1138]: time="2024-02-09T18:40:20.534921600Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 18:40:20.534983 env[1138]: time="2024-02-09T18:40:20.534938800Z" level=info msg="metadata content store policy set" policy=shared Feb 9 18:40:20.539641 env[1138]: time="2024-02-09T18:40:20.539608960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 18:40:20.539715 env[1138]: time="2024-02-09T18:40:20.539643800Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Feb 9 18:40:20.539715 env[1138]: time="2024-02-09T18:40:20.539657960Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 18:40:20.539715 env[1138]: time="2024-02-09T18:40:20.539688400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 18:40:20.539715 env[1138]: time="2024-02-09T18:40:20.539704080Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 18:40:20.539803 env[1138]: time="2024-02-09T18:40:20.539717080Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 18:40:20.539803 env[1138]: time="2024-02-09T18:40:20.539731120Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 18:40:20.540246 env[1138]: time="2024-02-09T18:40:20.540204360Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 18:40:20.540300 env[1138]: time="2024-02-09T18:40:20.540246360Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 18:40:20.540300 env[1138]: time="2024-02-09T18:40:20.540262960Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 18:40:20.540300 env[1138]: time="2024-02-09T18:40:20.540277320Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 18:40:20.540300 env[1138]: time="2024-02-09T18:40:20.540290640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 18:40:20.540447 env[1138]: time="2024-02-09T18:40:20.540401000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 18:40:20.540548 env[1138]: time="2024-02-09T18:40:20.540527000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 18:40:20.540845 env[1138]: time="2024-02-09T18:40:20.540822840Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 18:40:20.540877 env[1138]: time="2024-02-09T18:40:20.540856880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 18:40:20.541063 env[1138]: time="2024-02-09T18:40:20.541040320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 18:40:20.541175 env[1138]: time="2024-02-09T18:40:20.541162000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 18:40:20.541202 env[1138]: time="2024-02-09T18:40:20.541178920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 18:40:20.541202 env[1138]: time="2024-02-09T18:40:20.541191440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 18:40:20.541272 env[1138]: time="2024-02-09T18:40:20.541204560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 18:40:20.541272 env[1138]: time="2024-02-09T18:40:20.541217040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Feb 9 18:40:20.541564 env[1138]: time="2024-02-09T18:40:20.541540240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 18:40:20.541564 env[1138]: time="2024-02-09T18:40:20.541563520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 18:40:20.541635 env[1138]: time="2024-02-09T18:40:20.541577880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 18:40:20.541635 env[1138]: time="2024-02-09T18:40:20.541594120Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 18:40:20.541737 env[1138]: time="2024-02-09T18:40:20.541716520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 18:40:20.541769 env[1138]: time="2024-02-09T18:40:20.541737560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 18:40:20.541769 env[1138]: time="2024-02-09T18:40:20.541750440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 18:40:20.541769 env[1138]: time="2024-02-09T18:40:20.541761840Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 18:40:20.541829 env[1138]: time="2024-02-09T18:40:20.541775680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 18:40:20.541829 env[1138]: time="2024-02-09T18:40:20.541787440Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 18:40:20.541829 env[1138]: time="2024-02-09T18:40:20.541803160Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 18:40:20.541888 env[1138]: time="2024-02-09T18:40:20.541840280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 18:40:20.542977 env[1138]: time="2024-02-09T18:40:20.542914680Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 18:40:20.545606 env[1138]: time="2024-02-09T18:40:20.542981160Z" level=info msg="Connect containerd service" Feb 9 18:40:20.545606 env[1138]: time="2024-02-09T18:40:20.543012960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 18:40:20.545606 env[1138]: time="2024-02-09T18:40:20.544414840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 18:40:20.545606 env[1138]: time="2024-02-09T18:40:20.544702320Z" level=info msg="Start subscribing containerd event" Feb 9 18:40:20.545606 env[1138]: time="2024-02-09T18:40:20.544753800Z" level=info msg="Start recovering state" Feb 9 18:40:20.545606 env[1138]: time="2024-02-09T18:40:20.544830640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 18:40:20.545606 env[1138]: time="2024-02-09T18:40:20.544840560Z" level=info msg="Start event monitor" Feb 9 18:40:20.545606 env[1138]: time="2024-02-09T18:40:20.544867400Z" level=info msg="Start snapshots syncer" Feb 9 18:40:20.545606 env[1138]: time="2024-02-09T18:40:20.544870000Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 9 18:40:20.545606 env[1138]: time="2024-02-09T18:40:20.544878120Z" level=info msg="Start cni network conf syncer for default" Feb 9 18:40:20.545606 env[1138]: time="2024-02-09T18:40:20.544887040Z" level=info msg="Start streaming server" Feb 9 18:40:20.545606 env[1138]: time="2024-02-09T18:40:20.544927680Z" level=info msg="containerd successfully booted in 0.069598s" Feb 9 18:40:20.545011 systemd[1]: Started containerd.service. Feb 9 18:40:20.551755 tar[1134]: ./host-local Feb 9 18:40:20.578397 tar[1134]: ./vrf Feb 9 18:40:20.607522 tar[1134]: ./bridge Feb 9 18:40:20.641384 tar[1134]: ./tuning Feb 9 18:40:20.669329 tar[1134]: ./firewall Feb 9 18:40:20.703787 tar[1134]: ./host-device Feb 9 18:40:20.734523 tar[1134]: ./sbr Feb 9 18:40:20.762327 tar[1134]: ./loopback Feb 9 18:40:20.789459 tar[1134]: ./dhcp Feb 9 18:40:20.798639 tar[1136]: linux-arm64/LICENSE Feb 9 18:40:20.798718 tar[1136]: linux-arm64/README.md Feb 9 18:40:20.802902 systemd[1]: Finished prepare-helm.service. Feb 9 18:40:20.803492 locksmithd[1168]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 18:40:20.865564 tar[1134]: ./ptp Feb 9 18:40:20.898433 tar[1134]: ./ipvlan Feb 9 18:40:20.925815 tar[1134]: ./bandwidth Feb 9 18:40:20.937158 systemd[1]: Finished prepare-critools.service. Feb 9 18:40:20.963128 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 18:40:21.475328 systemd-networkd[1038]: eth0: Gained IPv6LL Feb 9 18:40:21.922627 sshd_keygen[1131]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 18:40:21.939729 systemd[1]: Finished sshd-keygen.service. Feb 9 18:40:21.941946 systemd[1]: Starting issuegen.service... Feb 9 18:40:21.946285 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 18:40:21.946435 systemd[1]: Finished issuegen.service. Feb 9 18:40:21.948520 systemd[1]: Starting systemd-user-sessions.service... Feb 9 18:40:21.954089 systemd[1]: Finished systemd-user-sessions.service. Feb 9 18:40:21.956130 systemd[1]: Started getty@tty1.service. Feb 9 18:40:21.957925 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 18:40:21.958849 systemd[1]: Reached target getty.target. Feb 9 18:40:21.959624 systemd[1]: Reached target multi-user.target. Feb 9 18:40:21.961499 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 18:40:21.967774 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 18:40:21.967922 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 18:40:21.968873 systemd[1]: Startup finished in 563ms (kernel) + 5.715s (initrd) + 4.676s (userspace) = 10.956s. Feb 9 18:40:24.148400 systemd[1]: Created slice system-sshd.slice. Feb 9 18:40:24.149604 systemd[1]: Started sshd@0-10.0.0.114:22-10.0.0.1:52522.service. Feb 9 18:40:24.197000 sshd[1198]: Accepted publickey for core from 10.0.0.1 port 52522 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:40:24.198969 sshd[1198]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:40:24.208029 systemd-logind[1127]: New session 1 of user core. Feb 9 18:40:24.209012 systemd[1]: Created slice user-500.slice. Feb 9 18:40:24.210264 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 18:40:24.218283 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 18:40:24.219745 systemd[1]: Starting user@500.service... 
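The CRI settings dumped by containerd above map onto a config.toml along these lines; this is a sketch reconstructed from the logged values (SystemdCgroup, sandbox image, CNI paths), not the file actually on disk.

    # sketch of the CRI section implied by the dump above; the on-disk config may differ
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"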
Feb 9 18:40:24.222447 (systemd)[1201]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:40:24.283958 systemd[1201]: Queued start job for default target default.target. Feb 9 18:40:24.284444 systemd[1201]: Reached target paths.target. Feb 9 18:40:24.284463 systemd[1201]: Reached target sockets.target. Feb 9 18:40:24.284474 systemd[1201]: Reached target timers.target. Feb 9 18:40:24.284484 systemd[1201]: Reached target basic.target. Feb 9 18:40:24.284533 systemd[1201]: Reached target default.target. Feb 9 18:40:24.284557 systemd[1201]: Startup finished in 56ms. Feb 9 18:40:24.284741 systemd[1]: Started user@500.service. Feb 9 18:40:24.285705 systemd[1]: Started session-1.scope. Feb 9 18:40:24.336567 systemd[1]: Started sshd@1-10.0.0.114:22-10.0.0.1:52530.service. Feb 9 18:40:24.389896 sshd[1210]: Accepted publickey for core from 10.0.0.1 port 52530 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:40:24.390999 sshd[1210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:40:24.394206 systemd-logind[1127]: New session 2 of user core. Feb 9 18:40:24.395045 systemd[1]: Started session-2.scope. Feb 9 18:40:24.447663 sshd[1210]: pam_unix(sshd:session): session closed for user core Feb 9 18:40:24.449973 systemd[1]: sshd@1-10.0.0.114:22-10.0.0.1:52530.service: Deactivated successfully. Feb 9 18:40:24.450543 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 18:40:24.451018 systemd-logind[1127]: Session 2 logged out. Waiting for processes to exit. Feb 9 18:40:24.452226 systemd[1]: Started sshd@2-10.0.0.114:22-10.0.0.1:52542.service. Feb 9 18:40:24.452848 systemd-logind[1127]: Removed session 2. Feb 9 18:40:24.494983 sshd[1216]: Accepted publickey for core from 10.0.0.1 port 52542 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:40:24.496150 sshd[1216]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:40:24.499504 systemd-logind[1127]: New session 3 of user core. Feb 9 18:40:24.500366 systemd[1]: Started session-3.scope. Feb 9 18:40:24.550710 sshd[1216]: pam_unix(sshd:session): session closed for user core Feb 9 18:40:24.554006 systemd[1]: sshd@2-10.0.0.114:22-10.0.0.1:52542.service: Deactivated successfully. Feb 9 18:40:24.554623 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 18:40:24.555123 systemd-logind[1127]: Session 3 logged out. Waiting for processes to exit. Feb 9 18:40:24.556153 systemd[1]: Started sshd@3-10.0.0.114:22-10.0.0.1:52544.service. Feb 9 18:40:24.556973 systemd-logind[1127]: Removed session 3. Feb 9 18:40:24.599299 sshd[1222]: Accepted publickey for core from 10.0.0.1 port 52544 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:40:24.600482 sshd[1222]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:40:24.603738 systemd-logind[1127]: New session 4 of user core. Feb 9 18:40:24.604609 systemd[1]: Started session-4.scope. Feb 9 18:40:24.657443 sshd[1222]: pam_unix(sshd:session): session closed for user core Feb 9 18:40:24.660864 systemd[1]: Started sshd@4-10.0.0.114:22-10.0.0.1:52554.service. Feb 9 18:40:24.661382 systemd[1]: sshd@3-10.0.0.114:22-10.0.0.1:52544.service: Deactivated successfully. Feb 9 18:40:24.661942 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 18:40:24.662451 systemd-logind[1127]: Session 4 logged out. Waiting for processes to exit. Feb 9 18:40:24.663160 systemd-logind[1127]: Removed session 4. 
Feb 9 18:40:24.702812 sshd[1227]: Accepted publickey for core from 10.0.0.1 port 52554 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:40:24.703782 sshd[1227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:40:24.706511 systemd-logind[1127]: New session 5 of user core. Feb 9 18:40:24.707272 systemd[1]: Started session-5.scope. Feb 9 18:40:24.766928 sudo[1232]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 18:40:24.767363 sudo[1232]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:40:25.664138 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 18:40:25.670911 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 18:40:25.671257 systemd[1]: Reached target network-online.target. Feb 9 18:40:25.672459 systemd[1]: Starting docker.service... Feb 9 18:40:25.756856 env[1250]: time="2024-02-09T18:40:25.756796024Z" level=info msg="Starting up" Feb 9 18:40:25.758435 env[1250]: time="2024-02-09T18:40:25.758407435Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 18:40:25.758528 env[1250]: time="2024-02-09T18:40:25.758504898Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 18:40:25.758597 env[1250]: time="2024-02-09T18:40:25.758580438Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 18:40:25.758645 env[1250]: time="2024-02-09T18:40:25.758633333Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 18:40:25.760647 env[1250]: time="2024-02-09T18:40:25.760621683Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 18:40:25.760647 env[1250]: time="2024-02-09T18:40:25.760642358Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 18:40:25.760724 env[1250]: time="2024-02-09T18:40:25.760657240Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 18:40:25.760724 env[1250]: time="2024-02-09T18:40:25.760666733Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 18:40:25.911874 env[1250]: time="2024-02-09T18:40:25.911829782Z" level=info msg="Loading containers: start." Feb 9 18:40:26.012252 kernel: Initializing XFRM netlink socket Feb 9 18:40:26.037311 env[1250]: time="2024-02-09T18:40:26.037277222Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 18:40:26.104449 systemd-networkd[1038]: docker0: Link UP Feb 9 18:40:26.112914 env[1250]: time="2024-02-09T18:40:26.112885636Z" level=info msg="Loading containers: done." Feb 9 18:40:26.135291 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3137921269-merged.mount: Deactivated successfully. 
Feb 9 18:40:26.139419 env[1250]: time="2024-02-09T18:40:26.139377980Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 18:40:26.139574 env[1250]: time="2024-02-09T18:40:26.139544914Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 18:40:26.139666 env[1250]: time="2024-02-09T18:40:26.139643314Z" level=info msg="Daemon has completed initialization" Feb 9 18:40:26.152287 systemd[1]: Started docker.service. Feb 9 18:40:26.158036 env[1250]: time="2024-02-09T18:40:26.157920692Z" level=info msg="API listen on /run/docker.sock" Feb 9 18:40:26.173901 systemd[1]: Reloading. Feb 9 18:40:26.218471 /usr/lib/systemd/system-generators/torcx-generator[1392]: time="2024-02-09T18:40:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:40:26.218783 /usr/lib/systemd/system-generators/torcx-generator[1392]: time="2024-02-09T18:40:26Z" level=info msg="torcx already run" Feb 9 18:40:26.273596 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:40:26.273614 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:40:26.291257 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:40:26.361096 systemd[1]: Started kubelet.service. Feb 9 18:40:26.521210 kubelet[1429]: E0209 18:40:26.521139 1429 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 18:40:26.523854 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:40:26.523973 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:40:26.744501 env[1138]: time="2024-02-09T18:40:26.744391201Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 18:40:27.452728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4126411252.mount: Deactivated successfully. 
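The kubelet exit at 18:40:26 above is a flag-validation failure: no container runtime endpoint was supplied. With containerd serving on /run/containerd/containerd.sock (per its startup earlier), the missing setting is the --container-runtime-endpoint flag named in the error; one way to pass it is a systemd drop-in like the sketch below (the drop-in path and the KUBELET_EXTRA_ARGS variable are assumptions, not taken from this log). The kubelet started at 18:40:42 comes up without this error, so the endpoint is evidently supplied by the real unit configuration at that point.

    # hypothetical drop-in, e.g. /etc/systemd/system/kubelet.service.d/10-runtime.conf
    [Service]
    Environment="KUBELET_EXTRA_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock"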
Feb 9 18:40:29.053967 env[1138]: time="2024-02-09T18:40:29.053913387Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:29.055501 env[1138]: time="2024-02-09T18:40:29.055457237Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:29.057301 env[1138]: time="2024-02-09T18:40:29.057263064Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:29.059103 env[1138]: time="2024-02-09T18:40:29.059072141Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:29.059841 env[1138]: time="2024-02-09T18:40:29.059807667Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 9 18:40:29.070383 env[1138]: time="2024-02-09T18:40:29.070354726Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 18:40:30.979674 env[1138]: time="2024-02-09T18:40:30.979611178Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:30.985326 env[1138]: time="2024-02-09T18:40:30.985288033Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:30.987006 env[1138]: time="2024-02-09T18:40:30.986976066Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:30.991680 env[1138]: time="2024-02-09T18:40:30.991639338Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:30.992425 env[1138]: time="2024-02-09T18:40:30.992396546Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 9 18:40:31.001719 env[1138]: time="2024-02-09T18:40:31.001690752Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 18:40:32.111604 env[1138]: time="2024-02-09T18:40:32.111543171Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:32.113608 env[1138]: time="2024-02-09T18:40:32.113577073Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:32.115858 env[1138]: 
time="2024-02-09T18:40:32.115823200Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:32.117925 env[1138]: time="2024-02-09T18:40:32.117895226Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:32.118675 env[1138]: time="2024-02-09T18:40:32.118642665Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 9 18:40:32.134021 env[1138]: time="2024-02-09T18:40:32.133948842Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 18:40:33.204774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2357643872.mount: Deactivated successfully. Feb 9 18:40:33.618900 env[1138]: time="2024-02-09T18:40:33.618773404Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:33.620756 env[1138]: time="2024-02-09T18:40:33.620723187Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:33.622323 env[1138]: time="2024-02-09T18:40:33.622278934Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:33.623606 env[1138]: time="2024-02-09T18:40:33.623577027Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:33.623957 env[1138]: time="2024-02-09T18:40:33.623932629Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 18:40:33.632965 env[1138]: time="2024-02-09T18:40:33.632941166Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 18:40:34.090014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4059856064.mount: Deactivated successfully. 
Feb 9 18:40:34.094527 env[1138]: time="2024-02-09T18:40:34.094487439Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:34.095796 env[1138]: time="2024-02-09T18:40:34.095761981Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:34.097568 env[1138]: time="2024-02-09T18:40:34.097537605Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:34.099361 env[1138]: time="2024-02-09T18:40:34.099327612Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:34.099981 env[1138]: time="2024-02-09T18:40:34.099941163Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 18:40:34.108875 env[1138]: time="2024-02-09T18:40:34.108850334Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 18:40:34.848318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4070464906.mount: Deactivated successfully. Feb 9 18:40:36.723952 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 18:40:36.724145 systemd[1]: Stopped kubelet.service. Feb 9 18:40:36.725736 systemd[1]: Started kubelet.service. Feb 9 18:40:36.761037 env[1138]: time="2024-02-09T18:40:36.760968503Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:36.765149 env[1138]: time="2024-02-09T18:40:36.765107830Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:36.766983 env[1138]: time="2024-02-09T18:40:36.766948159Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:36.768862 env[1138]: time="2024-02-09T18:40:36.768831623Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:36.769556 env[1138]: time="2024-02-09T18:40:36.769522512Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 9 18:40:36.772962 kubelet[1485]: E0209 18:40:36.772209 1485 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 18:40:36.777487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:40:36.777629 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 9 18:40:36.780423 env[1138]: time="2024-02-09T18:40:36.780375440Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 18:40:37.376953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount93537950.mount: Deactivated successfully. Feb 9 18:40:37.829119 env[1138]: time="2024-02-09T18:40:37.829056355Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:37.830540 env[1138]: time="2024-02-09T18:40:37.830503865Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:37.831948 env[1138]: time="2024-02-09T18:40:37.831924025Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:37.833113 env[1138]: time="2024-02-09T18:40:37.833067593Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:37.833751 env[1138]: time="2024-02-09T18:40:37.833720128Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 9 18:40:42.590601 systemd[1]: Stopped kubelet.service. Feb 9 18:40:42.604202 systemd[1]: Reloading. Feb 9 18:40:42.648442 /usr/lib/systemd/system-generators/torcx-generator[1590]: time="2024-02-09T18:40:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:40:42.648475 /usr/lib/systemd/system-generators/torcx-generator[1590]: time="2024-02-09T18:40:42Z" level=info msg="torcx already run" Feb 9 18:40:42.705081 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:40:42.705099 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:40:42.722857 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:40:42.805382 systemd[1]: Started kubelet.service. Feb 9 18:40:42.857701 kubelet[1629]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:40:42.857701 kubelet[1629]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 18:40:42.858339 kubelet[1629]: I0209 18:40:42.857735 1629 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:40:42.859311 kubelet[1629]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:40:42.859311 kubelet[1629]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:40:43.775764 kubelet[1629]: I0209 18:40:43.775722 1629 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 18:40:43.775764 kubelet[1629]: I0209 18:40:43.775754 1629 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:40:43.775979 kubelet[1629]: I0209 18:40:43.775965 1629 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 18:40:43.781769 kubelet[1629]: I0209 18:40:43.781743 1629 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:40:43.782353 kubelet[1629]: E0209 18:40:43.782337 1629 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.114:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.114:6443: connect: connection refused Feb 9 18:40:43.784267 kubelet[1629]: W0209 18:40:43.784240 1629 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 18:40:43.785003 kubelet[1629]: I0209 18:40:43.784980 1629 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 18:40:43.785742 kubelet[1629]: I0209 18:40:43.785722 1629 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:40:43.785808 kubelet[1629]: I0209 18:40:43.785791 1629 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:40:43.785927 kubelet[1629]: I0209 18:40:43.785918 1629 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:40:43.785962 kubelet[1629]: I0209 18:40:43.785932 1629 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 18:40:43.786135 kubelet[1629]: I0209 18:40:43.786113 1629 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:40:43.790459 kubelet[1629]: I0209 18:40:43.790435 1629 kubelet.go:398] "Attempting to sync node with API server" Feb 9 18:40:43.790459 kubelet[1629]: I0209 18:40:43.790456 1629 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:40:43.790664 kubelet[1629]: I0209 18:40:43.790646 1629 kubelet.go:297] "Adding apiserver pod source" Feb 9 18:40:43.790664 kubelet[1629]: I0209 18:40:43.790661 1629 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:40:43.791431 kubelet[1629]: W0209 18:40:43.791384 1629 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Feb 9 18:40:43.791460 kubelet[1629]: E0209 18:40:43.791438 1629 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Feb 9 18:40:43.791797 kubelet[1629]: W0209 18:40:43.791759 1629 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.114:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Feb 9 18:40:43.791829 kubelet[1629]: E0209 18:40:43.791799 1629 
reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.114:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Feb 9 18:40:43.792478 kubelet[1629]: I0209 18:40:43.792458 1629 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:40:43.793931 kubelet[1629]: W0209 18:40:43.793913 1629 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 18:40:43.795109 kubelet[1629]: I0209 18:40:43.795082 1629 server.go:1186] "Started kubelet" Feb 9 18:40:43.798250 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 18:40:43.798419 kubelet[1629]: I0209 18:40:43.798400 1629 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:40:43.799838 kubelet[1629]: I0209 18:40:43.799821 1629 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:40:43.800597 kubelet[1629]: I0209 18:40:43.800486 1629 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 18:40:43.800597 kubelet[1629]: I0209 18:40:43.800563 1629 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:40:43.801531 kubelet[1629]: W0209 18:40:43.801487 1629 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Feb 9 18:40:43.801531 kubelet[1629]: E0209 18:40:43.801532 1629 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Feb 9 18:40:43.801628 kubelet[1629]: E0209 18:40:43.801536 1629 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b245e18eebfbc3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 40, 43, 794627523, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 40, 43, 794627523, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.114:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.114:6443: connect: connection refused'(may retry after sleeping) Feb 9 18:40:43.801628 kubelet[1629]: E0209 18:40:43.801597 1629 controller.go:146] failed to ensure lease exists, will retry in 200ms, 
error: Get "https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.114:6443: connect: connection refused Feb 9 18:40:43.802126 kubelet[1629]: E0209 18:40:43.801807 1629 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:40:43.802126 kubelet[1629]: E0209 18:40:43.801831 1629 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:40:43.803111 kubelet[1629]: I0209 18:40:43.803073 1629 server.go:451] "Adding debug handlers to kubelet server" Feb 9 18:40:43.821534 kubelet[1629]: I0209 18:40:43.821502 1629 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:40:43.821534 kubelet[1629]: I0209 18:40:43.821521 1629 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:40:43.821662 kubelet[1629]: I0209 18:40:43.821542 1629 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:40:43.825680 kubelet[1629]: I0209 18:40:43.823521 1629 policy_none.go:49] "None policy: Start" Feb 9 18:40:43.825680 kubelet[1629]: I0209 18:40:43.824041 1629 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:40:43.825680 kubelet[1629]: I0209 18:40:43.824068 1629 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:40:43.829928 systemd[1]: Created slice kubepods.slice. Feb 9 18:40:43.833845 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 18:40:43.836515 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 18:40:43.842064 kubelet[1629]: I0209 18:40:43.841976 1629 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:40:43.842271 kubelet[1629]: I0209 18:40:43.842250 1629 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:40:43.843380 kubelet[1629]: E0209 18:40:43.843357 1629 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 9 18:40:43.849103 kubelet[1629]: I0209 18:40:43.849086 1629 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 18:40:43.874006 kubelet[1629]: I0209 18:40:43.873983 1629 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 18:40:43.874006 kubelet[1629]: I0209 18:40:43.874007 1629 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 18:40:43.874332 kubelet[1629]: I0209 18:40:43.874024 1629 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 18:40:43.874332 kubelet[1629]: E0209 18:40:43.874079 1629 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 18:40:43.878103 kubelet[1629]: W0209 18:40:43.878080 1629 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Feb 9 18:40:43.878159 kubelet[1629]: E0209 18:40:43.878113 1629 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Feb 9 18:40:43.902250 kubelet[1629]: I0209 18:40:43.902214 1629 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:40:43.902679 kubelet[1629]: E0209 18:40:43.902662 1629 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Feb 9 18:40:43.974821 kubelet[1629]: I0209 18:40:43.974794 1629 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:40:43.975832 kubelet[1629]: I0209 18:40:43.975815 1629 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:40:43.976635 kubelet[1629]: I0209 18:40:43.976583 1629 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:40:43.978620 kubelet[1629]: I0209 18:40:43.978602 1629 status_manager.go:698] "Failed to get status for pod" podUID=e92bba974bb30f7961a45c73100fedf9 pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.114:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.114:6443: connect: connection refused" Feb 9 18:40:43.978957 kubelet[1629]: I0209 18:40:43.978934 1629 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.114:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.114:6443: connect: connection refused" Feb 9 18:40:43.979664 kubelet[1629]: I0209 18:40:43.979634 1629 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.114:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.114:6443: connect: connection refused" Feb 9 18:40:43.981659 systemd[1]: Created slice kubepods-burstable-pode92bba974bb30f7961a45c73100fedf9.slice. Feb 9 18:40:43.992711 systemd[1]: Created slice kubepods-burstable-pod550020dd9f101bcc23e1d3c651841c4d.slice. Feb 9 18:40:44.002165 kubelet[1629]: E0209 18:40:44.002140 1629 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.114:6443: connect: connection refused Feb 9 18:40:44.006306 systemd[1]: Created slice kubepods-burstable-pod72ae17a74a2eae76daac6d298477aff0.slice. 
Feb 9 18:40:44.102152 kubelet[1629]: I0209 18:40:44.102069 1629 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:40:44.102375 kubelet[1629]: I0209 18:40:44.102359 1629 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 18:40:44.102522 kubelet[1629]: I0209 18:40:44.102508 1629 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e92bba974bb30f7961a45c73100fedf9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e92bba974bb30f7961a45c73100fedf9\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:40:44.102632 kubelet[1629]: I0209 18:40:44.102623 1629 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:40:44.102739 kubelet[1629]: I0209 18:40:44.102728 1629 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:40:44.103397 kubelet[1629]: I0209 18:40:44.103373 1629 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:40:44.103572 kubelet[1629]: I0209 18:40:44.103550 1629 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e92bba974bb30f7961a45c73100fedf9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e92bba974bb30f7961a45c73100fedf9\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:40:44.103669 kubelet[1629]: I0209 18:40:44.103658 1629 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e92bba974bb30f7961a45c73100fedf9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e92bba974bb30f7961a45c73100fedf9\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:40:44.103757 kubelet[1629]: I0209 18:40:44.103747 1629 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 9 18:40:44.104006 kubelet[1629]: I0209 18:40:44.103987 1629 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:40:44.104329 kubelet[1629]: E0209 18:40:44.104311 1629 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Feb 9 18:40:44.292278 kubelet[1629]: E0209 18:40:44.292219 1629 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:44.292941 env[1138]: time="2024-02-09T18:40:44.292886478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e92bba974bb30f7961a45c73100fedf9,Namespace:kube-system,Attempt:0,}" Feb 9 18:40:44.295530 kubelet[1629]: E0209 18:40:44.295505 1629 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:44.295913 env[1138]: time="2024-02-09T18:40:44.295881203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}" Feb 9 18:40:44.308442 kubelet[1629]: E0209 18:40:44.308417 1629 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:44.309124 env[1138]: time="2024-02-09T18:40:44.309093772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}" Feb 9 18:40:44.403076 kubelet[1629]: E0209 18:40:44.402986 1629 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.114:6443: connect: connection refused Feb 9 18:40:44.505311 kubelet[1629]: I0209 18:40:44.505286 1629 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:40:44.505713 kubelet[1629]: E0209 18:40:44.505687 1629 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Feb 9 18:40:44.747355 kubelet[1629]: W0209 18:40:44.747282 1629 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.114:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Feb 9 18:40:44.747355 kubelet[1629]: E0209 18:40:44.747340 1629 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.114:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Feb 9 18:40:44.806743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1632391596.mount: Deactivated successfully. 
Feb 9 18:40:44.812181 env[1138]: time="2024-02-09T18:40:44.812146272Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:44.818273 env[1138]: time="2024-02-09T18:40:44.818218079Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:44.819154 env[1138]: time="2024-02-09T18:40:44.819110594Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:44.821355 env[1138]: time="2024-02-09T18:40:44.821330057Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:44.824492 env[1138]: time="2024-02-09T18:40:44.824461002Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:44.827765 env[1138]: time="2024-02-09T18:40:44.827737893Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:44.831255 env[1138]: time="2024-02-09T18:40:44.831211910Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:44.832029 env[1138]: time="2024-02-09T18:40:44.832003501Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:44.832824 env[1138]: time="2024-02-09T18:40:44.832802014Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:44.833616 env[1138]: time="2024-02-09T18:40:44.833590683Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:44.834528 env[1138]: time="2024-02-09T18:40:44.834505528Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:44.835367 env[1138]: time="2024-02-09T18:40:44.835320649Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:44.860942 env[1138]: time="2024-02-09T18:40:44.860865676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:40:44.861081 env[1138]: time="2024-02-09T18:40:44.860928383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:40:44.861188 env[1138]: time="2024-02-09T18:40:44.861164768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:40:44.861490 env[1138]: time="2024-02-09T18:40:44.861452175Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba078f5bf9dbacdf71201a5300990684438b5f26e68f7e4bebfd475a0f8be06a pid=1718 runtime=io.containerd.runc.v2 Feb 9 18:40:44.861490 env[1138]: time="2024-02-09T18:40:44.861459178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:40:44.861572 env[1138]: time="2024-02-09T18:40:44.861490872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:40:44.861572 env[1138]: time="2024-02-09T18:40:44.861506559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:40:44.861647 env[1138]: time="2024-02-09T18:40:44.861599641Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a899ef60442b2c90b5a70927dfce664a99cecb666cd52a05044dbf9f266e195 pid=1719 runtime=io.containerd.runc.v2 Feb 9 18:40:44.863173 env[1138]: time="2024-02-09T18:40:44.863086259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:40:44.863173 env[1138]: time="2024-02-09T18:40:44.863133520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:40:44.863173 env[1138]: time="2024-02-09T18:40:44.863144124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:40:44.863365 env[1138]: time="2024-02-09T18:40:44.863305916Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/268ad781940150ac8dedcbb03505d766b69cbfc004ba3cc2ae5c81bb9b2c6255 pid=1725 runtime=io.containerd.runc.v2 Feb 9 18:40:44.875910 systemd[1]: Started cri-containerd-0a899ef60442b2c90b5a70927dfce664a99cecb666cd52a05044dbf9f266e195.scope. Feb 9 18:40:44.877290 systemd[1]: Started cri-containerd-268ad781940150ac8dedcbb03505d766b69cbfc004ba3cc2ae5c81bb9b2c6255.scope. Feb 9 18:40:44.878309 systemd[1]: Started cri-containerd-ba078f5bf9dbacdf71201a5300990684438b5f26e68f7e4bebfd475a0f8be06a.scope. 
Feb 9 18:40:44.942502 env[1138]: time="2024-02-09T18:40:44.942464273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a899ef60442b2c90b5a70927dfce664a99cecb666cd52a05044dbf9f266e195\"" Feb 9 18:40:44.943542 kubelet[1629]: E0209 18:40:44.943516 1629 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:44.945163 env[1138]: time="2024-02-09T18:40:44.945132974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e92bba974bb30f7961a45c73100fedf9,Namespace:kube-system,Attempt:0,} returns sandbox id \"268ad781940150ac8dedcbb03505d766b69cbfc004ba3cc2ae5c81bb9b2c6255\"" Feb 9 18:40:44.947329 env[1138]: time="2024-02-09T18:40:44.947270400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba078f5bf9dbacdf71201a5300990684438b5f26e68f7e4bebfd475a0f8be06a\"" Feb 9 18:40:44.947734 kubelet[1629]: E0209 18:40:44.947698 1629 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:44.947954 kubelet[1629]: E0209 18:40:44.947931 1629 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:44.949892 env[1138]: time="2024-02-09T18:40:44.949859346Z" level=info msg="CreateContainer within sandbox \"0a899ef60442b2c90b5a70927dfce664a99cecb666cd52a05044dbf9f266e195\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 18:40:44.951438 env[1138]: time="2024-02-09T18:40:44.951411233Z" level=info msg="CreateContainer within sandbox \"ba078f5bf9dbacdf71201a5300990684438b5f26e68f7e4bebfd475a0f8be06a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 18:40:44.951619 env[1138]: time="2024-02-09T18:40:44.951581948Z" level=info msg="CreateContainer within sandbox \"268ad781940150ac8dedcbb03505d766b69cbfc004ba3cc2ae5c81bb9b2c6255\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 18:40:44.969189 env[1138]: time="2024-02-09T18:40:44.969153366Z" level=info msg="CreateContainer within sandbox \"0a899ef60442b2c90b5a70927dfce664a99cecb666cd52a05044dbf9f266e195\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"873ef526441c4bbd8699a33542607cb54b5d6805bac8872acb1f411db0f566bb\"" Feb 9 18:40:44.969871 env[1138]: time="2024-02-09T18:40:44.969781644Z" level=info msg="StartContainer for \"873ef526441c4bbd8699a33542607cb54b5d6805bac8872acb1f411db0f566bb\"" Feb 9 18:40:44.969968 env[1138]: time="2024-02-09T18:40:44.969839630Z" level=info msg="CreateContainer within sandbox \"ba078f5bf9dbacdf71201a5300990684438b5f26e68f7e4bebfd475a0f8be06a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"512172cec77beba016c9ccb7cc9bd0c7e28ac84b212d988c793f14d902012524\"" Feb 9 18:40:44.970342 env[1138]: time="2024-02-09T18:40:44.970319642Z" level=info msg="StartContainer for \"512172cec77beba016c9ccb7cc9bd0c7e28ac84b212d988c793f14d902012524\"" Feb 9 18:40:44.970495 env[1138]: time="2024-02-09T18:40:44.970457583Z" level=info 
msg="CreateContainer within sandbox \"268ad781940150ac8dedcbb03505d766b69cbfc004ba3cc2ae5c81bb9b2c6255\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6e5dc91dccd26e729d4075a2a7d6d0aaaf99edb91289e4f922b8d81dbed8af3e\"" Feb 9 18:40:44.970747 env[1138]: time="2024-02-09T18:40:44.970722220Z" level=info msg="StartContainer for \"6e5dc91dccd26e729d4075a2a7d6d0aaaf99edb91289e4f922b8d81dbed8af3e\"" Feb 9 18:40:44.984204 systemd[1]: Started cri-containerd-512172cec77beba016c9ccb7cc9bd0c7e28ac84b212d988c793f14d902012524.scope. Feb 9 18:40:44.988436 systemd[1]: Started cri-containerd-6e5dc91dccd26e729d4075a2a7d6d0aaaf99edb91289e4f922b8d81dbed8af3e.scope. Feb 9 18:40:44.994998 systemd[1]: Started cri-containerd-873ef526441c4bbd8699a33542607cb54b5d6805bac8872acb1f411db0f566bb.scope. Feb 9 18:40:45.039089 kubelet[1629]: W0209 18:40:45.038933 1629 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Feb 9 18:40:45.039089 kubelet[1629]: E0209 18:40:45.039006 1629 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Feb 9 18:40:45.063310 env[1138]: time="2024-02-09T18:40:45.063183358Z" level=info msg="StartContainer for \"873ef526441c4bbd8699a33542607cb54b5d6805bac8872acb1f411db0f566bb\" returns successfully" Feb 9 18:40:45.094804 env[1138]: time="2024-02-09T18:40:45.094756387Z" level=info msg="StartContainer for \"6e5dc91dccd26e729d4075a2a7d6d0aaaf99edb91289e4f922b8d81dbed8af3e\" returns successfully" Feb 9 18:40:45.103806 env[1138]: time="2024-02-09T18:40:45.103763436Z" level=info msg="StartContainer for \"512172cec77beba016c9ccb7cc9bd0c7e28ac84b212d988c793f14d902012524\" returns successfully" Feb 9 18:40:45.123338 kubelet[1629]: W0209 18:40:45.123247 1629 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Feb 9 18:40:45.123338 kubelet[1629]: E0209 18:40:45.123309 1629 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Feb 9 18:40:45.143814 kubelet[1629]: W0209 18:40:45.143708 1629 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Feb 9 18:40:45.143814 kubelet[1629]: E0209 18:40:45.143787 1629 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Feb 9 18:40:45.204454 kubelet[1629]: E0209 18:40:45.204412 1629 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get 
"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.114:6443: connect: connection refused Feb 9 18:40:45.307900 kubelet[1629]: I0209 18:40:45.307533 1629 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:40:45.880966 kubelet[1629]: E0209 18:40:45.880930 1629 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:45.883101 kubelet[1629]: E0209 18:40:45.883046 1629 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:45.884840 kubelet[1629]: E0209 18:40:45.884821 1629 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:46.887776 kubelet[1629]: E0209 18:40:46.887283 1629 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:46.887776 kubelet[1629]: E0209 18:40:46.887357 1629 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:46.887776 kubelet[1629]: E0209 18:40:46.887737 1629 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:47.887840 kubelet[1629]: E0209 18:40:47.887797 1629 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:48.140543 kubelet[1629]: I0209 18:40:48.140433 1629 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 18:40:48.794562 kubelet[1629]: I0209 18:40:48.794523 1629 apiserver.go:52] "Watching apiserver" Feb 9 18:40:48.801458 kubelet[1629]: I0209 18:40:48.801428 1629 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:40:48.831616 kubelet[1629]: I0209 18:40:48.831576 1629 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:40:49.574498 kubelet[1629]: E0209 18:40:49.574470 1629 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:49.889695 kubelet[1629]: E0209 18:40:49.889601 1629 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:51.169997 systemd[1]: Reloading. 
Feb 9 18:40:51.228337 /usr/lib/systemd/system-generators/torcx-generator[1962]: time="2024-02-09T18:40:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:40:51.228437 /usr/lib/systemd/system-generators/torcx-generator[1962]: time="2024-02-09T18:40:51Z" level=info msg="torcx already run" Feb 9 18:40:51.293079 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:40:51.293099 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:40:51.312596 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:40:51.414839 systemd[1]: Stopping kubelet.service... Feb 9 18:40:51.434832 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 18:40:51.435176 systemd[1]: Stopped kubelet.service. Feb 9 18:40:51.435254 systemd[1]: kubelet.service: Consumed 1.313s CPU time. Feb 9 18:40:51.437792 systemd[1]: Started kubelet.service. Feb 9 18:40:51.491729 kubelet[1999]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:40:51.491729 kubelet[1999]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:40:51.492071 kubelet[1999]: I0209 18:40:51.491823 1999 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:40:51.493621 kubelet[1999]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:40:51.493621 kubelet[1999]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:40:51.497508 kubelet[1999]: I0209 18:40:51.497439 1999 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 18:40:51.497508 kubelet[1999]: I0209 18:40:51.497474 1999 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:40:51.497725 kubelet[1999]: I0209 18:40:51.497707 1999 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 18:40:51.499166 kubelet[1999]: I0209 18:40:51.499137 1999 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 18:40:51.500126 kubelet[1999]: I0209 18:40:51.500092 1999 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:40:51.502308 kubelet[1999]: W0209 18:40:51.502280 1999 machine.go:65] Cannot read vendor id correctly, set empty. 
Feb 9 18:40:51.508169 kubelet[1999]: I0209 18:40:51.508114 1999 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 18:40:51.508496 kubelet[1999]: I0209 18:40:51.508471 1999 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:40:51.508605 kubelet[1999]: I0209 18:40:51.508583 1999 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:40:51.508688 kubelet[1999]: I0209 18:40:51.508619 1999 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:40:51.508688 kubelet[1999]: I0209 18:40:51.508638 1999 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 18:40:51.508688 kubelet[1999]: I0209 18:40:51.508669 1999 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:40:51.516672 kubelet[1999]: I0209 18:40:51.516637 1999 kubelet.go:398] "Attempting to sync node with API server" Feb 9 18:40:51.516672 kubelet[1999]: I0209 18:40:51.516668 1999 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:40:51.519258 kubelet[1999]: I0209 18:40:51.517632 1999 kubelet.go:297] "Adding apiserver pod source" Feb 9 18:40:51.519258 kubelet[1999]: I0209 18:40:51.517654 1999 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:40:51.524214 kubelet[1999]: I0209 18:40:51.524191 1999 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:40:51.530691 kubelet[1999]: I0209 18:40:51.530657 1999 server.go:1186] "Started kubelet" Feb 9 18:40:51.531068 kubelet[1999]: I0209 18:40:51.531046 1999 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:40:51.534400 kubelet[1999]: I0209 18:40:51.534365 1999 server.go:451] "Adding debug handlers to kubelet server" Feb 9 18:40:51.535139 kubelet[1999]: E0209 18:40:51.535113 1999 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:40:51.535139 kubelet[1999]: E0209 18:40:51.535139 1999 kubelet.go:1386] "Image 
garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:40:51.535953 kubelet[1999]: I0209 18:40:51.535934 1999 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:40:51.536793 kubelet[1999]: I0209 18:40:51.536768 1999 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 18:40:51.539773 kubelet[1999]: E0209 18:40:51.537177 1999 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 18:40:51.539773 kubelet[1999]: I0209 18:40:51.537666 1999 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:40:51.591350 kubelet[1999]: I0209 18:40:51.591325 1999 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 18:40:51.605983 kubelet[1999]: I0209 18:40:51.605957 1999 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:40:51.605983 kubelet[1999]: I0209 18:40:51.605977 1999 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:40:51.606141 kubelet[1999]: I0209 18:40:51.605996 1999 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:40:51.606169 kubelet[1999]: I0209 18:40:51.606152 1999 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 18:40:51.606169 kubelet[1999]: I0209 18:40:51.606166 1999 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 18:40:51.606212 kubelet[1999]: I0209 18:40:51.606173 1999 policy_none.go:49] "None policy: Start" Feb 9 18:40:51.606558 kubelet[1999]: I0209 18:40:51.606513 1999 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 18:40:51.606735 kubelet[1999]: I0209 18:40:51.606713 1999 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 18:40:51.606942 kubelet[1999]: I0209 18:40:51.606865 1999 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:40:51.607032 kubelet[1999]: I0209 18:40:51.607019 1999 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:40:51.609396 kubelet[1999]: I0209 18:40:51.609336 1999 state_mem.go:75] "Updated machine memory state" Feb 9 18:40:51.610094 kubelet[1999]: I0209 18:40:51.610072 1999 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 18:40:51.610179 kubelet[1999]: E0209 18:40:51.610155 1999 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 18:40:51.615026 kubelet[1999]: I0209 18:40:51.614820 1999 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:40:51.615152 kubelet[1999]: I0209 18:40:51.615045 1999 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:40:51.640850 kubelet[1999]: I0209 18:40:51.640814 1999 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:40:51.648174 kubelet[1999]: I0209 18:40:51.647866 1999 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 9 18:40:51.648174 kubelet[1999]: I0209 18:40:51.647960 1999 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 18:40:51.711839 kubelet[1999]: I0209 18:40:51.710710 1999 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:40:51.711839 kubelet[1999]: I0209 18:40:51.711027 1999 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:40:51.711839 kubelet[1999]: I0209 18:40:51.711150 1999 topology_manager.go:210] 
"Topology Admit Handler" Feb 9 18:40:51.720656 kubelet[1999]: E0209 18:40:51.720628 1999 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 9 18:40:51.738298 kubelet[1999]: I0209 18:40:51.738265 1999 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e92bba974bb30f7961a45c73100fedf9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e92bba974bb30f7961a45c73100fedf9\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:40:51.738494 kubelet[1999]: I0209 18:40:51.738480 1999 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e92bba974bb30f7961a45c73100fedf9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e92bba974bb30f7961a45c73100fedf9\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:40:51.738596 kubelet[1999]: I0209 18:40:51.738585 1999 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e92bba974bb30f7961a45c73100fedf9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e92bba974bb30f7961a45c73100fedf9\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:40:51.738710 kubelet[1999]: I0209 18:40:51.738698 1999 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:40:51.738813 kubelet[1999]: I0209 18:40:51.738801 1999 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:40:51.738922 kubelet[1999]: I0209 18:40:51.738910 1999 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 18:40:51.739029 kubelet[1999]: I0209 18:40:51.739018 1999 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:40:51.739126 kubelet[1999]: I0209 18:40:51.739114 1999 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:40:51.739220 kubelet[1999]: I0209 18:40:51.739209 1999 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:40:51.921949 kubelet[1999]: E0209 18:40:51.921894 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:52.016853 kubelet[1999]: E0209 18:40:52.016821 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:52.022290 kubelet[1999]: E0209 18:40:52.022262 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:52.518724 kubelet[1999]: I0209 18:40:52.518668 1999 apiserver.go:52] "Watching apiserver" Feb 9 18:40:52.738597 kubelet[1999]: I0209 18:40:52.738559 1999 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:40:52.745717 kubelet[1999]: I0209 18:40:52.745689 1999 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:40:52.840032 sudo[1232]: pam_unix(sudo:session): session closed for user root Feb 9 18:40:52.841797 sshd[1227]: pam_unix(sshd:session): session closed for user core Feb 9 18:40:52.844193 systemd[1]: sshd@4-10.0.0.114:22-10.0.0.1:52554.service: Deactivated successfully. Feb 9 18:40:52.845077 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 18:40:52.845288 systemd[1]: session-5.scope: Consumed 5.502s CPU time. Feb 9 18:40:52.845696 systemd-logind[1127]: Session 5 logged out. Waiting for processes to exit. Feb 9 18:40:52.846449 systemd-logind[1127]: Removed session 5. 
Feb 9 18:40:53.132934 kubelet[1999]: E0209 18:40:53.132839 1999 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 9 18:40:53.133293 kubelet[1999]: E0209 18:40:53.133135 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:53.322088 kubelet[1999]: E0209 18:40:53.322057 1999 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 18:40:53.322786 kubelet[1999]: E0209 18:40:53.322771 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:53.522332 kubelet[1999]: E0209 18:40:53.522300 1999 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 9 18:40:53.522720 kubelet[1999]: E0209 18:40:53.522709 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:53.618497 kubelet[1999]: E0209 18:40:53.618454 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:53.618618 kubelet[1999]: E0209 18:40:53.618579 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:53.618910 kubelet[1999]: E0209 18:40:53.618874 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:54.125954 kubelet[1999]: I0209 18:40:54.125579 1999 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.125534707 pod.CreationTimestamp="2024-02-09 18:40:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:40:54.125094265 +0000 UTC m=+2.684117828" watchObservedRunningTime="2024-02-09 18:40:54.125534707 +0000 UTC m=+2.684558230" Feb 9 18:40:54.125954 kubelet[1999]: I0209 18:40:54.125654 1999 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.125639157 pod.CreationTimestamp="2024-02-09 18:40:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:40:53.729356038 +0000 UTC m=+2.288379561" watchObservedRunningTime="2024-02-09 18:40:54.125639157 +0000 UTC m=+2.684662680" Feb 9 18:40:54.619361 kubelet[1999]: E0209 18:40:54.619330 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:54.922988 kubelet[1999]: I0209 18:40:54.922742 1999 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" 
podStartSLOduration=5.922707152 pod.CreationTimestamp="2024-02-09 18:40:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:40:54.922704512 +0000 UTC m=+3.481728035" watchObservedRunningTime="2024-02-09 18:40:54.922707152 +0000 UTC m=+3.481730675" Feb 9 18:40:57.876643 kubelet[1999]: E0209 18:40:57.876593 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:58.017919 kubelet[1999]: E0209 18:40:58.017882 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:58.624139 kubelet[1999]: E0209 18:40:58.624111 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:58.624965 kubelet[1999]: E0209 18:40:58.624942 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:03.845897 kubelet[1999]: E0209 18:41:03.845857 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:04.289300 kubelet[1999]: I0209 18:41:04.289263 1999 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 18:41:04.289617 env[1138]: time="2024-02-09T18:41:04.289572583Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 18:41:04.289860 kubelet[1999]: I0209 18:41:04.289749 1999 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 18:41:05.083135 kubelet[1999]: I0209 18:41:05.083099 1999 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:41:05.089051 systemd[1]: Created slice kubepods-besteffort-pod13388656_517d_4adb_8365_72de0b785d54.slice. Feb 9 18:41:05.091443 kubelet[1999]: I0209 18:41:05.091408 1999 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:41:05.102005 systemd[1]: Created slice kubepods-burstable-podfe49569e_1619_42e1_8d59_3b0029db50bb.slice. 
Feb 9 18:41:05.124847 kubelet[1999]: I0209 18:41:05.124813 1999 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13388656-517d-4adb-8365-72de0b785d54-lib-modules\") pod \"kube-proxy-2746w\" (UID: \"13388656-517d-4adb-8365-72de0b785d54\") " pod="kube-system/kube-proxy-2746w" Feb 9 18:41:05.124847 kubelet[1999]: I0209 18:41:05.124854 1999 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2mtj\" (UniqueName: \"kubernetes.io/projected/13388656-517d-4adb-8365-72de0b785d54-kube-api-access-c2mtj\") pod \"kube-proxy-2746w\" (UID: \"13388656-517d-4adb-8365-72de0b785d54\") " pod="kube-system/kube-proxy-2746w" Feb 9 18:41:05.125020 kubelet[1999]: I0209 18:41:05.124876 1999 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/fe49569e-1619-42e1-8d59-3b0029db50bb-cni\") pod \"kube-flannel-ds-6lv47\" (UID: \"fe49569e-1619-42e1-8d59-3b0029db50bb\") " pod="kube-flannel/kube-flannel-ds-6lv47" Feb 9 18:41:05.125020 kubelet[1999]: I0209 18:41:05.124895 1999 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe49569e-1619-42e1-8d59-3b0029db50bb-xtables-lock\") pod \"kube-flannel-ds-6lv47\" (UID: \"fe49569e-1619-42e1-8d59-3b0029db50bb\") " pod="kube-flannel/kube-flannel-ds-6lv47" Feb 9 18:41:05.125273 kubelet[1999]: I0209 18:41:05.125246 1999 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk5cc\" (UniqueName: \"kubernetes.io/projected/fe49569e-1619-42e1-8d59-3b0029db50bb-kube-api-access-lk5cc\") pod \"kube-flannel-ds-6lv47\" (UID: \"fe49569e-1619-42e1-8d59-3b0029db50bb\") " pod="kube-flannel/kube-flannel-ds-6lv47" Feb 9 18:41:05.125341 kubelet[1999]: I0209 18:41:05.125289 1999 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/fe49569e-1619-42e1-8d59-3b0029db50bb-flannel-cfg\") pod \"kube-flannel-ds-6lv47\" (UID: \"fe49569e-1619-42e1-8d59-3b0029db50bb\") " pod="kube-flannel/kube-flannel-ds-6lv47" Feb 9 18:41:05.125341 kubelet[1999]: I0209 18:41:05.125318 1999 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/13388656-517d-4adb-8365-72de0b785d54-kube-proxy\") pod \"kube-proxy-2746w\" (UID: \"13388656-517d-4adb-8365-72de0b785d54\") " pod="kube-system/kube-proxy-2746w" Feb 9 18:41:05.125341 kubelet[1999]: I0209 18:41:05.125339 1999 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13388656-517d-4adb-8365-72de0b785d54-xtables-lock\") pod \"kube-proxy-2746w\" (UID: \"13388656-517d-4adb-8365-72de0b785d54\") " pod="kube-system/kube-proxy-2746w" Feb 9 18:41:05.125416 kubelet[1999]: I0209 18:41:05.125364 1999 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fe49569e-1619-42e1-8d59-3b0029db50bb-run\") pod \"kube-flannel-ds-6lv47\" (UID: \"fe49569e-1619-42e1-8d59-3b0029db50bb\") " pod="kube-flannel/kube-flannel-ds-6lv47" Feb 9 18:41:05.125416 kubelet[1999]: I0209 18:41:05.125399 1999 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/fe49569e-1619-42e1-8d59-3b0029db50bb-cni-plugin\") pod \"kube-flannel-ds-6lv47\" (UID: \"fe49569e-1619-42e1-8d59-3b0029db50bb\") " pod="kube-flannel/kube-flannel-ds-6lv47" Feb 9 18:41:05.397629 kubelet[1999]: E0209 18:41:05.397515 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:05.398531 env[1138]: time="2024-02-09T18:41:05.398473599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2746w,Uid:13388656-517d-4adb-8365-72de0b785d54,Namespace:kube-system,Attempt:0,}" Feb 9 18:41:05.405004 kubelet[1999]: E0209 18:41:05.404963 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:05.405585 env[1138]: time="2024-02-09T18:41:05.405370847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-6lv47,Uid:fe49569e-1619-42e1-8d59-3b0029db50bb,Namespace:kube-flannel,Attempt:0,}" Feb 9 18:41:05.415117 env[1138]: time="2024-02-09T18:41:05.415054643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:41:05.415117 env[1138]: time="2024-02-09T18:41:05.415094005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:41:05.415117 env[1138]: time="2024-02-09T18:41:05.415104286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:41:05.415489 env[1138]: time="2024-02-09T18:41:05.415438384Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/34b0c3953b0a3216da60a4ebd2d2e599677a0efffe75288082ef2f3f4d3baea2 pid=2094 runtime=io.containerd.runc.v2 Feb 9 18:41:05.430111 env[1138]: time="2024-02-09T18:41:05.430032922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:41:05.430111 env[1138]: time="2024-02-09T18:41:05.430073164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:41:05.430305 env[1138]: time="2024-02-09T18:41:05.430083764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:41:05.430305 env[1138]: time="2024-02-09T18:41:05.430217571Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/86abd57488f3310e1b0136dcf11b2ccff545b6b140403716bdea15bc0c7373e2 pid=2118 runtime=io.containerd.runc.v2 Feb 9 18:41:05.432007 systemd[1]: Started cri-containerd-34b0c3953b0a3216da60a4ebd2d2e599677a0efffe75288082ef2f3f4d3baea2.scope. Feb 9 18:41:05.441151 systemd[1]: Started cri-containerd-86abd57488f3310e1b0136dcf11b2ccff545b6b140403716bdea15bc0c7373e2.scope. 
Feb 9 18:41:05.486736 env[1138]: time="2024-02-09T18:41:05.486685261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2746w,Uid:13388656-517d-4adb-8365-72de0b785d54,Namespace:kube-system,Attempt:0,} returns sandbox id \"34b0c3953b0a3216da60a4ebd2d2e599677a0efffe75288082ef2f3f4d3baea2\"" Feb 9 18:41:05.487936 kubelet[1999]: E0209 18:41:05.487435 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:05.491188 env[1138]: time="2024-02-09T18:41:05.491145819Z" level=info msg="CreateContainer within sandbox \"34b0c3953b0a3216da60a4ebd2d2e599677a0efffe75288082ef2f3f4d3baea2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 18:41:05.493362 env[1138]: time="2024-02-09T18:41:05.493324735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-6lv47,Uid:fe49569e-1619-42e1-8d59-3b0029db50bb,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"86abd57488f3310e1b0136dcf11b2ccff545b6b140403716bdea15bc0c7373e2\"" Feb 9 18:41:05.494331 kubelet[1999]: E0209 18:41:05.493959 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:05.495206 env[1138]: time="2024-02-09T18:41:05.495177114Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0\"" Feb 9 18:41:05.507036 env[1138]: time="2024-02-09T18:41:05.506998784Z" level=info msg="CreateContainer within sandbox \"34b0c3953b0a3216da60a4ebd2d2e599677a0efffe75288082ef2f3f4d3baea2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d1778fbd4aaf009a833c658ffc6978b8865b31c3ed2a00731f8368bd9fec8554\"" Feb 9 18:41:05.508279 env[1138]: time="2024-02-09T18:41:05.507733063Z" level=info msg="StartContainer for \"d1778fbd4aaf009a833c658ffc6978b8865b31c3ed2a00731f8368bd9fec8554\"" Feb 9 18:41:05.526342 systemd[1]: Started cri-containerd-d1778fbd4aaf009a833c658ffc6978b8865b31c3ed2a00731f8368bd9fec8554.scope. Feb 9 18:41:05.574392 env[1138]: time="2024-02-09T18:41:05.574350974Z" level=info msg="StartContainer for \"d1778fbd4aaf009a833c658ffc6978b8865b31c3ed2a00731f8368bd9fec8554\" returns successfully" Feb 9 18:41:05.636296 kubelet[1999]: E0209 18:41:05.633544 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:06.013529 update_engine[1130]: I0209 18:41:06.013460 1130 update_attempter.cc:509] Updating boot flags... Feb 9 18:41:06.583701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2125250725.mount: Deactivated successfully. 
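The sequence above is the normal CRI flow: containerd first returns a pod sandbox id for kube-proxy-2746w, then kubelet creates and starts the kube-proxy container inside it while the flannel image pull begins for the other sandbox. If the node needs to be inspected by hand, the same objects are visible through standard crictl subcommands (the ids shown are truncated forms of the ones logged above):

    crictl pods                 # list pod sandboxes, e.g. kube-proxy-2746w and kube-flannel-ds-6lv47
    crictl ps -a                # list containers in all states, including short-lived init containers
    crictl logs d1778fbd4aaf0   # output of a container by (truncated) id, here the kube-proxy container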
Feb 9 18:41:06.622157 env[1138]: time="2024-02-09T18:41:06.622114041Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:41:06.623693 env[1138]: time="2024-02-09T18:41:06.623658159Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b04a1a4152e14ddc6c26adc946baca3226718fa1acce540c015ac593e50218a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:41:06.625879 env[1138]: time="2024-02-09T18:41:06.625849470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:41:06.627485 env[1138]: time="2024-02-09T18:41:06.627448431Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:41:06.627965 env[1138]: time="2024-02-09T18:41:06.627933776Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0\" returns image reference \"sha256:b04a1a4152e14ddc6c26adc946baca3226718fa1acce540c015ac593e50218a9\"" Feb 9 18:41:06.631129 env[1138]: time="2024-02-09T18:41:06.631083855Z" level=info msg="CreateContainer within sandbox \"86abd57488f3310e1b0136dcf11b2ccff545b6b140403716bdea15bc0c7373e2\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 9 18:41:06.634628 kubelet[1999]: E0209 18:41:06.634578 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:06.639486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount288655021.mount: Deactivated successfully. Feb 9 18:41:06.642775 env[1138]: time="2024-02-09T18:41:06.642740166Z" level=info msg="CreateContainer within sandbox \"86abd57488f3310e1b0136dcf11b2ccff545b6b140403716bdea15bc0c7373e2\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"0f88673df8196bc809d7e0959ebbac3f81ab75f6fb27465334aa0d2442cdb02b\"" Feb 9 18:41:06.643160 env[1138]: time="2024-02-09T18:41:06.643135506Z" level=info msg="StartContainer for \"0f88673df8196bc809d7e0959ebbac3f81ab75f6fb27465334aa0d2442cdb02b\"" Feb 9 18:41:06.656264 systemd[1]: Started cri-containerd-0f88673df8196bc809d7e0959ebbac3f81ab75f6fb27465334aa0d2442cdb02b.scope. Feb 9 18:41:06.692557 env[1138]: time="2024-02-09T18:41:06.692516929Z" level=info msg="StartContainer for \"0f88673df8196bc809d7e0959ebbac3f81ab75f6fb27465334aa0d2442cdb02b\" returns successfully" Feb 9 18:41:06.696492 systemd[1]: cri-containerd-0f88673df8196bc809d7e0959ebbac3f81ab75f6fb27465334aa0d2442cdb02b.scope: Deactivated successfully. 
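The volume names attached earlier for kube-flannel-ds-6lv47 (run, cni-plugin, cni, flannel-cfg, xtables-lock) correspond to the mounts declared in the flannel DaemonSet. A sketch of that pod's volumes section, assuming the stock upstream manifest is in use; the hostPath locations and ConfigMap name are the upstream defaults and are not shown in this log:

    volumes:
      - name: run
        hostPath:
          path: /run/flannel        # where subnet.env is written (assumed upstream default)
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin        # destination for the flannel CNI binary (assumed)
      - name: cni
        hostPath:
          path: /etc/cni/net.d      # CNI network config directory (assumed)
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg    # assumed ConfigMap name from the upstream manifest
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate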
Feb 9 18:41:06.727543 env[1138]: time="2024-02-09T18:41:06.727497902Z" level=info msg="shim disconnected" id=0f88673df8196bc809d7e0959ebbac3f81ab75f6fb27465334aa0d2442cdb02b Feb 9 18:41:06.727794 env[1138]: time="2024-02-09T18:41:06.727765315Z" level=warning msg="cleaning up after shim disconnected" id=0f88673df8196bc809d7e0959ebbac3f81ab75f6fb27465334aa0d2442cdb02b namespace=k8s.io Feb 9 18:41:06.727863 env[1138]: time="2024-02-09T18:41:06.727849960Z" level=info msg="cleaning up dead shim" Feb 9 18:41:06.734301 env[1138]: time="2024-02-09T18:41:06.734266245Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:41:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2367 runtime=io.containerd.runc.v2\n" Feb 9 18:41:07.637569 kubelet[1999]: E0209 18:41:07.637527 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:07.645801 env[1138]: time="2024-02-09T18:41:07.643312537Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2\"" Feb 9 18:41:07.648475 kubelet[1999]: I0209 18:41:07.648439 1999 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2746w" podStartSLOduration=2.648402182 pod.CreationTimestamp="2024-02-09 18:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:41:05.642046462 +0000 UTC m=+14.201069985" watchObservedRunningTime="2024-02-09 18:41:07.648402182 +0000 UTC m=+16.207425705" Feb 9 18:41:08.685218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3341698210.mount: Deactivated successfully. Feb 9 18:41:09.276274 env[1138]: time="2024-02-09T18:41:09.276214309Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:41:09.277491 env[1138]: time="2024-02-09T18:41:09.277462124Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:37c457685cef0c53d8641973794ca8ca8b89902c01fd7b52bc718f9b434da459,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:41:09.278999 env[1138]: time="2024-02-09T18:41:09.278957549Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:41:09.281205 env[1138]: time="2024-02-09T18:41:09.281171286Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel@sha256:ec0f0b7430c8370c9f33fe76eb0392c1ad2ddf4ccaf2b9f43995cca6c94d3832,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:41:09.281908 env[1138]: time="2024-02-09T18:41:09.281868877Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2\" returns image reference \"sha256:37c457685cef0c53d8641973794ca8ca8b89902c01fd7b52bc718f9b434da459\"" Feb 9 18:41:09.284933 env[1138]: time="2024-02-09T18:41:09.284894329Z" level=info msg="CreateContainer within sandbox \"86abd57488f3310e1b0136dcf11b2ccff545b6b140403716bdea15bc0c7373e2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 18:41:09.294527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4182479004.mount: Deactivated successfully. 
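The scope deactivation and "shim disconnected" messages for container 0f88673... are expected: install-cni-plugin is the flannel DaemonSet's first init container, which only copies the CNI plugin binary into place and exits, and install-cni (created next, from the v0.20.2 image just pulled) does the same for the CNI network config. A sketch of those init containers as they appear in the stock kube-flannel manifest; the image names and container names match this log, while the commands and paths are the upstream defaults and are assumed rather than taken from the log:

    initContainers:
      - name: install-cni-plugin
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command: ["cp"]
        args: ["-f", "/flannel", "/opt/cni/bin/flannel"]   # copy the CNI binary, then exit
        volumeMounts:
          - name: cni-plugin
            mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
        command: ["cp"]
        args: ["-f", "/etc/kube-flannel/cni-conf.json", "/etc/cni/net.d/10-flannel.conflist"]
        volumeMounts:
          - name: cni
            mountPath: /etc/cni/net.d
          - name: flannel-cfg
            mountPath: /etc/kube-flannel/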
Feb 9 18:41:09.298439 env[1138]: time="2024-02-09T18:41:09.298403320Z" level=info msg="CreateContainer within sandbox \"86abd57488f3310e1b0136dcf11b2ccff545b6b140403716bdea15bc0c7373e2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e4e02bcb9c2b73561b2cc0e68a6390c3bec2f63ed1e1577885b31ae97a0eed4f\"" Feb 9 18:41:09.298740 env[1138]: time="2024-02-09T18:41:09.298716574Z" level=info msg="StartContainer for \"e4e02bcb9c2b73561b2cc0e68a6390c3bec2f63ed1e1577885b31ae97a0eed4f\"" Feb 9 18:41:09.313603 systemd[1]: Started cri-containerd-e4e02bcb9c2b73561b2cc0e68a6390c3bec2f63ed1e1577885b31ae97a0eed4f.scope. Feb 9 18:41:09.356867 env[1138]: time="2024-02-09T18:41:09.356822077Z" level=info msg="StartContainer for \"e4e02bcb9c2b73561b2cc0e68a6390c3bec2f63ed1e1577885b31ae97a0eed4f\" returns successfully" Feb 9 18:41:09.357081 systemd[1]: cri-containerd-e4e02bcb9c2b73561b2cc0e68a6390c3bec2f63ed1e1577885b31ae97a0eed4f.scope: Deactivated successfully. Feb 9 18:41:09.396160 kubelet[1999]: I0209 18:41:09.395316 1999 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 18:41:09.418600 kubelet[1999]: I0209 18:41:09.418470 1999 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:41:09.418600 kubelet[1999]: I0209 18:41:09.418600 1999 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:41:09.425919 systemd[1]: Created slice kubepods-burstable-podf425f34d_b892_401b_84bb_86e16356baf8.slice. Feb 9 18:41:09.434986 systemd[1]: Created slice kubepods-burstable-pod595d89e0_94a6_46d3_bd73_27e07863e574.slice. Feb 9 18:41:09.449214 env[1138]: time="2024-02-09T18:41:09.449157237Z" level=info msg="shim disconnected" id=e4e02bcb9c2b73561b2cc0e68a6390c3bec2f63ed1e1577885b31ae97a0eed4f Feb 9 18:41:09.449214 env[1138]: time="2024-02-09T18:41:09.449208400Z" level=warning msg="cleaning up after shim disconnected" id=e4e02bcb9c2b73561b2cc0e68a6390c3bec2f63ed1e1577885b31ae97a0eed4f namespace=k8s.io Feb 9 18:41:09.449359 env[1138]: time="2024-02-09T18:41:09.449218280Z" level=info msg="cleaning up dead shim" Feb 9 18:41:09.455982 kubelet[1999]: I0209 18:41:09.455949 1999 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f425f34d-b892-401b-84bb-86e16356baf8-config-volume\") pod \"coredns-787d4945fb-tb2sp\" (UID: \"f425f34d-b892-401b-84bb-86e16356baf8\") " pod="kube-system/coredns-787d4945fb-tb2sp" Feb 9 18:41:09.456078 kubelet[1999]: I0209 18:41:09.455993 1999 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/595d89e0-94a6-46d3-bd73-27e07863e574-config-volume\") pod \"coredns-787d4945fb-8rs8w\" (UID: \"595d89e0-94a6-46d3-bd73-27e07863e574\") " pod="kube-system/coredns-787d4945fb-8rs8w" Feb 9 18:41:09.456078 kubelet[1999]: I0209 18:41:09.456021 1999 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw4mz\" (UniqueName: \"kubernetes.io/projected/595d89e0-94a6-46d3-bd73-27e07863e574-kube-api-access-hw4mz\") pod \"coredns-787d4945fb-8rs8w\" (UID: \"595d89e0-94a6-46d3-bd73-27e07863e574\") " pod="kube-system/coredns-787d4945fb-8rs8w" Feb 9 18:41:09.456078 kubelet[1999]: I0209 18:41:09.456043 1999 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqsk5\" (UniqueName: 
\"kubernetes.io/projected/f425f34d-b892-401b-84bb-86e16356baf8-kube-api-access-fqsk5\") pod \"coredns-787d4945fb-tb2sp\" (UID: \"f425f34d-b892-401b-84bb-86e16356baf8\") " pod="kube-system/coredns-787d4945fb-tb2sp" Feb 9 18:41:09.456419 env[1138]: time="2024-02-09T18:41:09.456387114Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:41:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2422 runtime=io.containerd.runc.v2\n" Feb 9 18:41:09.642558 kubelet[1999]: E0209 18:41:09.641500 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:09.647210 env[1138]: time="2024-02-09T18:41:09.647062778Z" level=info msg="CreateContainer within sandbox \"86abd57488f3310e1b0136dcf11b2ccff545b6b140403716bdea15bc0c7373e2\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 9 18:41:09.661257 env[1138]: time="2024-02-09T18:41:09.661206277Z" level=info msg="CreateContainer within sandbox \"86abd57488f3310e1b0136dcf11b2ccff545b6b140403716bdea15bc0c7373e2\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"a1b0ae9fecf3949e932ad83966e508fdd835fbaedaa7300533e16464b7018ad1\"" Feb 9 18:41:09.662906 env[1138]: time="2024-02-09T18:41:09.661753661Z" level=info msg="StartContainer for \"a1b0ae9fecf3949e932ad83966e508fdd835fbaedaa7300533e16464b7018ad1\"" Feb 9 18:41:09.678088 systemd[1]: Started cri-containerd-a1b0ae9fecf3949e932ad83966e508fdd835fbaedaa7300533e16464b7018ad1.scope. Feb 9 18:41:09.712867 env[1138]: time="2024-02-09T18:41:09.712297673Z" level=info msg="StartContainer for \"a1b0ae9fecf3949e932ad83966e508fdd835fbaedaa7300533e16464b7018ad1\" returns successfully" Feb 9 18:41:09.730463 kubelet[1999]: E0209 18:41:09.730088 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:09.730774 env[1138]: time="2024-02-09T18:41:09.730743560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-tb2sp,Uid:f425f34d-b892-401b-84bb-86e16356baf8,Namespace:kube-system,Attempt:0,}" Feb 9 18:41:09.740708 kubelet[1999]: E0209 18:41:09.740356 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:09.740862 env[1138]: time="2024-02-09T18:41:09.740661834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-8rs8w,Uid:595d89e0-94a6-46d3-bd73-27e07863e574,Namespace:kube-system,Attempt:0,}" Feb 9 18:41:09.830103 env[1138]: time="2024-02-09T18:41:09.830038426Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-tb2sp,Uid:f425f34d-b892-401b-84bb-86e16356baf8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"75ad23a5c3e8bac12104582437c5af5d5266d0520e5323e0b1b7dddcf7d2f34f\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 9 18:41:09.830453 kubelet[1999]: E0209 18:41:09.830425 1999 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75ad23a5c3e8bac12104582437c5af5d5266d0520e5323e0b1b7dddcf7d2f34f\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 9 18:41:09.830523 
kubelet[1999]: E0209 18:41:09.830510 1999 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75ad23a5c3e8bac12104582437c5af5d5266d0520e5323e0b1b7dddcf7d2f34f\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-tb2sp" Feb 9 18:41:09.830550 kubelet[1999]: E0209 18:41:09.830533 1999 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75ad23a5c3e8bac12104582437c5af5d5266d0520e5323e0b1b7dddcf7d2f34f\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-tb2sp" Feb 9 18:41:09.830623 kubelet[1999]: E0209 18:41:09.830606 1999 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-tb2sp_kube-system(f425f34d-b892-401b-84bb-86e16356baf8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-tb2sp_kube-system(f425f34d-b892-401b-84bb-86e16356baf8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75ad23a5c3e8bac12104582437c5af5d5266d0520e5323e0b1b7dddcf7d2f34f\\\": plugin type=\\\"flannel\\\" failed (add): open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-787d4945fb-tb2sp" podUID=f425f34d-b892-401b-84bb-86e16356baf8 Feb 9 18:41:09.832366 env[1138]: time="2024-02-09T18:41:09.831341603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-8rs8w,Uid:595d89e0-94a6-46d3-bd73-27e07863e574,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"095b7b4542ce03e18f0be7c32274b693c56ceedccd4b289a17c4365ba2b44d10\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 9 18:41:09.832478 kubelet[1999]: E0209 18:41:09.832213 1999 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"095b7b4542ce03e18f0be7c32274b693c56ceedccd4b289a17c4365ba2b44d10\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 9 18:41:09.832478 kubelet[1999]: E0209 18:41:09.832269 1999 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"095b7b4542ce03e18f0be7c32274b693c56ceedccd4b289a17c4365ba2b44d10\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-8rs8w" Feb 9 18:41:09.832478 kubelet[1999]: E0209 18:41:09.832299 1999 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"095b7b4542ce03e18f0be7c32274b693c56ceedccd4b289a17c4365ba2b44d10\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-8rs8w" Feb 9 18:41:09.832478 kubelet[1999]: E0209 18:41:09.832341 1999 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-8rs8w_kube-system(595d89e0-94a6-46d3-bd73-27e07863e574)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-8rs8w_kube-system(595d89e0-94a6-46d3-bd73-27e07863e574)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"095b7b4542ce03e18f0be7c32274b693c56ceedccd4b289a17c4365ba2b44d10\\\": plugin type=\\\"flannel\\\" failed (add): open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-787d4945fb-8rs8w" podUID=595d89e0-94a6-46d3-bd73-27e07863e574 Feb 9 18:41:10.644534 kubelet[1999]: E0209 18:41:10.644512 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:10.655737 kubelet[1999]: I0209 18:41:10.655695 1999 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-6lv47" podStartSLOduration=-9.223372031199118e+09 pod.CreationTimestamp="2024-02-09 18:41:05 +0000 UTC" firstStartedPulling="2024-02-09 18:41:05.494692928 +0000 UTC m=+14.053716451" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:41:10.65445558 +0000 UTC m=+19.213479103" watchObservedRunningTime="2024-02-09 18:41:10.65565727 +0000 UTC m=+19.214680793" Feb 9 18:41:11.149886 systemd-networkd[1038]: flannel.1: Link UP Feb 9 18:41:11.149892 systemd-networkd[1038]: flannel.1: Gained carrier Feb 9 18:41:11.646289 kubelet[1999]: E0209 18:41:11.646261 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:12.995353 systemd-networkd[1038]: flannel.1: Gained IPv6LL Feb 9 18:41:21.611315 kubelet[1999]: E0209 18:41:21.611278 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:21.612325 env[1138]: time="2024-02-09T18:41:21.612274446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-tb2sp,Uid:f425f34d-b892-401b-84bb-86e16356baf8,Namespace:kube-system,Attempt:0,}" Feb 9 18:41:21.644508 systemd-networkd[1038]: cni0: Link UP Feb 9 18:41:21.644518 systemd-networkd[1038]: cni0: Gained carrier Feb 9 18:41:21.645731 systemd-networkd[1038]: cni0: Lost carrier Feb 9 18:41:21.649609 systemd-networkd[1038]: veth7faf1251: Link UP Feb 9 18:41:21.651393 kernel: cni0: port 1(veth7faf1251) entered blocking state Feb 9 18:41:21.651498 kernel: cni0: port 1(veth7faf1251) entered disabled state Feb 9 18:41:21.653028 kernel: device veth7faf1251 entered promiscuous mode Feb 9 18:41:21.653079 kernel: cni0: port 1(veth7faf1251) entered blocking state Feb 9 18:41:21.653098 kernel: cni0: port 1(veth7faf1251) entered forwarding state Feb 9 18:41:21.656260 kernel: cni0: port 1(veth7faf1251) entered disabled state Feb 9 18:41:21.665715 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth7faf1251: link becomes ready Feb 9 18:41:21.665781 kernel: cni0: port 1(veth7faf1251) entered blocking state Feb 9 18:41:21.665801 kernel: cni0: port 1(veth7faf1251) entered forwarding state Feb 9 18:41:21.666237 systemd-networkd[1038]: veth7faf1251: Gained carrier Feb 9 18:41:21.666425 systemd-networkd[1038]: cni0: Gained carrier Feb 9 18:41:21.669455 env[1138]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, 
"type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400001a928), "name":"cbr0", "type":"bridge"} Feb 9 18:41:21.678975 env[1138]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-09T18:41:21.678918999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:41:21.679844 env[1138]: time="2024-02-09T18:41:21.678967280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:41:21.679844 env[1138]: time="2024-02-09T18:41:21.678985920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:41:21.679844 env[1138]: time="2024-02-09T18:41:21.679134924Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d4bba148063833abc8891464147802f492c4f6ebc284a755313a653db28a1e7c pid=2681 runtime=io.containerd.runc.v2 Feb 9 18:41:21.700496 systemd[1]: Started cri-containerd-d4bba148063833abc8891464147802f492c4f6ebc284a755313a653db28a1e7c.scope. Feb 9 18:41:21.720792 systemd-resolved[1083]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:41:21.736510 env[1138]: time="2024-02-09T18:41:21.736467632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-tb2sp,Uid:f425f34d-b892-401b-84bb-86e16356baf8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4bba148063833abc8891464147802f492c4f6ebc284a755313a653db28a1e7c\"" Feb 9 18:41:21.737549 kubelet[1999]: E0209 18:41:21.737240 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:21.741302 env[1138]: time="2024-02-09T18:41:21.741060953Z" level=info msg="CreateContainer within sandbox \"d4bba148063833abc8891464147802f492c4f6ebc284a755313a653db28a1e7c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 18:41:21.749819 env[1138]: time="2024-02-09T18:41:21.749773902Z" level=info msg="CreateContainer within sandbox \"d4bba148063833abc8891464147802f492c4f6ebc284a755313a653db28a1e7c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e78129a93d6885b7678101d42001594e255c9aa6fe2197340d97512838dbf7d9\"" Feb 9 18:41:21.750373 env[1138]: time="2024-02-09T18:41:21.750337037Z" level=info msg="StartContainer for \"e78129a93d6885b7678101d42001594e255c9aa6fe2197340d97512838dbf7d9\"" Feb 9 18:41:21.764009 systemd[1]: Started cri-containerd-e78129a93d6885b7678101d42001594e255c9aa6fe2197340d97512838dbf7d9.scope. 
Feb 9 18:41:21.810908 env[1138]: time="2024-02-09T18:41:21.810864989Z" level=info msg="StartContainer for \"e78129a93d6885b7678101d42001594e255c9aa6fe2197340d97512838dbf7d9\" returns successfully" Feb 9 18:41:22.611275 kubelet[1999]: E0209 18:41:22.611213 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:22.611676 env[1138]: time="2024-02-09T18:41:22.611642243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-8rs8w,Uid:595d89e0-94a6-46d3-bd73-27e07863e574,Namespace:kube-system,Attempt:0,}" Feb 9 18:41:22.625077 systemd[1]: run-containerd-runc-k8s.io-d4bba148063833abc8891464147802f492c4f6ebc284a755313a653db28a1e7c-runc.XlJxRM.mount: Deactivated successfully. Feb 9 18:41:22.626751 systemd-networkd[1038]: vethe4c066bd: Link UP Feb 9 18:41:22.628394 kernel: cni0: port 2(vethe4c066bd) entered blocking state Feb 9 18:41:22.628468 kernel: cni0: port 2(vethe4c066bd) entered disabled state Feb 9 18:41:22.629515 kernel: device vethe4c066bd entered promiscuous mode Feb 9 18:41:22.629557 kernel: cni0: port 2(vethe4c066bd) entered blocking state Feb 9 18:41:22.629573 kernel: cni0: port 2(vethe4c066bd) entered forwarding state Feb 9 18:41:22.634912 systemd-networkd[1038]: vethe4c066bd: Gained carrier Feb 9 18:41:22.635246 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethe4c066bd: link becomes ready Feb 9 18:41:22.636545 env[1138]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400001e928), "name":"cbr0", "type":"bridge"} Feb 9 18:41:22.645294 env[1138]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-09T18:41:22.645195494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:41:22.645294 env[1138]: time="2024-02-09T18:41:22.645257816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:41:22.645294 env[1138]: time="2024-02-09T18:41:22.645268696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:41:22.645596 env[1138]: time="2024-02-09T18:41:22.645557864Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/66fadf5ae2cfc823daf178dac226f6d688daf1d774c4ad95c42412d1bb794057 pid=2810 runtime=io.containerd.runc.v2 Feb 9 18:41:22.657399 systemd[1]: Started cri-containerd-66fadf5ae2cfc823daf178dac226f6d688daf1d774c4ad95c42412d1bb794057.scope. 
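The single-line JSON blobs emitted by env[1138] above are the bridge configuration that flannel generates and delegates to for each veth; reformatted here for readability, with content identical to the logged string:

    {
      "cniVersion": "0.3.1",
      "name": "cbr0",
      "type": "bridge",
      "isDefaultGateway": true,
      "isGateway": true,
      "hairpinMode": true,
      "ipMasq": false,
      "mtu": 1450,
      "ipam": {
        "type": "host-local",
        "ranges": [[ { "subnet": "192.168.0.0/24" } ]],
        "routes": [ { "dst": "192.168.0.0/17" } ]
      }
    }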
Feb 9 18:41:22.665822 kubelet[1999]: E0209 18:41:22.664529 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:22.695611 kubelet[1999]: I0209 18:41:22.695580 1999 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-tb2sp" podStartSLOduration=17.695545692 pod.CreationTimestamp="2024-02-09 18:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:41:22.681978187 +0000 UTC m=+31.241001750" watchObservedRunningTime="2024-02-09 18:41:22.695545692 +0000 UTC m=+31.254569175" Feb 9 18:41:22.697650 systemd-resolved[1083]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:41:22.726971 env[1138]: time="2024-02-09T18:41:22.726917888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-8rs8w,Uid:595d89e0-94a6-46d3-bd73-27e07863e574,Namespace:kube-system,Attempt:0,} returns sandbox id \"66fadf5ae2cfc823daf178dac226f6d688daf1d774c4ad95c42412d1bb794057\"" Feb 9 18:41:22.727836 kubelet[1999]: E0209 18:41:22.727814 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:22.730072 env[1138]: time="2024-02-09T18:41:22.730028366Z" level=info msg="CreateContainer within sandbox \"66fadf5ae2cfc823daf178dac226f6d688daf1d774c4ad95c42412d1bb794057\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 18:41:22.742186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1434433676.mount: Deactivated successfully. Feb 9 18:41:22.746326 env[1138]: time="2024-02-09T18:41:22.746285579Z" level=info msg="CreateContainer within sandbox \"66fadf5ae2cfc823daf178dac226f6d688daf1d774c4ad95c42412d1bb794057\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"757d0f214105be89516ef73b493d578d9ec315f21783255e7a631c3d70c6dd25\"" Feb 9 18:41:22.746860 env[1138]: time="2024-02-09T18:41:22.746831233Z" level=info msg="StartContainer for \"757d0f214105be89516ef73b493d578d9ec315f21783255e7a631c3d70c6dd25\"" Feb 9 18:41:22.761646 systemd[1]: Started cri-containerd-757d0f214105be89516ef73b493d578d9ec315f21783255e7a631c3d70c6dd25.scope. 
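The earlier RunPodSandbox failures for the coredns pods ("open /run/flannel/subnet.env: no such file or directory") cleared once the kube-flannel container came up, brought up flannel.1, and wrote its subnet file; the retried sandboxes above then succeeded. For reference, a sketch of what flannel writes to /run/flannel/subnet.env on this node, with the subnet and MTU taken from the delegate config logged above and the network and ipmasq values assumed rather than shown in the log:

    # /run/flannel/subnet.env -- written by the kube-flannel container;
    # the flannel CNI plugin cannot set up pod networking until it exists.
    FLANNEL_NETWORK=192.168.0.0/17   # cluster pod network (assumed from the 192.168.0.0/17 route above)
    FLANNEL_SUBNET=192.168.0.1/24    # this node's pod subnet (podCIDR 192.168.0.0/24 per the kubelet log)
    FLANNEL_MTU=1450                 # matches the mtu in the generated bridge config
    FLANNEL_IPMASQ=true              # assumed default; not visible in this log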
Feb 9 18:41:22.793109 env[1138]: time="2024-02-09T18:41:22.793069406Z" level=info msg="StartContainer for \"757d0f214105be89516ef73b493d578d9ec315f21783255e7a631c3d70c6dd25\" returns successfully" Feb 9 18:41:23.107454 systemd-networkd[1038]: cni0: Gained IPv6LL Feb 9 18:41:23.555401 systemd-networkd[1038]: veth7faf1251: Gained IPv6LL Feb 9 18:41:23.667554 kubelet[1999]: E0209 18:41:23.667520 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:23.669216 kubelet[1999]: E0209 18:41:23.669202 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:23.679809 kubelet[1999]: I0209 18:41:23.679766 1999 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-8rs8w" podStartSLOduration=18.679736748 pod.CreationTimestamp="2024-02-09 18:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:41:23.675981376 +0000 UTC m=+32.235004899" watchObservedRunningTime="2024-02-09 18:41:23.679736748 +0000 UTC m=+32.238760271" Feb 9 18:41:24.067448 systemd-networkd[1038]: vethe4c066bd: Gained IPv6LL Feb 9 18:41:24.669673 kubelet[1999]: E0209 18:41:24.669645 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:24.670012 kubelet[1999]: E0209 18:41:24.669657 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:41:29.587423 systemd[1]: Started sshd@5-10.0.0.114:22-10.0.0.1:41256.service. Feb 9 18:41:29.630920 sshd[2979]: Accepted publickey for core from 10.0.0.1 port 41256 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:41:29.632163 sshd[2979]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:41:29.635217 systemd-logind[1127]: New session 6 of user core. Feb 9 18:41:29.636204 systemd[1]: Started session-6.scope. Feb 9 18:41:29.761902 sshd[2979]: pam_unix(sshd:session): session closed for user core Feb 9 18:41:29.764365 systemd[1]: sshd@5-10.0.0.114:22-10.0.0.1:41256.service: Deactivated successfully. Feb 9 18:41:29.765069 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 18:41:29.765800 systemd-logind[1127]: Session 6 logged out. Waiting for processes to exit. Feb 9 18:41:29.766424 systemd-logind[1127]: Removed session 6. Feb 9 18:41:34.766577 systemd[1]: Started sshd@6-10.0.0.114:22-10.0.0.1:59574.service. Feb 9 18:41:34.810388 sshd[3012]: Accepted publickey for core from 10.0.0.1 port 59574 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:41:34.811811 sshd[3012]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:41:34.815433 systemd-logind[1127]: New session 7 of user core. Feb 9 18:41:34.816557 systemd[1]: Started session-7.scope. Feb 9 18:41:34.923535 sshd[3012]: pam_unix(sshd:session): session closed for user core Feb 9 18:41:34.925939 systemd[1]: sshd@6-10.0.0.114:22-10.0.0.1:59574.service: Deactivated successfully. Feb 9 18:41:34.926682 systemd[1]: session-7.scope: Deactivated successfully. 
Feb 9 18:41:34.927193 systemd-logind[1127]: Session 7 logged out. Waiting for processes to exit. Feb 9 18:41:34.927827 systemd-logind[1127]: Removed session 7. Feb 9 18:41:39.928439 systemd[1]: Started sshd@7-10.0.0.114:22-10.0.0.1:59580.service. Feb 9 18:41:39.973559 sshd[3046]: Accepted publickey for core from 10.0.0.1 port 59580 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:41:39.974992 sshd[3046]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:41:39.978682 systemd-logind[1127]: New session 8 of user core. Feb 9 18:41:39.979121 systemd[1]: Started session-8.scope. Feb 9 18:41:40.098964 sshd[3046]: pam_unix(sshd:session): session closed for user core Feb 9 18:41:40.101735 systemd[1]: sshd@7-10.0.0.114:22-10.0.0.1:59580.service: Deactivated successfully. Feb 9 18:41:40.102357 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 18:41:40.102968 systemd-logind[1127]: Session 8 logged out. Waiting for processes to exit. Feb 9 18:41:40.104427 systemd[1]: Started sshd@8-10.0.0.114:22-10.0.0.1:59594.service. Feb 9 18:41:40.105104 systemd-logind[1127]: Removed session 8. Feb 9 18:41:40.147311 sshd[3060]: Accepted publickey for core from 10.0.0.1 port 59594 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:41:40.148465 sshd[3060]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:41:40.151745 systemd-logind[1127]: New session 9 of user core. Feb 9 18:41:40.152582 systemd[1]: Started session-9.scope. Feb 9 18:41:40.357723 sshd[3060]: pam_unix(sshd:session): session closed for user core Feb 9 18:41:40.359824 systemd[1]: Started sshd@9-10.0.0.114:22-10.0.0.1:59606.service. Feb 9 18:41:40.364448 systemd-logind[1127]: Session 9 logged out. Waiting for processes to exit. Feb 9 18:41:40.364495 systemd[1]: sshd@8-10.0.0.114:22-10.0.0.1:59594.service: Deactivated successfully. Feb 9 18:41:40.365201 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 18:41:40.371300 systemd-logind[1127]: Removed session 9. Feb 9 18:41:40.408784 sshd[3071]: Accepted publickey for core from 10.0.0.1 port 59606 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:41:40.409960 sshd[3071]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:41:40.414244 systemd-logind[1127]: New session 10 of user core. Feb 9 18:41:40.415463 systemd[1]: Started session-10.scope. Feb 9 18:41:40.524634 sshd[3071]: pam_unix(sshd:session): session closed for user core Feb 9 18:41:40.527347 systemd[1]: sshd@9-10.0.0.114:22-10.0.0.1:59606.service: Deactivated successfully. Feb 9 18:41:40.528075 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 18:41:40.529029 systemd-logind[1127]: Session 10 logged out. Waiting for processes to exit. Feb 9 18:41:40.529819 systemd-logind[1127]: Removed session 10. Feb 9 18:41:45.538670 systemd[1]: Started sshd@10-10.0.0.114:22-10.0.0.1:52566.service. Feb 9 18:41:45.581499 sshd[3107]: Accepted publickey for core from 10.0.0.1 port 52566 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:41:45.582647 sshd[3107]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:41:45.586237 systemd-logind[1127]: New session 11 of user core. Feb 9 18:41:45.586807 systemd[1]: Started session-11.scope. Feb 9 18:41:45.695003 sshd[3107]: pam_unix(sshd:session): session closed for user core Feb 9 18:41:45.697483 systemd[1]: sshd@10-10.0.0.114:22-10.0.0.1:52566.service: Deactivated successfully. 
Feb 9 18:41:45.698246 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 18:41:45.698793 systemd-logind[1127]: Session 11 logged out. Waiting for processes to exit. Feb 9 18:41:45.699515 systemd-logind[1127]: Removed session 11. Feb 9 18:41:50.699695 systemd[1]: Started sshd@11-10.0.0.114:22-10.0.0.1:52568.service. Feb 9 18:41:50.742064 sshd[3138]: Accepted publickey for core from 10.0.0.1 port 52568 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:41:50.743210 sshd[3138]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:41:50.746345 systemd-logind[1127]: New session 12 of user core. Feb 9 18:41:50.747281 systemd[1]: Started session-12.scope. Feb 9 18:41:50.856280 sshd[3138]: pam_unix(sshd:session): session closed for user core Feb 9 18:41:50.858577 systemd[1]: sshd@11-10.0.0.114:22-10.0.0.1:52568.service: Deactivated successfully. Feb 9 18:41:50.859334 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 18:41:50.859874 systemd-logind[1127]: Session 12 logged out. Waiting for processes to exit. Feb 9 18:41:50.860595 systemd-logind[1127]: Removed session 12. Feb 9 18:41:55.861307 systemd[1]: Started sshd@12-10.0.0.114:22-10.0.0.1:54210.service. Feb 9 18:41:55.903761 sshd[3173]: Accepted publickey for core from 10.0.0.1 port 54210 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:41:55.905215 sshd[3173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:41:55.908245 systemd-logind[1127]: New session 13 of user core. Feb 9 18:41:55.909070 systemd[1]: Started session-13.scope. Feb 9 18:41:56.018159 sshd[3173]: pam_unix(sshd:session): session closed for user core Feb 9 18:41:56.020703 systemd[1]: sshd@12-10.0.0.114:22-10.0.0.1:54210.service: Deactivated successfully. Feb 9 18:41:56.021463 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 18:41:56.022160 systemd-logind[1127]: Session 13 logged out. Waiting for processes to exit. Feb 9 18:41:56.022928 systemd-logind[1127]: Removed session 13. Feb 9 18:42:01.022758 systemd[1]: Started sshd@13-10.0.0.114:22-10.0.0.1:54212.service. Feb 9 18:42:01.065682 sshd[3205]: Accepted publickey for core from 10.0.0.1 port 54212 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:42:01.066847 sshd[3205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:42:01.069962 systemd-logind[1127]: New session 14 of user core. Feb 9 18:42:01.070922 systemd[1]: Started session-14.scope. Feb 9 18:42:01.176384 sshd[3205]: pam_unix(sshd:session): session closed for user core Feb 9 18:42:01.180373 systemd[1]: Started sshd@14-10.0.0.114:22-10.0.0.1:54220.service. Feb 9 18:42:01.180905 systemd[1]: sshd@13-10.0.0.114:22-10.0.0.1:54212.service: Deactivated successfully. Feb 9 18:42:01.182010 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 18:42:01.182703 systemd-logind[1127]: Session 14 logged out. Waiting for processes to exit. Feb 9 18:42:01.183492 systemd-logind[1127]: Removed session 14. Feb 9 18:42:01.224216 sshd[3217]: Accepted publickey for core from 10.0.0.1 port 54220 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:42:01.225408 sshd[3217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:42:01.228456 systemd-logind[1127]: New session 15 of user core. Feb 9 18:42:01.229618 systemd[1]: Started session-15.scope. 
Feb 9 18:42:01.388667 sshd[3217]: pam_unix(sshd:session): session closed for user core Feb 9 18:42:01.392570 systemd[1]: Started sshd@15-10.0.0.114:22-10.0.0.1:54226.service. Feb 9 18:42:01.393304 systemd[1]: sshd@14-10.0.0.114:22-10.0.0.1:54220.service: Deactivated successfully. Feb 9 18:42:01.394310 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 18:42:01.395033 systemd-logind[1127]: Session 15 logged out. Waiting for processes to exit. Feb 9 18:42:01.395770 systemd-logind[1127]: Removed session 15. Feb 9 18:42:01.436708 sshd[3229]: Accepted publickey for core from 10.0.0.1 port 54226 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:42:01.438136 sshd[3229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:42:01.441456 systemd-logind[1127]: New session 16 of user core. Feb 9 18:42:01.442602 systemd[1]: Started session-16.scope. Feb 9 18:42:02.221767 sshd[3229]: pam_unix(sshd:session): session closed for user core Feb 9 18:42:02.225694 systemd[1]: Started sshd@16-10.0.0.114:22-10.0.0.1:54242.service. Feb 9 18:42:02.226198 systemd[1]: sshd@15-10.0.0.114:22-10.0.0.1:54226.service: Deactivated successfully. Feb 9 18:42:02.227053 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 18:42:02.227702 systemd-logind[1127]: Session 16 logged out. Waiting for processes to exit. Feb 9 18:42:02.228768 systemd-logind[1127]: Removed session 16. Feb 9 18:42:02.274826 sshd[3259]: Accepted publickey for core from 10.0.0.1 port 54242 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:42:02.276312 sshd[3259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:42:02.279725 systemd-logind[1127]: New session 17 of user core. Feb 9 18:42:02.280643 systemd[1]: Started session-17.scope. Feb 9 18:42:02.452205 sshd[3259]: pam_unix(sshd:session): session closed for user core Feb 9 18:42:02.455996 systemd[1]: Started sshd@17-10.0.0.114:22-10.0.0.1:54244.service. Feb 9 18:42:02.458072 systemd-logind[1127]: Session 17 logged out. Waiting for processes to exit. Feb 9 18:42:02.458525 systemd[1]: sshd@16-10.0.0.114:22-10.0.0.1:54242.service: Deactivated successfully. Feb 9 18:42:02.459308 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 18:42:02.460026 systemd-logind[1127]: Removed session 17. Feb 9 18:42:02.500342 sshd[3314]: Accepted publickey for core from 10.0.0.1 port 54244 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:42:02.501575 sshd[3314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:42:02.505455 systemd-logind[1127]: New session 18 of user core. Feb 9 18:42:02.505967 systemd[1]: Started session-18.scope. Feb 9 18:42:02.610772 sshd[3314]: pam_unix(sshd:session): session closed for user core Feb 9 18:42:02.613965 systemd[1]: sshd@17-10.0.0.114:22-10.0.0.1:54244.service: Deactivated successfully. Feb 9 18:42:02.614685 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 18:42:02.615297 systemd-logind[1127]: Session 18 logged out. Waiting for processes to exit. Feb 9 18:42:02.616008 systemd-logind[1127]: Removed session 18. Feb 9 18:42:07.615300 systemd[1]: Started sshd@18-10.0.0.114:22-10.0.0.1:37160.service. 
Feb 9 18:42:07.657932 sshd[3376]: Accepted publickey for core from 10.0.0.1 port 37160 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:42:07.659022 sshd[3376]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:42:07.662825 systemd-logind[1127]: New session 19 of user core. Feb 9 18:42:07.663271 systemd[1]: Started session-19.scope. Feb 9 18:42:07.769886 sshd[3376]: pam_unix(sshd:session): session closed for user core Feb 9 18:42:07.772284 systemd[1]: sshd@18-10.0.0.114:22-10.0.0.1:37160.service: Deactivated successfully. Feb 9 18:42:07.773071 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 18:42:07.773596 systemd-logind[1127]: Session 19 logged out. Waiting for processes to exit. Feb 9 18:42:07.774277 systemd-logind[1127]: Removed session 19. Feb 9 18:42:12.775290 systemd[1]: Started sshd@19-10.0.0.114:22-10.0.0.1:49616.service. Feb 9 18:42:12.817056 sshd[3407]: Accepted publickey for core from 10.0.0.1 port 49616 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:42:12.818165 sshd[3407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:42:12.823547 systemd[1]: Started session-20.scope. Feb 9 18:42:12.824020 systemd-logind[1127]: New session 20 of user core. Feb 9 18:42:12.932757 sshd[3407]: pam_unix(sshd:session): session closed for user core Feb 9 18:42:12.935273 systemd[1]: sshd@19-10.0.0.114:22-10.0.0.1:49616.service: Deactivated successfully. Feb 9 18:42:12.936147 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 18:42:12.937533 systemd-logind[1127]: Session 20 logged out. Waiting for processes to exit. Feb 9 18:42:12.938507 systemd-logind[1127]: Removed session 20. Feb 9 18:42:17.937406 systemd[1]: Started sshd@20-10.0.0.114:22-10.0.0.1:49632.service. Feb 9 18:42:17.980817 sshd[3450]: Accepted publickey for core from 10.0.0.1 port 49632 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:42:17.981170 sshd[3450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:42:17.985697 systemd-logind[1127]: New session 21 of user core. Feb 9 18:42:17.987466 systemd[1]: Started session-21.scope. Feb 9 18:42:18.093793 sshd[3450]: pam_unix(sshd:session): session closed for user core Feb 9 18:42:18.096278 systemd-logind[1127]: Session 21 logged out. Waiting for processes to exit. Feb 9 18:42:18.096471 systemd[1]: sshd@20-10.0.0.114:22-10.0.0.1:49632.service: Deactivated successfully. Feb 9 18:42:18.097256 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 18:42:18.098126 systemd-logind[1127]: Removed session 21. Feb 9 18:42:18.610939 kubelet[1999]: E0209 18:42:18.610910 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:19.611266 kubelet[1999]: E0209 18:42:19.611218 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:22.611700 kubelet[1999]: E0209 18:42:22.611667 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:42:23.115589 systemd[1]: Started sshd@21-10.0.0.114:22-10.0.0.1:37020.service. 
Feb 9 18:42:23.158249 sshd[3482]: Accepted publickey for core from 10.0.0.1 port 37020 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:42:23.159376 sshd[3482]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:42:23.163724 systemd-logind[1127]: New session 22 of user core. Feb 9 18:42:23.164786 systemd[1]: Started session-22.scope. Feb 9 18:42:23.270267 sshd[3482]: pam_unix(sshd:session): session closed for user core Feb 9 18:42:23.272807 systemd[1]: sshd@21-10.0.0.114:22-10.0.0.1:37020.service: Deactivated successfully. Feb 9 18:42:23.273639 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 18:42:23.274142 systemd-logind[1127]: Session 22 logged out. Waiting for processes to exit. Feb 9 18:42:23.274893 systemd-logind[1127]: Removed session 22. Feb 9 18:42:23.611702 kubelet[1999]: E0209 18:42:23.611662 1999 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"