Feb 9 18:23:09.730458 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 18:23:09.730478 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 17:24:35 -00 2024
Feb 9 18:23:09.730486 kernel: efi: EFI v2.70 by EDK II
Feb 9 18:23:09.730492 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Feb 9 18:23:09.730498 kernel: random: crng init done
Feb 9 18:23:09.730503 kernel: ACPI: Early table checksum verification disabled
Feb 9 18:23:09.730510 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Feb 9 18:23:09.730517 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 9 18:23:09.730523 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:23:09.730528 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:23:09.730534 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:23:09.730540 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:23:09.730545 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:23:09.730551 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:23:09.730559 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:23:09.730565 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:23:09.730571 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:23:09.730577 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 9 18:23:09.730583 kernel: NUMA: Failed to initialise from firmware
Feb 9 18:23:09.730589 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 18:23:09.730595 kernel: NUMA: NODE_DATA [mem 0xdcb09900-0xdcb0efff]
Feb 9 18:23:09.730601 kernel: Zone ranges:
Feb 9 18:23:09.730607 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 18:23:09.730614 kernel: DMA32 empty
Feb 9 18:23:09.730620 kernel: Normal empty
Feb 9 18:23:09.730626 kernel: Movable zone start for each node
Feb 9 18:23:09.730632 kernel: Early memory node ranges
Feb 9 18:23:09.730638 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Feb 9 18:23:09.730644 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Feb 9 18:23:09.730650 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Feb 9 18:23:09.730656 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Feb 9 18:23:09.730662 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Feb 9 18:23:09.730668 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Feb 9 18:23:09.730674 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Feb 9 18:23:09.730680 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 18:23:09.730687 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 9 18:23:09.730693 kernel: psci: probing for conduit method from ACPI.
Feb 9 18:23:09.730699 kernel: psci: PSCIv1.1 detected in firmware.
Feb 9 18:23:09.730705 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 18:23:09.730711 kernel: psci: Trusted OS migration not required
Feb 9 18:23:09.730719 kernel: psci: SMC Calling Convention v1.1
Feb 9 18:23:09.730726 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 9 18:23:09.730734 kernel: ACPI: SRAT not present
Feb 9 18:23:09.730740 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 18:23:09.730747 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 18:23:09.730753 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 9 18:23:09.730760 kernel: Detected PIPT I-cache on CPU0
Feb 9 18:23:09.730766 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 18:23:09.730772 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 18:23:09.730779 kernel: CPU features: detected: Spectre-v4
Feb 9 18:23:09.730785 kernel: CPU features: detected: Spectre-BHB
Feb 9 18:23:09.730792 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 18:23:09.730799 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 18:23:09.730806 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 18:23:09.730812 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 9 18:23:09.730818 kernel: Policy zone: DMA
Feb 9 18:23:09.730826 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 18:23:09.730833 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 18:23:09.730849 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 18:23:09.730856 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 18:23:09.730862 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 18:23:09.730869 kernel: Memory: 2459144K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113144K reserved, 0K cma-reserved)
Feb 9 18:23:09.730877 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 9 18:23:09.730883 kernel: trace event string verifier disabled
Feb 9 18:23:09.730890 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 18:23:09.730897 kernel: rcu: RCU event tracing is enabled.
Feb 9 18:23:09.730904 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 9 18:23:09.730910 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 18:23:09.730917 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 18:23:09.730923 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 18:23:09.730930 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 9 18:23:09.730936 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 18:23:09.730942 kernel: GICv3: 256 SPIs implemented
Feb 9 18:23:09.730950 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 18:23:09.730962 kernel: GICv3: Distributor has no Range Selector support
Feb 9 18:23:09.730969 kernel: Root IRQ handler: gic_handle_irq
Feb 9 18:23:09.730975 kernel: GICv3: 16 PPIs implemented
Feb 9 18:23:09.730981 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 9 18:23:09.730988 kernel: ACPI: SRAT not present
Feb 9 18:23:09.730994 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 9 18:23:09.731001 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 18:23:09.731007 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 18:23:09.731014 kernel: GICv3: using LPI property table @0x00000000400d0000
Feb 9 18:23:09.731020 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Feb 9 18:23:09.731026 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:23:09.731034 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 18:23:09.731041 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 18:23:09.731047 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 18:23:09.731054 kernel: arm-pv: using stolen time PV
Feb 9 18:23:09.731061 kernel: Console: colour dummy device 80x25
Feb 9 18:23:09.731067 kernel: ACPI: Core revision 20210730
Feb 9 18:23:09.731074 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 18:23:09.731080 kernel: pid_max: default: 32768 minimum: 301
Feb 9 18:23:09.731087 kernel: LSM: Security Framework initializing
Feb 9 18:23:09.731094 kernel: SELinux: Initializing.
Feb 9 18:23:09.731101 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 18:23:09.731108 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 18:23:09.731115 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 18:23:09.731121 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 9 18:23:09.731128 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 9 18:23:09.731134 kernel: Remapping and enabling EFI services.
Feb 9 18:23:09.731141 kernel: smp: Bringing up secondary CPUs ...
Feb 9 18:23:09.731147 kernel: Detected PIPT I-cache on CPU1
Feb 9 18:23:09.731154 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 9 18:23:09.731161 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Feb 9 18:23:09.731168 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:23:09.731175 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 18:23:09.731181 kernel: Detected PIPT I-cache on CPU2
Feb 9 18:23:09.731188 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 9 18:23:09.731195 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Feb 9 18:23:09.731201 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:23:09.731208 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 9 18:23:09.731214 kernel: Detected PIPT I-cache on CPU3
Feb 9 18:23:09.731221 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 9 18:23:09.731228 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Feb 9 18:23:09.731235 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:23:09.731241 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 9 18:23:09.731248 kernel: smp: Brought up 1 node, 4 CPUs
Feb 9 18:23:09.731258 kernel: SMP: Total of 4 processors activated.
Feb 9 18:23:09.731266 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 18:23:09.731273 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 18:23:09.731280 kernel: CPU features: detected: Common not Private translations
Feb 9 18:23:09.731287 kernel: CPU features: detected: CRC32 instructions
Feb 9 18:23:09.731294 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 18:23:09.731301 kernel: CPU features: detected: LSE atomic instructions
Feb 9 18:23:09.731308 kernel: CPU features: detected: Privileged Access Never
Feb 9 18:23:09.731316 kernel: CPU features: detected: RAS Extension Support
Feb 9 18:23:09.731323 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 9 18:23:09.731329 kernel: CPU: All CPU(s) started at EL1
Feb 9 18:23:09.731336 kernel: alternatives: patching kernel code
Feb 9 18:23:09.731343 kernel: devtmpfs: initialized
Feb 9 18:23:09.731351 kernel: KASLR enabled
Feb 9 18:23:09.731358 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 18:23:09.731365 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 9 18:23:09.731372 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 18:23:09.731378 kernel: SMBIOS 3.0.0 present.
Feb 9 18:23:09.731385 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Feb 9 18:23:09.731392 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 18:23:09.731399 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 18:23:09.731406 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 18:23:09.731415 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 18:23:09.731421 kernel: audit: initializing netlink subsys (disabled)
Feb 9 18:23:09.731428 kernel: audit: type=2000 audit(0.030:1): state=initialized audit_enabled=0 res=1
Feb 9 18:23:09.731435 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 18:23:09.731442 kernel: cpuidle: using governor menu
Feb 9 18:23:09.731452 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 18:23:09.731459 kernel: ASID allocator initialised with 32768 entries
Feb 9 18:23:09.731466 kernel: ACPI: bus type PCI registered
Feb 9 18:23:09.731473 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 18:23:09.731481 kernel: Serial: AMBA PL011 UART driver
Feb 9 18:23:09.731488 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 18:23:09.731495 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 18:23:09.731501 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 18:23:09.731508 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 18:23:09.731515 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 18:23:09.731522 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 18:23:09.731529 kernel: ACPI: Added _OSI(Module Device)
Feb 9 18:23:09.731536 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 18:23:09.731544 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 18:23:09.731551 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 18:23:09.731558 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 18:23:09.731565 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 18:23:09.731572 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 18:23:09.731579 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 18:23:09.731587 kernel: ACPI: Interpreter enabled
Feb 9 18:23:09.731593 kernel: ACPI: Using GIC for interrupt routing
Feb 9 18:23:09.731600 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 18:23:09.731609 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 18:23:09.731615 kernel: printk: console [ttyAMA0] enabled
Feb 9 18:23:09.731622 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 18:23:09.731754 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 18:23:09.731821 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 18:23:09.731894 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 18:23:09.732040 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 9 18:23:09.732156 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 9 18:23:09.732168 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 9 18:23:09.732175 kernel: PCI host bridge to bus 0000:00
Feb 9 18:23:09.732248 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 9 18:23:09.732307 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 18:23:09.732363 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 9 18:23:09.732420 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 18:23:09.732498 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 9 18:23:09.732569 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 18:23:09.732635 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 9 18:23:09.732698 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 9 18:23:09.732763 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 18:23:09.736610 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 18:23:09.737741 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 9 18:23:09.737859 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 9 18:23:09.738015 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 9 18:23:09.738110 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 18:23:09.738754 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 9 18:23:09.738773 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 18:23:09.738781 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 18:23:09.738788 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 18:23:09.738800 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 18:23:09.738808 kernel: iommu: Default domain type: Translated
Feb 9 18:23:09.738815 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 18:23:09.738822 kernel: vgaarb: loaded
Feb 9 18:23:09.738829 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 18:23:09.738879 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 18:23:09.738887 kernel: PTP clock support registered
Feb 9 18:23:09.738895 kernel: Registered efivars operations
Feb 9 18:23:09.738902 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 18:23:09.738909 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 18:23:09.738918 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 18:23:09.738925 kernel: pnp: PnP ACPI init
Feb 9 18:23:09.739025 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 9 18:23:09.739060 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 18:23:09.739068 kernel: NET: Registered PF_INET protocol family
Feb 9 18:23:09.739075 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 18:23:09.739083 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 18:23:09.739090 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 18:23:09.739100 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 18:23:09.739107 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 18:23:09.739115 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 18:23:09.739122 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:23:09.739129 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:23:09.739136 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 18:23:09.739162 kernel: PCI: CLS 0 bytes, default 64
Feb 9 18:23:09.739532 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 9 18:23:09.739543 kernel: kvm [1]: HYP mode not available
Feb 9 18:23:09.739553 kernel: Initialise system trusted keyrings
Feb 9 18:23:09.739561 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 18:23:09.739568 kernel: Key type asymmetric registered
Feb 9 18:23:09.739574 kernel: Asymmetric key parser 'x509' registered
Feb 9 18:23:09.739581 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 18:23:09.739589 kernel: io scheduler mq-deadline registered
Feb 9 18:23:09.739596 kernel: io scheduler kyber registered
Feb 9 18:23:09.739602 kernel: io scheduler bfq registered
Feb 9 18:23:09.739609 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 18:23:09.739618 kernel: ACPI: button: Power Button [PWRB]
Feb 9 18:23:09.739625 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 9 18:23:09.739717 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 9 18:23:09.739727 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 18:23:09.739735 kernel: thunder_xcv, ver 1.0
Feb 9 18:23:09.739742 kernel: thunder_bgx, ver 1.0
Feb 9 18:23:09.739749 kernel: nicpf, ver 1.0
Feb 9 18:23:09.739756 kernel: nicvf, ver 1.0
Feb 9 18:23:09.739827 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 18:23:09.739906 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T18:23:09 UTC (1707502989)
Feb 9 18:23:09.739916 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 18:23:09.739923 kernel: NET: Registered PF_INET6 protocol family
Feb 9 18:23:09.739930 kernel: Segment Routing with IPv6
Feb 9 18:23:09.739937 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 18:23:09.739945 kernel: NET: Registered PF_PACKET protocol family
Feb 9 18:23:09.739951 kernel: Key type dns_resolver registered
Feb 9 18:23:09.739965 kernel: registered taskstats version 1
Feb 9 18:23:09.739975 kernel: Loading compiled-in X.509 certificates
Feb 9 18:23:09.739983 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 947a80114e81e2815f6db72a0d388260762488f9'
Feb 9 18:23:09.739990 kernel: Key type .fscrypt registered
Feb 9 18:23:09.739996 kernel: Key type fscrypt-provisioning registered
Feb 9 18:23:09.740004 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 18:23:09.740011 kernel: ima: Allocated hash algorithm: sha1
Feb 9 18:23:09.740018 kernel: ima: No architecture policies found
Feb 9 18:23:09.740025 kernel: Freeing unused kernel memory: 34688K
Feb 9 18:23:09.740032 kernel: Run /init as init process
Feb 9 18:23:09.740040 kernel: with arguments:
Feb 9 18:23:09.740047 kernel: /init
Feb 9 18:23:09.740054 kernel: with environment:
Feb 9 18:23:09.740061 kernel: HOME=/
Feb 9 18:23:09.740068 kernel: TERM=linux
Feb 9 18:23:09.740074 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 18:23:09.740083 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 18:23:09.740092 systemd[1]: Detected virtualization kvm.
Feb 9 18:23:09.740102 systemd[1]: Detected architecture arm64.
Feb 9 18:23:09.740109 systemd[1]: Running in initrd.
Feb 9 18:23:09.740116 systemd[1]: No hostname configured, using default hostname.
Feb 9 18:23:09.740124 systemd[1]: Hostname set to .
Feb 9 18:23:09.740132 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 18:23:09.740139 systemd[1]: Queued start job for default target initrd.target.
Feb 9 18:23:09.740147 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 18:23:09.740155 systemd[1]: Reached target cryptsetup.target.
Feb 9 18:23:09.740164 systemd[1]: Reached target paths.target.
Feb 9 18:23:09.740171 systemd[1]: Reached target slices.target.
Feb 9 18:23:09.740179 systemd[1]: Reached target swap.target.
Feb 9 18:23:09.740186 systemd[1]: Reached target timers.target.
Feb 9 18:23:09.740194 systemd[1]: Listening on iscsid.socket.
Feb 9 18:23:09.740201 systemd[1]: Listening on iscsiuio.socket.
Feb 9 18:23:09.740209 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 18:23:09.740218 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 18:23:09.740225 systemd[1]: Listening on systemd-journald.socket.
Feb 9 18:23:09.740233 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 18:23:09.740241 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 18:23:09.740248 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 18:23:09.740255 systemd[1]: Reached target sockets.target.
Feb 9 18:23:09.740263 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 18:23:09.740270 systemd[1]: Finished network-cleanup.service.
Feb 9 18:23:09.740278 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 18:23:09.740287 systemd[1]: Starting systemd-journald.service...
Feb 9 18:23:09.742808 systemd[1]: Starting systemd-modules-load.service...
Feb 9 18:23:09.742822 systemd[1]: Starting systemd-resolved.service...
Feb 9 18:23:09.742833 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 18:23:09.742860 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 18:23:09.742869 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 18:23:09.742877 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 18:23:09.742885 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 18:23:09.742894 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 18:23:09.742909 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 18:23:09.742919 kernel: audit: type=1130 audit(1707502989.730:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:23:09.742928 systemd[1]: Started systemd-resolved.service.
Feb 9 18:23:09.742940 kernel: audit: type=1130 audit(1707502989.738:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:23:09.742948 systemd[1]: Reached target nss-lookup.target.
Feb 9 18:23:09.742966 systemd-journald[291]: Journal started
Feb 9 18:23:09.743033 systemd-journald[291]: Runtime Journal (/run/log/journal/6ac9345ced6844a4bb8f94a1d80d167e) is 6.0M, max 48.7M, 42.6M free.
Feb 9 18:23:09.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:23:09.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:23:09.719583 systemd-modules-load[292]: Inserted module 'overlay'
Feb 9 18:23:09.745567 systemd[1]: Started systemd-journald.service.
Feb 9 18:23:09.745587 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 18:23:09.732508 systemd-resolved[293]: Positive Trust Anchors:
Feb 9 18:23:09.748614 kernel: audit: type=1130 audit(1707502989.746:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:23:09.748637 kernel: Bridge firewalling registered
Feb 9 18:23:09.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:23:09.732516 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 18:23:09.732543 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 18:23:09.736686 systemd-resolved[293]: Defaulting to hostname 'linux'.
Feb 9 18:23:09.749015 systemd-modules-load[292]: Inserted module 'br_netfilter'
Feb 9 18:23:09.760333 kernel: audit: type=1130 audit(1707502989.755:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:23:09.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:23:09.755343 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 18:23:09.757625 systemd[1]: Starting dracut-cmdline.service...
Feb 9 18:23:09.763886 kernel: SCSI subsystem initialized
Feb 9 18:23:09.767622 dracut-cmdline[308]: dracut-dracut-053
Feb 9 18:23:09.769713 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 18:23:09.775396 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 18:23:09.775416 kernel: device-mapper: uevent: version 1.0.3
Feb 9 18:23:09.775425 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 18:23:09.775589 systemd-modules-load[292]: Inserted module 'dm_multipath'
Feb 9 18:23:09.776729 systemd[1]: Finished systemd-modules-load.service.
Feb 9 18:23:09.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:23:09.778069 systemd[1]: Starting systemd-sysctl.service...
Feb 9 18:23:09.781384 kernel: audit: type=1130 audit(1707502989.777:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:23:09.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:23:09.785992 systemd[1]: Finished systemd-sysctl.service.
Feb 9 18:23:09.789257 kernel: audit: type=1130 audit(1707502989.785:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:23:09.833862 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 18:23:09.841863 kernel: iscsi: registered transport (tcp)
Feb 9 18:23:09.854869 kernel: iscsi: registered transport (qla4xxx)
Feb 9 18:23:09.854885 kernel: QLogic iSCSI HBA Driver
Feb 9 18:23:09.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:23:09.887010 systemd[1]: Finished dracut-cmdline.service.
Feb 9 18:23:09.890319 kernel: audit: type=1130 audit(1707502989.886:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:23:09.888670 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 18:23:09.933873 kernel: raid6: neonx8 gen() 13804 MB/s
Feb 9 18:23:09.950851 kernel: raid6: neonx8 xor() 10806 MB/s
Feb 9 18:23:09.967851 kernel: raid6: neonx4 gen() 13557 MB/s
Feb 9 18:23:09.984849 kernel: raid6: neonx4 xor() 11214 MB/s
Feb 9 18:23:10.001849 kernel: raid6: neonx2 gen() 12969 MB/s
Feb 9 18:23:10.018848 kernel: raid6: neonx2 xor() 10242 MB/s
Feb 9 18:23:10.035851 kernel: raid6: neonx1 gen() 10483 MB/s
Feb 9 18:23:10.052851 kernel: raid6: neonx1 xor() 8769 MB/s
Feb 9 18:23:10.069880 kernel: raid6: int64x8 gen() 6291 MB/s
Feb 9 18:23:10.086854 kernel: raid6: int64x8 xor() 3545 MB/s
Feb 9 18:23:10.103863 kernel: raid6: int64x4 gen() 7250 MB/s
Feb 9 18:23:10.120859 kernel: raid6: int64x4 xor() 3847 MB/s
Feb 9 18:23:10.137863 kernel: raid6: int64x2 gen() 6147 MB/s
Feb 9 18:23:10.154852 kernel: raid6: int64x2 xor() 3317 MB/s
Feb 9 18:23:10.171862 kernel: raid6: int64x1 gen() 5041 MB/s
Feb 9 18:23:10.189040 kernel: raid6: int64x1 xor() 2644 MB/s
Feb 9 18:23:10.189057 kernel: raid6: using algorithm neonx8 gen() 13804 MB/s
Feb 9 18:23:10.189066 kernel: raid6: .... xor() 10806 MB/s, rmw enabled
Feb 9 18:23:10.189075 kernel: raid6: using neon recovery algorithm
Feb 9 18:23:10.200035 kernel: xor: measuring software checksum speed
Feb 9 18:23:10.200053 kernel: 8regs : 17300 MB/sec
Feb 9 18:23:10.200868 kernel: 32regs : 20760 MB/sec
Feb 9 18:23:10.202019 kernel: arm64_neon : 27882 MB/sec
Feb 9 18:23:10.202031 kernel: xor: using function: arm64_neon (27882 MB/sec)
Feb 9 18:23:10.256861 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 18:23:10.266505 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 18:23:10.270299 kernel: audit: type=1130 audit(1707502990.266:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:10.270318 kernel: audit: type=1334 audit(1707502990.268:10): prog-id=7 op=LOAD Feb 9 18:23:10.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:10.268000 audit: BPF prog-id=7 op=LOAD Feb 9 18:23:10.269000 audit: BPF prog-id=8 op=LOAD Feb 9 18:23:10.270631 systemd[1]: Starting systemd-udevd.service... Feb 9 18:23:10.288038 systemd-udevd[492]: Using default interface naming scheme 'v252'. Feb 9 18:23:10.291316 systemd[1]: Started systemd-udevd.service. Feb 9 18:23:10.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:10.293209 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 18:23:10.304852 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation Feb 9 18:23:10.339869 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 18:23:10.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:10.343569 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:23:10.376230 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 18:23:10.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:23:10.406049 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 9 18:23:10.410976 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 18:23:10.411019 kernel: GPT:9289727 != 19775487 Feb 9 18:23:10.412247 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 18:23:10.412263 kernel: GPT:9289727 != 19775487 Feb 9 18:23:10.412272 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 18:23:10.412847 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:23:10.423864 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (551) Feb 9 18:23:10.427684 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 18:23:10.430592 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 18:23:10.431597 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 18:23:10.435769 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 18:23:10.441770 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 18:23:10.443440 systemd[1]: Starting disk-uuid.service... Feb 9 18:23:10.449232 disk-uuid[563]: Primary Header is updated. Feb 9 18:23:10.449232 disk-uuid[563]: Secondary Entries is updated. Feb 9 18:23:10.449232 disk-uuid[563]: Secondary Header is updated. Feb 9 18:23:10.452867 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:23:10.460857 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:23:10.462856 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:23:11.463869 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:23:11.464001 disk-uuid[564]: The operation has completed successfully. Feb 9 18:23:11.483950 systemd[1]: disk-uuid.service: Deactivated successfully. 
Feb 9 18:23:11.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:11.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:11.484049 systemd[1]: Finished disk-uuid.service. Feb 9 18:23:11.488109 systemd[1]: Starting verity-setup.service... Feb 9 18:23:11.503869 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 9 18:23:11.524811 systemd[1]: Found device dev-mapper-usr.device. Feb 9 18:23:11.526873 systemd[1]: Mounting sysusr-usr.mount... Feb 9 18:23:11.528943 systemd[1]: Finished verity-setup.service. Feb 9 18:23:11.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:11.575473 systemd[1]: Mounted sysusr-usr.mount. Feb 9 18:23:11.576883 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 18:23:11.576401 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 18:23:11.577148 systemd[1]: Starting ignition-setup.service... Feb 9 18:23:11.579507 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 18:23:11.585299 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 18:23:11.585336 kernel: BTRFS info (device vda6): using free space tree Feb 9 18:23:11.585351 kernel: BTRFS info (device vda6): has skinny extents Feb 9 18:23:11.593780 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 18:23:11.599961 systemd[1]: Finished ignition-setup.service. 
Feb 9 18:23:11.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:11.601478 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 18:23:11.667440 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 18:23:11.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:11.669000 audit: BPF prog-id=9 op=LOAD Feb 9 18:23:11.671196 systemd[1]: Starting systemd-networkd.service... Feb 9 18:23:11.674787 ignition[647]: Ignition 2.14.0 Feb 9 18:23:11.675425 ignition[647]: Stage: fetch-offline Feb 9 18:23:11.675976 ignition[647]: no configs at "/usr/lib/ignition/base.d" Feb 9 18:23:11.676610 ignition[647]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:23:11.677556 ignition[647]: parsed url from cmdline: "" Feb 9 18:23:11.677614 ignition[647]: no config URL provided Feb 9 18:23:11.678167 ignition[647]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 18:23:11.678880 ignition[647]: no config at "/usr/lib/ignition/user.ign" Feb 9 18:23:11.678908 ignition[647]: op(1): [started] loading QEMU firmware config module Feb 9 18:23:11.678915 ignition[647]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 9 18:23:11.684649 ignition[647]: op(1): [finished] loading QEMU firmware config module Feb 9 18:23:11.684683 ignition[647]: QEMU firmware config was not found. Ignoring... 
Feb 9 18:23:11.691332 systemd-networkd[740]: lo: Link UP Feb 9 18:23:11.691344 systemd-networkd[740]: lo: Gained carrier Feb 9 18:23:11.691687 systemd-networkd[740]: Enumeration completed Feb 9 18:23:11.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:11.691870 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:23:11.691964 systemd[1]: Started systemd-networkd.service. Feb 9 18:23:11.692724 systemd-networkd[740]: eth0: Link UP Feb 9 18:23:11.692728 systemd-networkd[740]: eth0: Gained carrier Feb 9 18:23:11.693313 systemd[1]: Reached target network.target. Feb 9 18:23:11.695172 systemd[1]: Starting iscsiuio.service... Feb 9 18:23:11.703807 systemd[1]: Started iscsiuio.service. Feb 9 18:23:11.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:11.705683 systemd[1]: Starting iscsid.service... Feb 9 18:23:11.708795 iscsid[746]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:23:11.708795 iscsid[746]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 18:23:11.708795 iscsid[746]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 18:23:11.708795 iscsid[746]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 18:23:11.708795 iscsid[746]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:23:11.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:11.718645 iscsid[746]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 18:23:11.711572 systemd[1]: Started iscsid.service. Feb 9 18:23:11.711605 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 18:23:11.715982 systemd[1]: Starting dracut-initqueue.service... Feb 9 18:23:11.725835 systemd[1]: Finished dracut-initqueue.service. Feb 9 18:23:11.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:11.726749 systemd[1]: Reached target remote-fs-pre.target. Feb 9 18:23:11.727964 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:23:11.729259 systemd[1]: Reached target remote-fs.target. Feb 9 18:23:11.731147 systemd[1]: Starting dracut-pre-mount.service... Feb 9 18:23:11.738431 systemd[1]: Finished dracut-pre-mount.service. Feb 9 18:23:11.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:23:11.773234 ignition[647]: parsing config with SHA512: 47c6de7d0498dca842072634798e580b2462ea0105ca6798c26bb57d22e0fcfee5f33e5b04f10bd5bb9e2b2c7653a79879f71525a3c570abd09949f90ceb63b6 Feb 9 18:23:11.810752 unknown[647]: fetched base config from "system" Feb 9 18:23:11.810764 unknown[647]: fetched user config from "qemu" Feb 9 18:23:11.811379 ignition[647]: fetch-offline: fetch-offline passed Feb 9 18:23:11.811444 ignition[647]: Ignition finished successfully Feb 9 18:23:11.812407 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 18:23:11.813275 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 18:23:11.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:11.814049 systemd[1]: Starting ignition-kargs.service... Feb 9 18:23:11.822991 ignition[761]: Ignition 2.14.0 Feb 9 18:23:11.823001 ignition[761]: Stage: kargs Feb 9 18:23:11.823100 ignition[761]: no configs at "/usr/lib/ignition/base.d" Feb 9 18:23:11.823110 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:23:11.824234 ignition[761]: kargs: kargs passed Feb 9 18:23:11.824282 ignition[761]: Ignition finished successfully Feb 9 18:23:11.827380 systemd[1]: Finished ignition-kargs.service. Feb 9 18:23:11.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:11.828785 systemd[1]: Starting ignition-disks.service... 
Feb 9 18:23:11.835891 ignition[767]: Ignition 2.14.0 Feb 9 18:23:11.835900 ignition[767]: Stage: disks Feb 9 18:23:11.836009 ignition[767]: no configs at "/usr/lib/ignition/base.d" Feb 9 18:23:11.836019 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:23:11.837132 ignition[767]: disks: disks passed Feb 9 18:23:11.837178 ignition[767]: Ignition finished successfully Feb 9 18:23:11.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:11.839032 systemd[1]: Finished ignition-disks.service. Feb 9 18:23:11.839983 systemd[1]: Reached target initrd-root-device.target. Feb 9 18:23:11.840915 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:23:11.841870 systemd[1]: Reached target local-fs.target. Feb 9 18:23:11.842969 systemd[1]: Reached target sysinit.target. Feb 9 18:23:11.843893 systemd[1]: Reached target basic.target. Feb 9 18:23:11.845640 systemd[1]: Starting systemd-fsck-root.service... Feb 9 18:23:11.856467 systemd-fsck[775]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 9 18:23:11.860434 systemd[1]: Finished systemd-fsck-root.service. Feb 9 18:23:11.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:11.862280 systemd[1]: Mounting sysroot.mount... Feb 9 18:23:11.868857 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 18:23:11.868910 systemd[1]: Mounted sysroot.mount. Feb 9 18:23:11.869641 systemd[1]: Reached target initrd-root-fs.target. Feb 9 18:23:11.871618 systemd[1]: Mounting sysroot-usr.mount... Feb 9 18:23:11.872509 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. 
Feb 9 18:23:11.872544 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 18:23:11.872564 systemd[1]: Reached target ignition-diskful.target. Feb 9 18:23:11.874162 systemd[1]: Mounted sysroot-usr.mount. Feb 9 18:23:11.875670 systemd[1]: Starting initrd-setup-root.service... Feb 9 18:23:11.879703 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 18:23:11.883048 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory Feb 9 18:23:11.886402 initrd-setup-root[801]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 18:23:11.890391 initrd-setup-root[809]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 18:23:11.914189 systemd[1]: Finished initrd-setup-root.service. Feb 9 18:23:11.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:11.915657 systemd[1]: Starting ignition-mount.service... Feb 9 18:23:11.916768 systemd[1]: Starting sysroot-boot.service... Feb 9 18:23:11.921326 bash[826]: umount: /sysroot/usr/share/oem: not mounted. Feb 9 18:23:11.929685 ignition[828]: INFO : Ignition 2.14.0 Feb 9 18:23:11.929685 ignition[828]: INFO : Stage: mount Feb 9 18:23:11.931478 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:23:11.931478 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:23:11.931478 ignition[828]: INFO : mount: mount passed Feb 9 18:23:11.931478 ignition[828]: INFO : Ignition finished successfully Feb 9 18:23:11.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:11.933238 systemd[1]: Finished ignition-mount.service. 
Feb 9 18:23:11.936565 systemd[1]: Finished sysroot-boot.service. Feb 9 18:23:11.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:12.535588 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 18:23:12.540858 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (836) Feb 9 18:23:12.542198 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 18:23:12.542214 kernel: BTRFS info (device vda6): using free space tree Feb 9 18:23:12.542223 kernel: BTRFS info (device vda6): has skinny extents Feb 9 18:23:12.545235 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 18:23:12.546745 systemd[1]: Starting ignition-files.service... Feb 9 18:23:12.560651 ignition[856]: INFO : Ignition 2.14.0 Feb 9 18:23:12.560651 ignition[856]: INFO : Stage: files Feb 9 18:23:12.562279 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:23:12.562279 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:23:12.562279 ignition[856]: DEBUG : files: compiled without relabeling support, skipping Feb 9 18:23:12.566225 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 18:23:12.566225 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 18:23:12.568991 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 18:23:12.570300 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 18:23:12.570300 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 18:23:12.569795 unknown[856]: wrote ssh authorized keys file for user: core Feb 9 18:23:12.574199 ignition[856]: INFO : files: createFilesystemsFiles: 
createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 18:23:12.574199 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 9 18:23:12.623785 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 18:23:12.679096 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 18:23:12.680580 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Feb 9 18:23:12.680580 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1 Feb 9 18:23:13.014982 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 18:23:13.072029 systemd-networkd[740]: eth0: Gained IPv6LL Feb 9 18:23:13.171218 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251 Feb 9 18:23:13.173295 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Feb 9 18:23:13.173295 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Feb 9 18:23:13.173295 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1 Feb 9 18:23:13.410718 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET 
result: OK Feb 9 18:23:13.781536 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a Feb 9 18:23:13.784023 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Feb 9 18:23:13.784023 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:23:13.784023 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:23:13.784023 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 18:23:13.784023 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubectl: attempt #1 Feb 9 18:23:13.901574 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 18:23:14.188882 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 6a5c9c02a29126949f096415bb1761a0c0ad44168e2ab3d0409982701da58f96223bec354828ddf958e945ef1ce63c0ad41e77cbcbcce0756163e71b4fbae432 Feb 9 18:23:14.188882 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 18:23:14.192024 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:23:14.192024 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubelet: attempt #1 Feb 9 18:23:14.217470 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 
18:23:15.128625 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 5a898ef543a6482895101ea58e33602e3c0a7682d322aaf08ac3dc8a5a3c8da8f09600d577024549288f8cebb1a86f9c79927796b69a3d8fe989ca8f12b147d6 Feb 9 18:23:15.130852 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:23:15.130852 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:23:15.130852 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubeadm: attempt #1 Feb 9 18:23:15.151879 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 9 18:23:15.509343 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 5a08b81f9cc82d3cce21130856ca63b8dafca9149d9775dd25b376eb0f18209aa0e4a47c0a6d7e6fb1316aacd5d59dec770f26c09120c866949d70bc415518b3 Feb 9 18:23:15.511375 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:23:15.511375 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 18:23:15.511375 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 9 18:23:15.819973 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 9 18:23:15.863972 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 18:23:15.865412 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 9 
18:23:15.865412 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 18:23:15.865412 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 18:23:15.865412 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 18:23:15.865412 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 18:23:15.865412 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 18:23:15.865412 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 18:23:15.865412 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 18:23:15.865412 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:23:15.865412 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:23:15.865412 ignition[856]: INFO : files: op(10): [started] processing unit "prepare-critools.service" Feb 9 18:23:15.865412 ignition[856]: INFO : files: op(10): op(11): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:23:15.865412 ignition[856]: INFO : files: op(10): op(11): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:23:15.865412 ignition[856]: INFO : files: op(10): [finished] processing unit "prepare-critools.service" Feb 9 18:23:15.865412 ignition[856]: INFO : files: op(12): 
[started] processing unit "prepare-helm.service" Feb 9 18:23:15.865412 ignition[856]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 18:23:15.865412 ignition[856]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 18:23:15.890255 ignition[856]: INFO : files: op(12): [finished] processing unit "prepare-helm.service" Feb 9 18:23:15.890255 ignition[856]: INFO : files: op(14): [started] processing unit "coreos-metadata.service" Feb 9 18:23:15.890255 ignition[856]: INFO : files: op(14): op(15): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 18:23:15.890255 ignition[856]: INFO : files: op(14): op(15): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 18:23:15.890255 ignition[856]: INFO : files: op(14): [finished] processing unit "coreos-metadata.service" Feb 9 18:23:15.890255 ignition[856]: INFO : files: op(16): [started] processing unit "prepare-cni-plugins.service" Feb 9 18:23:15.890255 ignition[856]: INFO : files: op(16): op(17): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:23:15.890255 ignition[856]: INFO : files: op(16): op(17): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:23:15.890255 ignition[856]: INFO : files: op(16): [finished] processing unit "prepare-cni-plugins.service" Feb 9 18:23:15.890255 ignition[856]: INFO : files: op(18): [started] setting preset to enabled for "prepare-helm.service" Feb 9 18:23:15.890255 ignition[856]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 18:23:15.890255 ignition[856]: INFO : files: op(19): [started] setting preset to disabled 
for "coreos-metadata.service" Feb 9 18:23:15.890255 ignition[856]: INFO : files: op(19): op(1a): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 18:23:15.907258 ignition[856]: INFO : files: op(19): op(1a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 18:23:15.907258 ignition[856]: INFO : files: op(19): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 18:23:15.907258 ignition[856]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:23:15.907258 ignition[856]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:23:15.907258 ignition[856]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-critools.service" Feb 9 18:23:15.907258 ignition[856]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 18:23:15.907258 ignition[856]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:23:15.907258 ignition[856]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:23:15.907258 ignition[856]: INFO : files: files passed Feb 9 18:23:15.907258 ignition[856]: INFO : Ignition finished successfully Feb 9 18:23:15.928777 kernel: kauditd_printk_skb: 22 callbacks suppressed Feb 9 18:23:15.928799 kernel: audit: type=1130 audit(1707502995.907:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:15.928810 kernel: audit: type=1130 audit(1707502995.918:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:23:15.928820 kernel: audit: type=1131 audit(1707502995.918:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:15.928832 kernel: audit: type=1130 audit(1707502995.923:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:15.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:15.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:15.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:15.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:15.906081 systemd[1]: Finished ignition-files.service. Feb 9 18:23:15.908898 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 18:23:15.911989 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 18:23:15.932178 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 9 18:23:15.912698 systemd[1]: Starting ignition-quench.service... 
Feb 9 18:23:15.933992 initrd-setup-root-after-ignition[884]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 18:23:15.915718 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 18:23:15.915808 systemd[1]: Finished ignition-quench.service. Feb 9 18:23:15.918500 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 18:23:15.924138 systemd[1]: Reached target ignition-complete.target. Feb 9 18:23:15.928487 systemd[1]: Starting initrd-parse-etc.service... Feb 9 18:23:15.941116 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 18:23:15.941203 systemd[1]: Finished initrd-parse-etc.service. Feb 9 18:23:15.946560 kernel: audit: type=1130 audit(1707502995.941:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:15.946582 kernel: audit: type=1131 audit(1707502995.941:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:15.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:15.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:15.942914 systemd[1]: Reached target initrd-fs.target. Feb 9 18:23:15.947285 systemd[1]: Reached target initrd.target. Feb 9 18:23:15.948532 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 18:23:15.949293 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 18:23:15.959802 systemd[1]: Finished dracut-pre-pivot.service. 
Feb 9 18:23:15.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:15.961403 systemd[1]: Starting initrd-cleanup.service... Feb 9 18:23:15.964038 kernel: audit: type=1130 audit(1707502995.960:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:15.969432 systemd[1]: Stopped target nss-lookup.target. Feb 9 18:23:15.970321 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 18:23:15.971579 systemd[1]: Stopped target timers.target. Feb 9 18:23:15.972698 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 18:23:15.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:15.972803 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 18:23:15.977101 kernel: audit: type=1131 audit(1707502995.972:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:15.973898 systemd[1]: Stopped target initrd.target. Feb 9 18:23:15.976727 systemd[1]: Stopped target basic.target. Feb 9 18:23:15.977804 systemd[1]: Stopped target ignition-complete.target. Feb 9 18:23:15.979017 systemd[1]: Stopped target ignition-diskful.target. Feb 9 18:23:15.980130 systemd[1]: Stopped target initrd-root-device.target. Feb 9 18:23:15.981396 systemd[1]: Stopped target remote-fs.target. Feb 9 18:23:15.982590 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 18:23:15.983820 systemd[1]: Stopped target sysinit.target. Feb 9 18:23:15.984931 systemd[1]: Stopped target local-fs.target. 
Feb 9 18:23:15.986065 systemd[1]: Stopped target local-fs-pre.target. Feb 9 18:23:15.987187 systemd[1]: Stopped target swap.target. Feb 9 18:23:15.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:15.988227 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 18:23:15.992639 kernel: audit: type=1131 audit(1707502995.989:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:15.988334 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 18:23:15.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:15.989486 systemd[1]: Stopped target cryptsetup.target. Feb 9 18:23:15.996659 kernel: audit: type=1131 audit(1707502995.993:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:15.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:15.992152 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 18:23:15.992253 systemd[1]: Stopped dracut-initqueue.service. Feb 9 18:23:15.993500 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 18:23:15.993594 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 18:23:15.996387 systemd[1]: Stopped target paths.target. Feb 9 18:23:15.997376 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Feb 9 18:23:16.001886 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 18:23:16.002804 systemd[1]: Stopped target slices.target. Feb 9 18:23:16.003983 systemd[1]: Stopped target sockets.target. Feb 9 18:23:16.005079 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 18:23:16.005145 systemd[1]: Closed iscsid.socket. Feb 9 18:23:16.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.006150 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 18:23:16.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.006245 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 18:23:16.007558 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 18:23:16.007648 systemd[1]: Stopped ignition-files.service. Feb 9 18:23:16.009438 systemd[1]: Stopping ignition-mount.service... Feb 9 18:23:16.011141 systemd[1]: Stopping iscsiuio.service... Feb 9 18:23:16.012831 systemd[1]: Stopping sysroot-boot.service... Feb 9 18:23:16.013727 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 18:23:16.013869 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 18:23:16.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:23:16.016869 ignition[897]: INFO : Ignition 2.14.0 Feb 9 18:23:16.016869 ignition[897]: INFO : Stage: umount Feb 9 18:23:16.015052 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 18:23:16.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.019959 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:23:16.019959 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:23:16.019959 ignition[897]: INFO : umount: umount passed Feb 9 18:23:16.019959 ignition[897]: INFO : Ignition finished successfully Feb 9 18:23:16.015139 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 18:23:16.017693 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 18:23:16.017779 systemd[1]: Stopped iscsiuio.service. Feb 9 18:23:16.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.019637 systemd[1]: Stopped target network.target. Feb 9 18:23:16.020772 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 18:23:16.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.020806 systemd[1]: Closed iscsiuio.socket. Feb 9 18:23:16.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:23:16.022247 systemd[1]: Stopping systemd-networkd.service... Feb 9 18:23:16.023744 systemd[1]: Stopping systemd-resolved.service... Feb 9 18:23:16.025771 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 18:23:16.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.026262 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 18:23:16.026338 systemd[1]: Finished initrd-cleanup.service. Feb 9 18:23:16.027823 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 18:23:16.028125 systemd[1]: Stopped ignition-mount.service. Feb 9 18:23:16.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.029522 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 18:23:16.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.029599 systemd[1]: Stopped sysroot-boot.service. Feb 9 18:23:16.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.029896 systemd-networkd[740]: eth0: DHCPv6 lease lost Feb 9 18:23:16.038000 audit: BPF prog-id=9 op=UNLOAD Feb 9 18:23:16.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.031445 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Feb 9 18:23:16.031520 systemd[1]: Stopped systemd-networkd.service. Feb 9 18:23:16.033045 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 18:23:16.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.033072 systemd[1]: Closed systemd-networkd.socket. Feb 9 18:23:16.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.033817 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 18:23:16.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.035161 systemd[1]: Stopped ignition-disks.service. Feb 9 18:23:16.036337 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 18:23:16.036374 systemd[1]: Stopped ignition-kargs.service. Feb 9 18:23:16.037281 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 18:23:16.037314 systemd[1]: Stopped ignition-setup.service. Feb 9 18:23:16.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.038439 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 18:23:16.038474 systemd[1]: Stopped initrd-setup-root.service. Feb 9 18:23:16.040096 systemd[1]: Stopping network-cleanup.service... Feb 9 18:23:16.041169 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 18:23:16.041221 systemd[1]: Stopped parse-ip-for-networkd.service. 
Feb 9 18:23:16.054000 audit: BPF prog-id=6 op=UNLOAD Feb 9 18:23:16.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.042276 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 18:23:16.042311 systemd[1]: Stopped systemd-sysctl.service. Feb 9 18:23:16.043801 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 18:23:16.043850 systemd[1]: Stopped systemd-modules-load.service. Feb 9 18:23:16.044823 systemd[1]: Stopping systemd-udevd.service... Feb 9 18:23:16.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.048855 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 18:23:16.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.049270 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 18:23:16.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.049355 systemd[1]: Stopped systemd-resolved.service. Feb 9 18:23:16.053696 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 18:23:16.053782 systemd[1]: Stopped network-cleanup.service. 
Feb 9 18:23:16.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.056685 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 18:23:16.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.056790 systemd[1]: Stopped systemd-udevd.service. Feb 9 18:23:16.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.057976 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 18:23:16.058012 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 18:23:16.058854 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 18:23:16.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.058883 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 18:23:16.060011 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 18:23:16.060049 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 18:23:16.061011 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 18:23:16.061045 systemd[1]: Stopped dracut-cmdline.service. Feb 9 18:23:16.061949 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Feb 9 18:23:16.061985 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 18:23:16.063922 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 18:23:16.064964 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 18:23:16.065021 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 18:23:16.066618 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 18:23:16.066655 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 18:23:16.067509 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 18:23:16.067548 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 18:23:16.069597 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 18:23:16.070060 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 18:23:16.070133 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 18:23:16.071661 systemd[1]: Reached target initrd-switch-root.target. Feb 9 18:23:16.073629 systemd[1]: Starting initrd-switch-root.service... Feb 9 18:23:16.079892 systemd[1]: Switching root. Feb 9 18:23:16.093159 iscsid[746]: iscsid shutting down. Feb 9 18:23:16.093666 systemd-journald[291]: Journal stopped Feb 9 18:23:18.161662 systemd-journald[291]: Received SIGTERM from PID 1 (systemd). Feb 9 18:23:18.161717 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 18:23:18.161729 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 18:23:18.161739 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 18:23:18.161752 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 18:23:18.161761 kernel: SELinux: policy capability open_perms=1 Feb 9 18:23:18.161774 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 18:23:18.161783 kernel: SELinux: policy capability always_check_network=0 Feb 9 18:23:18.161792 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 18:23:18.161803 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 18:23:18.161812 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 18:23:18.161821 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 18:23:18.161831 systemd[1]: Successfully loaded SELinux policy in 31.661ms. Feb 9 18:23:18.161869 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.580ms. Feb 9 18:23:18.161884 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 18:23:18.161895 systemd[1]: Detected virtualization kvm. Feb 9 18:23:18.161904 systemd[1]: Detected architecture arm64. Feb 9 18:23:18.161915 systemd[1]: Detected first boot. Feb 9 18:23:18.161926 systemd[1]: Initializing machine ID from VM UUID. Feb 9 18:23:18.161943 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 18:23:18.161956 systemd[1]: Populated /etc with preset unit settings. Feb 9 18:23:18.161968 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 18:23:18.161979 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:23:18.161991 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:23:18.162003 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 18:23:18.162014 systemd[1]: Stopped iscsid.service. Feb 9 18:23:18.162026 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 18:23:18.162037 systemd[1]: Stopped initrd-switch-root.service. Feb 9 18:23:18.162050 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 18:23:18.162061 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 18:23:18.162071 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 18:23:18.162104 systemd[1]: Created slice system-getty.slice. Feb 9 18:23:18.162115 systemd[1]: Created slice system-modprobe.slice. Feb 9 18:23:18.162126 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 18:23:18.162140 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 18:23:18.162150 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 18:23:18.162173 systemd[1]: Created slice user.slice. Feb 9 18:23:18.162185 systemd[1]: Started systemd-ask-password-console.path. Feb 9 18:23:18.162196 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 18:23:18.162207 systemd[1]: Set up automount boot.automount. Feb 9 18:23:18.162219 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 18:23:18.162229 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 18:23:18.162239 systemd[1]: Stopped target initrd-fs.target. Feb 9 18:23:18.162250 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 18:23:18.162260 systemd[1]: Reached target integritysetup.target. 
Feb 9 18:23:18.162271 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:23:18.162281 systemd[1]: Reached target remote-fs.target. Feb 9 18:23:18.162291 systemd[1]: Reached target slices.target. Feb 9 18:23:18.162303 systemd[1]: Reached target swap.target. Feb 9 18:23:18.162313 systemd[1]: Reached target torcx.target. Feb 9 18:23:18.162324 systemd[1]: Reached target veritysetup.target. Feb 9 18:23:18.162335 systemd[1]: Listening on systemd-coredump.socket. Feb 9 18:23:18.162345 systemd[1]: Listening on systemd-initctl.socket. Feb 9 18:23:18.162356 systemd[1]: Listening on systemd-networkd.socket. Feb 9 18:23:18.162366 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 18:23:18.162377 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 18:23:18.162387 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 18:23:18.162397 systemd[1]: Mounting dev-hugepages.mount... Feb 9 18:23:18.162409 systemd[1]: Mounting dev-mqueue.mount... Feb 9 18:23:18.162419 systemd[1]: Mounting media.mount... Feb 9 18:23:18.162429 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 18:23:18.162441 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 18:23:18.162451 systemd[1]: Mounting tmp.mount... Feb 9 18:23:18.162462 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 18:23:18.162472 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 18:23:18.162482 systemd[1]: Starting kmod-static-nodes.service... Feb 9 18:23:18.162493 systemd[1]: Starting modprobe@configfs.service... Feb 9 18:23:18.162504 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 18:23:18.162514 systemd[1]: Starting modprobe@drm.service... Feb 9 18:23:18.162524 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 18:23:18.162534 systemd[1]: Starting modprobe@fuse.service... Feb 9 18:23:18.162544 systemd[1]: Starting modprobe@loop.service... 
Feb 9 18:23:18.162555 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 18:23:18.162565 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 18:23:18.162576 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 18:23:18.162586 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 18:23:18.162597 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 18:23:18.162607 kernel: fuse: init (API version 7.34) Feb 9 18:23:18.162617 kernel: loop: module loaded Feb 9 18:23:18.162627 systemd[1]: Stopped systemd-journald.service. Feb 9 18:23:18.162637 systemd[1]: Starting systemd-journald.service... Feb 9 18:23:18.162647 systemd[1]: Starting systemd-modules-load.service... Feb 9 18:23:18.162658 systemd[1]: Starting systemd-network-generator.service... Feb 9 18:23:18.162668 systemd[1]: Starting systemd-remount-fs.service... Feb 9 18:23:18.162678 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:23:18.162691 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 18:23:18.162701 systemd[1]: Stopped verity-setup.service. Feb 9 18:23:18.162711 systemd[1]: Mounted dev-hugepages.mount. Feb 9 18:23:18.162721 systemd[1]: Mounted dev-mqueue.mount. Feb 9 18:23:18.162731 systemd[1]: Mounted media.mount. Feb 9 18:23:18.162741 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 18:23:18.162751 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 18:23:18.162762 systemd[1]: Mounted tmp.mount. Feb 9 18:23:18.162772 systemd[1]: Finished kmod-static-nodes.service. Feb 9 18:23:18.162784 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 18:23:18.162796 systemd-journald[1001]: Journal started Feb 9 18:23:18.162846 systemd-journald[1001]: Runtime Journal (/run/log/journal/6ac9345ced6844a4bb8f94a1d80d167e) is 6.0M, max 48.7M, 42.6M free. 
Feb 9 18:23:16.160000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 18:23:16.340000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:23:16.340000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:23:16.340000 audit: BPF prog-id=10 op=LOAD Feb 9 18:23:16.340000 audit: BPF prog-id=10 op=UNLOAD Feb 9 18:23:16.340000 audit: BPF prog-id=11 op=LOAD Feb 9 18:23:16.340000 audit: BPF prog-id=11 op=UNLOAD Feb 9 18:23:16.378000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 18:23:16.378000 audit[931]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001cd8d4 a1=4000150de0 a2=40001570c0 a3=32 items=0 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:23:16.378000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 18:23:16.379000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 18:23:16.379000 audit[931]: SYSCALL arch=c00000b7 syscall=34 
success=yes exit=0 a0=ffffffffffffff9c a1=40001cd9b9 a2=1ed a3=0 items=2 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:23:16.379000 audit: CWD cwd="/" Feb 9 18:23:16.379000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:23:16.379000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:23:16.379000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 18:23:18.038000 audit: BPF prog-id=12 op=LOAD Feb 9 18:23:18.039000 audit: BPF prog-id=3 op=UNLOAD Feb 9 18:23:18.039000 audit: BPF prog-id=13 op=LOAD Feb 9 18:23:18.039000 audit: BPF prog-id=14 op=LOAD Feb 9 18:23:18.039000 audit: BPF prog-id=4 op=UNLOAD Feb 9 18:23:18.039000 audit: BPF prog-id=5 op=UNLOAD Feb 9 18:23:18.039000 audit: BPF prog-id=15 op=LOAD Feb 9 18:23:18.039000 audit: BPF prog-id=12 op=UNLOAD Feb 9 18:23:18.039000 audit: BPF prog-id=16 op=LOAD Feb 9 18:23:18.039000 audit: BPF prog-id=17 op=LOAD Feb 9 18:23:18.039000 audit: BPF prog-id=13 op=UNLOAD Feb 9 18:23:18.039000 audit: BPF prog-id=14 op=UNLOAD Feb 9 18:23:18.040000 audit: BPF prog-id=18 op=LOAD Feb 9 18:23:18.040000 audit: BPF prog-id=15 op=UNLOAD Feb 9 18:23:18.040000 audit: BPF prog-id=19 op=LOAD Feb 9 18:23:18.040000 audit: BPF prog-id=20 op=LOAD Feb 9 18:23:18.040000 audit: BPF prog-id=16 op=UNLOAD Feb 9 
18:23:18.040000 audit: BPF prog-id=17 op=UNLOAD Feb 9 18:23:18.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.051000 audit: BPF prog-id=18 op=UNLOAD Feb 9 18:23:18.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:23:18.134000 audit: BPF prog-id=21 op=LOAD Feb 9 18:23:18.135000 audit: BPF prog-id=22 op=LOAD Feb 9 18:23:18.135000 audit: BPF prog-id=23 op=LOAD Feb 9 18:23:18.135000 audit: BPF prog-id=19 op=UNLOAD Feb 9 18:23:18.135000 audit: BPF prog-id=20 op=UNLOAD Feb 9 18:23:18.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.163856 systemd[1]: Finished modprobe@configfs.service. Feb 9 18:23:18.159000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 18:23:18.159000 audit[1001]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffe59c82a0 a2=4000 a3=1 items=0 ppid=1 pid=1001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:23:18.159000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 18:23:18.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.038583 systemd[1]: Queued start job for default target multi-user.target. Feb 9 18:23:16.377654 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T18:23:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:23:18.038594 systemd[1]: Unnecessary job was removed for dev-vda6.device. 
Feb 9 18:23:16.377978 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T18:23:16Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 18:23:18.042287 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 18:23:16.377998 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T18:23:16Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 18:23:16.378029 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T18:23:16Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 18:23:16.378039 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T18:23:16Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 18:23:16.378070 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T18:23:16Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 18:23:16.378082 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T18:23:16Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 18:23:16.378284 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T18:23:16Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 18:23:16.378319 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T18:23:16Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 18:23:16.378332 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T18:23:16Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 18:23:16.378746 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T18:23:16Z" level=debug msg="new archive/reference 
added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 18:23:16.378778 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T18:23:16Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 18:23:18.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:16.378797 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T18:23:16Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 18:23:16.378811 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T18:23:16Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 18:23:16.378828 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T18:23:16Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 18:23:16.378858 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T18:23:16Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 18:23:17.795107 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T18:23:17Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker 
reference=com.coreos.cl Feb 9 18:23:17.795365 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T18:23:17Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:23:17.795464 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T18:23:17Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:23:17.795617 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T18:23:17Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:23:17.795666 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T18:23:17Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 18:23:17.795722 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T18:23:17Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 18:23:18.166496 systemd[1]: Started systemd-journald.service. 
Feb 9 18:23:18.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.167208 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 18:23:18.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.168292 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 18:23:18.168437 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 18:23:18.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.169474 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 18:23:18.169610 systemd[1]: Finished modprobe@drm.service. Feb 9 18:23:18.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.170621 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 18:23:18.170766 systemd[1]: Finished modprobe@efi_pstore.service. 
Feb 9 18:23:18.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.171865 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 18:23:18.172014 systemd[1]: Finished modprobe@fuse.service. Feb 9 18:23:18.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.173022 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 18:23:18.173170 systemd[1]: Finished modprobe@loop.service. Feb 9 18:23:18.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.174349 systemd[1]: Finished systemd-modules-load.service. 
Feb 9 18:23:18.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.175563 systemd[1]: Finished systemd-network-generator.service. Feb 9 18:23:18.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.176776 systemd[1]: Finished systemd-remount-fs.service. Feb 9 18:23:18.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.178218 systemd[1]: Reached target network-pre.target. Feb 9 18:23:18.180190 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 18:23:18.182323 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 18:23:18.183047 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 18:23:18.185785 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 18:23:18.187442 systemd[1]: Starting systemd-journal-flush.service... Feb 9 18:23:18.188176 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 18:23:18.189109 systemd[1]: Starting systemd-random-seed.service... Feb 9 18:23:18.189744 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 18:23:18.190781 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:23:18.194267 systemd[1]: Starting systemd-sysusers.service... 
Feb 9 18:23:18.197783 systemd-journald[1001]: Time spent on flushing to /var/log/journal/6ac9345ced6844a4bb8f94a1d80d167e is 14.698ms for 1039 entries. Feb 9 18:23:18.197783 systemd-journald[1001]: System Journal (/var/log/journal/6ac9345ced6844a4bb8f94a1d80d167e) is 8.0M, max 195.6M, 187.6M free. Feb 9 18:23:18.224054 systemd-journald[1001]: Received client request to flush runtime journal. Feb 9 18:23:18.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.199280 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 18:23:18.225339 udevadm[1032]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 18:23:18.200421 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 18:23:18.203404 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 18:23:18.205507 systemd[1]: Starting systemd-udev-settle.service... Feb 9 18:23:18.206818 systemd[1]: Finished systemd-random-seed.service. Feb 9 18:23:18.208010 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:23:18.208948 systemd[1]: Reached target first-boot-complete.target. Feb 9 18:23:18.223816 systemd[1]: Finished systemd-sysusers.service. Feb 9 18:23:18.224998 systemd[1]: Finished systemd-journal-flush.service. 
Feb 9 18:23:18.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.226919 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 18:23:18.244299 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 18:23:18.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.557439 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 18:23:18.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.558000 audit: BPF prog-id=24 op=LOAD Feb 9 18:23:18.558000 audit: BPF prog-id=25 op=LOAD Feb 9 18:23:18.558000 audit: BPF prog-id=7 op=UNLOAD Feb 9 18:23:18.558000 audit: BPF prog-id=8 op=UNLOAD Feb 9 18:23:18.559672 systemd[1]: Starting systemd-udevd.service... Feb 9 18:23:18.579358 systemd-udevd[1036]: Using default interface naming scheme 'v252'. Feb 9 18:23:18.590990 systemd[1]: Started systemd-udevd.service. Feb 9 18:23:18.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.593000 audit: BPF prog-id=26 op=LOAD Feb 9 18:23:18.593700 systemd[1]: Starting systemd-networkd.service... 
Feb 9 18:23:18.604000 audit: BPF prog-id=27 op=LOAD Feb 9 18:23:18.606000 audit: BPF prog-id=28 op=LOAD Feb 9 18:23:18.606000 audit: BPF prog-id=29 op=LOAD Feb 9 18:23:18.607905 systemd[1]: Starting systemd-userdbd.service... Feb 9 18:23:18.619827 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Feb 9 18:23:18.638071 systemd[1]: Started systemd-userdbd.service. Feb 9 18:23:18.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.650474 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 18:23:18.682990 systemd-networkd[1039]: lo: Link UP Feb 9 18:23:18.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.682997 systemd-networkd[1039]: lo: Gained carrier Feb 9 18:23:18.683314 systemd-networkd[1039]: Enumeration completed Feb 9 18:23:18.683411 systemd-networkd[1039]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:23:18.683417 systemd[1]: Started systemd-networkd.service. Feb 9 18:23:18.691734 systemd-networkd[1039]: eth0: Link UP Feb 9 18:23:18.691747 systemd-networkd[1039]: eth0: Gained carrier Feb 9 18:23:18.708209 systemd[1]: Finished systemd-udev-settle.service. Feb 9 18:23:18.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.710073 systemd[1]: Starting lvm2-activation-early.service... 
Feb 9 18:23:18.717991 systemd-networkd[1039]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 18:23:18.725094 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:23:18.750564 systemd[1]: Finished lvm2-activation-early.service. Feb 9 18:23:18.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.751371 systemd[1]: Reached target cryptsetup.target. Feb 9 18:23:18.753046 systemd[1]: Starting lvm2-activation.service... Feb 9 18:23:18.756831 lvm[1070]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:23:18.789580 systemd[1]: Finished lvm2-activation.service. Feb 9 18:23:18.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.790306 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:23:18.790940 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 18:23:18.790968 systemd[1]: Reached target local-fs.target. Feb 9 18:23:18.791511 systemd[1]: Reached target machines.target. Feb 9 18:23:18.793132 systemd[1]: Starting ldconfig.service... Feb 9 18:23:18.794004 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 18:23:18.794068 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:23:18.795676 systemd[1]: Starting systemd-boot-update.service... 
Feb 9 18:23:18.797853 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 18:23:18.799975 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 18:23:18.801649 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:23:18.801713 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:23:18.803681 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 18:23:18.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.805928 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 18:23:18.807204 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1072 (bootctl) Feb 9 18:23:18.808297 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 18:23:18.822363 systemd-tmpfiles[1076]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 18:23:18.823443 systemd-tmpfiles[1076]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 18:23:18.824499 systemd-tmpfiles[1076]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 18:23:18.880373 systemd-fsck[1080]: fsck.fat 4.2 (2021-01-31) Feb 9 18:23:18.880373 systemd-fsck[1080]: /dev/vda1: 236 files, 113719/258078 clusters Feb 9 18:23:18.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.882397 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Feb 9 18:23:18.884824 systemd[1]: Mounting boot.mount... Feb 9 18:23:18.913612 systemd[1]: Mounted boot.mount. Feb 9 18:23:18.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.921452 systemd[1]: Finished systemd-boot-update.service. Feb 9 18:23:18.935707 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 18:23:18.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.989783 ldconfig[1071]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 18:23:18.995179 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 18:23:18.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:18.997449 systemd[1]: Starting audit-rules.service... Feb 9 18:23:18.999217 systemd[1]: Starting clean-ca-certificates.service... Feb 9 18:23:19.001267 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 18:23:19.008000 audit: BPF prog-id=30 op=LOAD Feb 9 18:23:19.010960 systemd[1]: Starting systemd-resolved.service... Feb 9 18:23:19.015000 audit: BPF prog-id=31 op=LOAD Feb 9 18:23:19.016690 systemd[1]: Starting systemd-timesyncd.service... Feb 9 18:23:19.019009 systemd[1]: Starting systemd-update-utmp.service... Feb 9 18:23:19.020702 systemd[1]: Finished ldconfig.service. 
Feb 9 18:23:19.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:19.022000 audit[1096]: SYSTEM_BOOT pid=1096 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 18:23:19.027929 systemd[1]: Finished systemd-update-utmp.service. Feb 9 18:23:19.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:19.034475 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 18:23:19.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:19.037073 systemd[1]: Starting systemd-update-done.service... Feb 9 18:23:19.038303 systemd[1]: Finished clean-ca-certificates.service. Feb 9 18:23:19.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:19.039628 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 18:23:19.045163 systemd[1]: Finished systemd-update-done.service. Feb 9 18:23:19.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:23:19.092073 systemd[1]: Started systemd-timesyncd.service. Feb 9 18:23:19.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:23:19.093127 systemd[1]: Reached target time-set.target. Feb 9 18:23:19.094087 systemd-resolved[1089]: Positive Trust Anchors: Feb 9 18:23:19.094330 systemd-resolved[1089]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 18:23:19.094340 systemd-timesyncd[1095]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 9 18:23:19.094401 systemd-timesyncd[1095]: Initial clock synchronization to Fri 2024-02-09 18:23:19.486571 UTC. Feb 9 18:23:19.094706 systemd-resolved[1089]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 18:23:19.095000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 18:23:19.095000 audit[1106]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff37cc400 a2=420 a3=0 items=0 ppid=1085 pid=1106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:23:19.095000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 18:23:19.096116 augenrules[1106]: No rules Feb 9 18:23:19.096906 systemd[1]: 
Finished audit-rules.service. Feb 9 18:23:19.115742 systemd-resolved[1089]: Defaulting to hostname 'linux'. Feb 9 18:23:19.117293 systemd[1]: Started systemd-resolved.service. Feb 9 18:23:19.118197 systemd[1]: Reached target network.target. Feb 9 18:23:19.118979 systemd[1]: Reached target nss-lookup.target. Feb 9 18:23:19.119762 systemd[1]: Reached target sysinit.target. Feb 9 18:23:19.120671 systemd[1]: Started motdgen.path. Feb 9 18:23:19.121439 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 18:23:19.122687 systemd[1]: Started logrotate.timer. Feb 9 18:23:19.123532 systemd[1]: Started mdadm.timer. Feb 9 18:23:19.124300 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 18:23:19.125156 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 18:23:19.125198 systemd[1]: Reached target paths.target. Feb 9 18:23:19.125924 systemd[1]: Reached target timers.target. Feb 9 18:23:19.127033 systemd[1]: Listening on dbus.socket. Feb 9 18:23:19.128890 systemd[1]: Starting docker.socket... Feb 9 18:23:19.133373 systemd[1]: Listening on sshd.socket. Feb 9 18:23:19.134332 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:23:19.135071 systemd[1]: Listening on docker.socket. Feb 9 18:23:19.135926 systemd[1]: Reached target sockets.target. Feb 9 18:23:19.136681 systemd[1]: Reached target basic.target. Feb 9 18:23:19.137467 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:23:19.137513 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:23:19.138650 systemd[1]: Starting containerd.service... Feb 9 18:23:19.140585 systemd[1]: Starting dbus.service... 
Feb 9 18:23:19.142611 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 18:23:19.145042 systemd[1]: Starting extend-filesystems.service... Feb 9 18:23:19.146011 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 18:23:19.147396 systemd[1]: Starting motdgen.service... Feb 9 18:23:19.149418 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 18:23:19.151610 systemd[1]: Starting prepare-critools.service... Feb 9 18:23:19.153712 systemd[1]: Starting prepare-helm.service... Feb 9 18:23:19.155808 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 18:23:19.156309 jq[1116]: false Feb 9 18:23:19.158038 systemd[1]: Starting sshd-keygen.service... Feb 9 18:23:19.164947 systemd[1]: Starting systemd-logind.service... Feb 9 18:23:19.168211 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:23:19.168286 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 18:23:19.168736 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 18:23:19.169623 systemd[1]: Starting update-engine.service... Feb 9 18:23:19.171654 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 18:23:19.174234 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 18:23:19.176011 jq[1137]: true Feb 9 18:23:19.176824 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 18:23:19.177784 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 18:23:19.181220 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Feb 9 18:23:19.181422 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 18:23:19.183907 extend-filesystems[1117]: Found vda Feb 9 18:23:19.190514 extend-filesystems[1117]: Found vda1 Feb 9 18:23:19.191272 tar[1139]: ./ Feb 9 18:23:19.191272 tar[1139]: ./loopback Feb 9 18:23:19.191709 jq[1142]: true Feb 9 18:23:19.192196 tar[1141]: linux-arm64/helm Feb 9 18:23:19.193079 extend-filesystems[1117]: Found vda2 Feb 9 18:23:19.194542 extend-filesystems[1117]: Found vda3 Feb 9 18:23:19.195436 dbus-daemon[1115]: [system] SELinux support is enabled Feb 9 18:23:19.195567 systemd[1]: Started dbus.service. Feb 9 18:23:19.197220 extend-filesystems[1117]: Found usr Feb 9 18:23:19.197928 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 18:23:19.197962 systemd[1]: Reached target system-config.target. Feb 9 18:23:19.198707 extend-filesystems[1117]: Found vda4 Feb 9 18:23:19.198792 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 18:23:19.198810 systemd[1]: Reached target user-config.target. Feb 9 18:23:19.199549 extend-filesystems[1117]: Found vda6 Feb 9 18:23:19.200593 tar[1140]: crictl Feb 9 18:23:19.201329 extend-filesystems[1117]: Found vda7 Feb 9 18:23:19.201329 extend-filesystems[1117]: Found vda9 Feb 9 18:23:19.201329 extend-filesystems[1117]: Checking size of /dev/vda9 Feb 9 18:23:19.208381 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 18:23:19.208528 systemd[1]: Finished motdgen.service. Feb 9 18:23:19.234612 extend-filesystems[1117]: Resized partition /dev/vda9 Feb 9 18:23:19.238763 extend-filesystems[1171]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 18:23:19.247771 systemd-logind[1131]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 18:23:19.248819 systemd-logind[1131]: New seat seat0. 
Feb 9 18:23:19.249721 bash[1168]: Updated "/home/core/.ssh/authorized_keys" Feb 9 18:23:19.250825 systemd[1]: Started systemd-logind.service. Feb 9 18:23:19.252536 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 18:23:19.257959 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 9 18:23:19.267024 tar[1139]: ./bandwidth Feb 9 18:23:19.275914 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 9 18:23:19.295832 extend-filesystems[1171]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 18:23:19.295832 extend-filesystems[1171]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 18:23:19.295832 extend-filesystems[1171]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 9 18:23:19.301244 extend-filesystems[1117]: Resized filesystem in /dev/vda9 Feb 9 18:23:19.297403 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 18:23:19.302681 update_engine[1133]: I0209 18:23:19.301630 1133 main.cc:92] Flatcar Update Engine starting Feb 9 18:23:19.297599 systemd[1]: Finished extend-filesystems.service. Feb 9 18:23:19.308766 systemd[1]: Started update-engine.service. Feb 9 18:23:19.312768 update_engine[1133]: I0209 18:23:19.308921 1133 update_check_scheduler.cc:74] Next update check in 10m36s Feb 9 18:23:19.311356 systemd[1]: Started locksmithd.service. Feb 9 18:23:19.361390 tar[1139]: ./ptp Feb 9 18:23:19.373276 env[1143]: time="2024-02-09T18:23:19.373220400Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 18:23:19.407921 tar[1139]: ./vlan Feb 9 18:23:19.411531 env[1143]: time="2024-02-09T18:23:19.411305040Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 18:23:19.411531 env[1143]: time="2024-02-09T18:23:19.411448520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 9 18:23:19.412811 env[1143]: time="2024-02-09T18:23:19.412687560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:23:19.412811 env[1143]: time="2024-02-09T18:23:19.412718040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:23:19.414332 env[1143]: time="2024-02-09T18:23:19.413542160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:23:19.414332 env[1143]: time="2024-02-09T18:23:19.413567520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 18:23:19.414332 env[1143]: time="2024-02-09T18:23:19.413582840Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 18:23:19.414332 env[1143]: time="2024-02-09T18:23:19.413593640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 18:23:19.414332 env[1143]: time="2024-02-09T18:23:19.413678280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:23:19.414332 env[1143]: time="2024-02-09T18:23:19.414056280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:23:19.414332 env[1143]: time="2024-02-09T18:23:19.414194800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:23:19.414332 env[1143]: time="2024-02-09T18:23:19.414213320Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 18:23:19.414332 env[1143]: time="2024-02-09T18:23:19.414286840Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 18:23:19.414332 env[1143]: time="2024-02-09T18:23:19.414300840Z" level=info msg="metadata content store policy set" policy=shared Feb 9 18:23:19.423302 env[1143]: time="2024-02-09T18:23:19.422143720Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 18:23:19.423302 env[1143]: time="2024-02-09T18:23:19.422175320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 18:23:19.423302 env[1143]: time="2024-02-09T18:23:19.422188200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 18:23:19.423302 env[1143]: time="2024-02-09T18:23:19.422221720Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 18:23:19.423302 env[1143]: time="2024-02-09T18:23:19.422235800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 18:23:19.423302 env[1143]: time="2024-02-09T18:23:19.422249040Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 18:23:19.423302 env[1143]: time="2024-02-09T18:23:19.422262360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Feb 9 18:23:19.423302 env[1143]: time="2024-02-09T18:23:19.422597920Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 18:23:19.423302 env[1143]: time="2024-02-09T18:23:19.422615480Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 18:23:19.423302 env[1143]: time="2024-02-09T18:23:19.422628840Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 18:23:19.423302 env[1143]: time="2024-02-09T18:23:19.422641800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 18:23:19.423302 env[1143]: time="2024-02-09T18:23:19.422654360Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 18:23:19.423302 env[1143]: time="2024-02-09T18:23:19.422766520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 18:23:19.423302 env[1143]: time="2024-02-09T18:23:19.422833120Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 18:23:19.423602 env[1143]: time="2024-02-09T18:23:19.423078400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 18:23:19.423602 env[1143]: time="2024-02-09T18:23:19.423103080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 18:23:19.423602 env[1143]: time="2024-02-09T18:23:19.423116040Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 18:23:19.423602 env[1143]: time="2024-02-09T18:23:19.423222960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 9 18:23:19.423602 env[1143]: time="2024-02-09T18:23:19.423236800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 18:23:19.423602 env[1143]: time="2024-02-09T18:23:19.423251640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 18:23:19.423602 env[1143]: time="2024-02-09T18:23:19.423263040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 18:23:19.423602 env[1143]: time="2024-02-09T18:23:19.423274080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 18:23:19.424745 env[1143]: time="2024-02-09T18:23:19.423759120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 18:23:19.424745 env[1143]: time="2024-02-09T18:23:19.423790840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 18:23:19.424745 env[1143]: time="2024-02-09T18:23:19.423804040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 18:23:19.424745 env[1143]: time="2024-02-09T18:23:19.423816960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 18:23:19.424745 env[1143]: time="2024-02-09T18:23:19.423971080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 18:23:19.424745 env[1143]: time="2024-02-09T18:23:19.423990600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 18:23:19.424745 env[1143]: time="2024-02-09T18:23:19.424003520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 9 18:23:19.424745 env[1143]: time="2024-02-09T18:23:19.424015200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 18:23:19.424745 env[1143]: time="2024-02-09T18:23:19.424029040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 18:23:19.424745 env[1143]: time="2024-02-09T18:23:19.424039480Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 18:23:19.424745 env[1143]: time="2024-02-09T18:23:19.424056200Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 18:23:19.424745 env[1143]: time="2024-02-09T18:23:19.424088760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 18:23:19.425053 env[1143]: time="2024-02-09T18:23:19.424279040Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 18:23:19.425053 env[1143]: time="2024-02-09T18:23:19.424329880Z" level=info msg="Connect containerd service" Feb 9 18:23:19.425053 env[1143]: time="2024-02-09T18:23:19.424358200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 18:23:19.427680 env[1143]: time="2024-02-09T18:23:19.427653200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 18:23:19.427987 env[1143]: time="2024-02-09T18:23:19.427939760Z" level=info msg="Start subscribing containerd event" Feb 9 18:23:19.428033 env[1143]: time="2024-02-09T18:23:19.427993680Z" level=info msg="Start recovering state" Feb 9 18:23:19.428075 env[1143]: 
time="2024-02-09T18:23:19.428057840Z" level=info msg="Start event monitor" Feb 9 18:23:19.428120 env[1143]: time="2024-02-09T18:23:19.428080320Z" level=info msg="Start snapshots syncer" Feb 9 18:23:19.428120 env[1143]: time="2024-02-09T18:23:19.428091240Z" level=info msg="Start cni network conf syncer for default" Feb 9 18:23:19.428120 env[1143]: time="2024-02-09T18:23:19.428100800Z" level=info msg="Start streaming server" Feb 9 18:23:19.428297 env[1143]: time="2024-02-09T18:23:19.428277280Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 18:23:19.428400 env[1143]: time="2024-02-09T18:23:19.428386960Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 18:23:19.428561 systemd[1]: Started containerd.service. Feb 9 18:23:19.429900 env[1143]: time="2024-02-09T18:23:19.429872560Z" level=info msg="containerd successfully booted in 0.057303s" Feb 9 18:23:19.451820 tar[1139]: ./host-device Feb 9 18:23:19.499766 tar[1139]: ./tuning Feb 9 18:23:19.524756 tar[1139]: ./vrf Feb 9 18:23:19.550464 tar[1139]: ./sbr Feb 9 18:23:19.578400 tar[1139]: ./tap Feb 9 18:23:19.607704 tar[1139]: ./dhcp Feb 9 18:23:19.679190 tar[1139]: ./static Feb 9 18:23:19.691469 tar[1141]: linux-arm64/LICENSE Feb 9 18:23:19.691667 tar[1141]: linux-arm64/README.md Feb 9 18:23:19.697358 systemd[1]: Finished prepare-helm.service. Feb 9 18:23:19.703964 tar[1139]: ./firewall Feb 9 18:23:19.726594 locksmithd[1174]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 18:23:19.734780 tar[1139]: ./macvlan Feb 9 18:23:19.766864 tar[1139]: ./dummy Feb 9 18:23:19.773805 systemd[1]: Finished prepare-critools.service. Feb 9 18:23:19.792042 tar[1139]: ./bridge Feb 9 18:23:19.823056 tar[1139]: ./ipvlan Feb 9 18:23:19.851514 tar[1139]: ./portmap Feb 9 18:23:19.878531 tar[1139]: ./host-local Feb 9 18:23:19.912181 systemd[1]: Finished prepare-cni-plugins.service. 
Feb 9 18:23:20.043464 sshd_keygen[1132]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 18:23:20.062075 systemd[1]: Finished sshd-keygen.service. Feb 9 18:23:20.064628 systemd[1]: Starting issuegen.service... Feb 9 18:23:20.069283 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 18:23:20.069426 systemd[1]: Finished issuegen.service. Feb 9 18:23:20.071687 systemd[1]: Starting systemd-user-sessions.service... Feb 9 18:23:20.080783 systemd[1]: Finished systemd-user-sessions.service. Feb 9 18:23:20.083073 systemd[1]: Started getty@tty1.service. Feb 9 18:23:20.085083 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 18:23:20.086123 systemd[1]: Reached target getty.target. Feb 9 18:23:20.086967 systemd[1]: Reached target multi-user.target. Feb 9 18:23:20.088951 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 18:23:20.095749 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 18:23:20.095918 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 18:23:20.096966 systemd[1]: Startup finished in 563ms (kernel) + 6.549s (initrd) + 3.975s (userspace) = 11.088s. Feb 9 18:23:20.709946 systemd-networkd[1039]: eth0: Gained IPv6LL Feb 9 18:23:22.869545 systemd[1]: Created slice system-sshd.slice. Feb 9 18:23:22.871328 systemd[1]: Started sshd@0-10.0.0.26:22-10.0.0.1:48668.service. Feb 9 18:23:22.926379 sshd[1203]: Accepted publickey for core from 10.0.0.1 port 48668 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:23:22.928231 sshd[1203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:23:22.938294 systemd[1]: Created slice user-500.slice. Feb 9 18:23:22.939554 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 18:23:22.941417 systemd-logind[1131]: New session 1 of user core. Feb 9 18:23:22.947415 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 18:23:22.948891 systemd[1]: Starting user@500.service... 
Feb 9 18:23:22.951665 (systemd)[1206]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:23:23.011080 systemd[1206]: Queued start job for default target default.target. Feb 9 18:23:23.011554 systemd[1206]: Reached target paths.target. Feb 9 18:23:23.011575 systemd[1206]: Reached target sockets.target. Feb 9 18:23:23.011586 systemd[1206]: Reached target timers.target. Feb 9 18:23:23.011596 systemd[1206]: Reached target basic.target. Feb 9 18:23:23.011646 systemd[1206]: Reached target default.target. Feb 9 18:23:23.011671 systemd[1206]: Startup finished in 54ms. Feb 9 18:23:23.011712 systemd[1]: Started user@500.service. Feb 9 18:23:23.012669 systemd[1]: Started session-1.scope. Feb 9 18:23:23.065961 systemd[1]: Started sshd@1-10.0.0.26:22-10.0.0.1:48676.service. Feb 9 18:23:23.117468 sshd[1215]: Accepted publickey for core from 10.0.0.1 port 48676 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:23:23.119024 sshd[1215]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:23:23.122482 systemd-logind[1131]: New session 2 of user core. Feb 9 18:23:23.123307 systemd[1]: Started session-2.scope. Feb 9 18:23:23.185122 sshd[1215]: pam_unix(sshd:session): session closed for user core Feb 9 18:23:23.196943 systemd[1]: Started sshd@2-10.0.0.26:22-10.0.0.1:48682.service. Feb 9 18:23:23.197425 systemd[1]: sshd@1-10.0.0.26:22-10.0.0.1:48676.service: Deactivated successfully. Feb 9 18:23:23.198127 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 18:23:23.198647 systemd-logind[1131]: Session 2 logged out. Waiting for processes to exit. Feb 9 18:23:23.199491 systemd-logind[1131]: Removed session 2. 
Feb 9 18:23:23.242919 sshd[1220]: Accepted publickey for core from 10.0.0.1 port 48682 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:23:23.244336 sshd[1220]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:23:23.248231 systemd[1]: Started session-3.scope. Feb 9 18:23:23.248781 systemd-logind[1131]: New session 3 of user core. Feb 9 18:23:23.302225 sshd[1220]: pam_unix(sshd:session): session closed for user core Feb 9 18:23:23.304959 systemd[1]: Started sshd@3-10.0.0.26:22-10.0.0.1:48688.service. Feb 9 18:23:23.306386 systemd[1]: sshd@2-10.0.0.26:22-10.0.0.1:48682.service: Deactivated successfully. Feb 9 18:23:23.307068 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 18:23:23.307609 systemd-logind[1131]: Session 3 logged out. Waiting for processes to exit. Feb 9 18:23:23.308600 systemd-logind[1131]: Removed session 3. Feb 9 18:23:23.350464 sshd[1227]: Accepted publickey for core from 10.0.0.1 port 48688 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:23:23.351765 sshd[1227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:23:23.355178 systemd-logind[1131]: New session 4 of user core. Feb 9 18:23:23.355926 systemd[1]: Started session-4.scope. Feb 9 18:23:23.411666 sshd[1227]: pam_unix(sshd:session): session closed for user core Feb 9 18:23:23.416256 systemd[1]: sshd@3-10.0.0.26:22-10.0.0.1:48688.service: Deactivated successfully. Feb 9 18:23:23.417470 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 18:23:23.419644 systemd-logind[1131]: Session 4 logged out. Waiting for processes to exit. Feb 9 18:23:23.420172 systemd[1]: Started sshd@4-10.0.0.26:22-10.0.0.1:48702.service. Feb 9 18:23:23.422569 systemd-logind[1131]: Removed session 4. 
Feb 9 18:23:23.470284 sshd[1234]: Accepted publickey for core from 10.0.0.1 port 48702 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:23:23.471372 sshd[1234]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:23:23.475935 systemd[1]: Started session-5.scope. Feb 9 18:23:23.476370 systemd-logind[1131]: New session 5 of user core. Feb 9 18:23:23.552690 sudo[1237]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 18:23:23.552921 sudo[1237]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:23:24.176386 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 18:23:24.183979 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 18:23:24.185059 systemd[1]: Reached target network-online.target. Feb 9 18:23:24.186268 systemd[1]: Starting docker.service... Feb 9 18:23:24.275447 env[1255]: time="2024-02-09T18:23:24.275370882Z" level=info msg="Starting up" Feb 9 18:23:24.276861 env[1255]: time="2024-02-09T18:23:24.276838779Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 18:23:24.276925 env[1255]: time="2024-02-09T18:23:24.276870052Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 18:23:24.276925 env[1255]: time="2024-02-09T18:23:24.276888529Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 18:23:24.276925 env[1255]: time="2024-02-09T18:23:24.276900668Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 18:23:24.278734 env[1255]: time="2024-02-09T18:23:24.278702412Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 18:23:24.278734 env[1255]: time="2024-02-09T18:23:24.278730929Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 18:23:24.278806 env[1255]: time="2024-02-09T18:23:24.278744838Z" level=info 
msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 18:23:24.278806 env[1255]: time="2024-02-09T18:23:24.278753891Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 18:23:24.522983 env[1255]: time="2024-02-09T18:23:24.522578286Z" level=info msg="Loading containers: start." Feb 9 18:23:24.613887 kernel: Initializing XFRM netlink socket Feb 9 18:23:24.635509 env[1255]: time="2024-02-09T18:23:24.635472050Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 18:23:24.691336 systemd-networkd[1039]: docker0: Link UP Feb 9 18:23:24.700016 env[1255]: time="2024-02-09T18:23:24.699983019Z" level=info msg="Loading containers: done." Feb 9 18:23:24.719833 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2922935560-merged.mount: Deactivated successfully. Feb 9 18:23:24.725248 env[1255]: time="2024-02-09T18:23:24.725199047Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 18:23:24.725400 env[1255]: time="2024-02-09T18:23:24.725386361Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 18:23:24.725506 env[1255]: time="2024-02-09T18:23:24.725475573Z" level=info msg="Daemon has completed initialization" Feb 9 18:23:24.748090 systemd[1]: Started docker.service. Feb 9 18:23:24.755269 env[1255]: time="2024-02-09T18:23:24.755226228Z" level=info msg="API listen on /run/docker.sock" Feb 9 18:23:24.772098 systemd[1]: Reloading. 
Feb 9 18:23:24.815433 /usr/lib/systemd/system-generators/torcx-generator[1398]: time="2024-02-09T18:23:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:23:24.816107 /usr/lib/systemd/system-generators/torcx-generator[1398]: time="2024-02-09T18:23:24Z" level=info msg="torcx already run" Feb 9 18:23:24.876002 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:23:24.876021 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:23:24.893611 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:23:24.962965 systemd[1]: Started kubelet.service. Feb 9 18:23:25.096383 kubelet[1434]: E0209 18:23:25.096266 1434 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 18:23:25.098793 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:23:25.098943 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:23:25.373996 env[1143]: time="2024-02-09T18:23:25.373880961Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\"" Feb 9 18:23:26.056705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3743910767.mount: Deactivated successfully. 
Feb 9 18:23:27.804915 env[1143]: time="2024-02-09T18:23:27.804865675Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:23:27.806463 env[1143]: time="2024-02-09T18:23:27.806429052Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:68142d88471bf00b1317307442bd31edbbc7532061d623e85659df2d417308fb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:23:27.808688 env[1143]: time="2024-02-09T18:23:27.808655150Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:23:27.810584 env[1143]: time="2024-02-09T18:23:27.810554760Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:23:27.811353 env[1143]: time="2024-02-09T18:23:27.811313576Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:68142d88471bf00b1317307442bd31edbbc7532061d623e85659df2d417308fb\"" Feb 9 18:23:27.820384 env[1143]: time="2024-02-09T18:23:27.820352063Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\"" Feb 9 18:23:29.878409 env[1143]: time="2024-02-09T18:23:29.878353076Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:23:29.880251 env[1143]: time="2024-02-09T18:23:29.880217688Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8dbd4fd1241644100b94eb40a9d284c5cf08fa7f2d15cafdf1ca8cec8443b31f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Feb 9 18:23:29.882320 env[1143]: time="2024-02-09T18:23:29.882280987Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:29.884282 env[1143]: time="2024-02-09T18:23:29.884246139Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:29.884947 env[1143]: time="2024-02-09T18:23:29.884917210Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:8dbd4fd1241644100b94eb40a9d284c5cf08fa7f2d15cafdf1ca8cec8443b31f\""
Feb 9 18:23:29.895487 env[1143]: time="2024-02-09T18:23:29.895456581Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\""
Feb 9 18:23:31.107410 env[1143]: time="2024-02-09T18:23:31.107363683Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:31.110151 env[1143]: time="2024-02-09T18:23:31.110103020Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:541cddf10a6c9bb71f141eeefea4203714984b67ec3582fb4538058af9e43663,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:31.112162 env[1143]: time="2024-02-09T18:23:31.112134657Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:31.114447 env[1143]: time="2024-02-09T18:23:31.114410824Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:31.115078 env[1143]: time="2024-02-09T18:23:31.115044052Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:541cddf10a6c9bb71f141eeefea4203714984b67ec3582fb4538058af9e43663\""
Feb 9 18:23:31.127285 env[1143]: time="2024-02-09T18:23:31.127235697Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\""
Feb 9 18:23:32.214536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2356791534.mount: Deactivated successfully.
Feb 9 18:23:32.724643 env[1143]: time="2024-02-09T18:23:32.724588994Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:32.726173 env[1143]: time="2024-02-09T18:23:32.726141135Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d8b4f784b5f439fa536676861ad1144130a981e5ac011d08829ed921477ec74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:32.728007 env[1143]: time="2024-02-09T18:23:32.727972931Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:32.729766 env[1143]: time="2024-02-09T18:23:32.729737873Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:32.730156 env[1143]: time="2024-02-09T18:23:32.730123121Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:2d8b4f784b5f439fa536676861ad1144130a981e5ac011d08829ed921477ec74\""
Feb 9 18:23:32.739753 env[1143]: time="2024-02-09T18:23:32.739723781Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 9 18:23:33.238450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3240932171.mount: Deactivated successfully.
Feb 9 18:23:33.241452 env[1143]: time="2024-02-09T18:23:33.241415921Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:33.246099 env[1143]: time="2024-02-09T18:23:33.246066852Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:33.248212 env[1143]: time="2024-02-09T18:23:33.248185163Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:33.250171 env[1143]: time="2024-02-09T18:23:33.250135595Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:33.250772 env[1143]: time="2024-02-09T18:23:33.250744775Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 9 18:23:33.268163 env[1143]: time="2024-02-09T18:23:33.268122354Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\""
Feb 9 18:23:33.927437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3895691684.mount: Deactivated successfully.
Feb 9 18:23:35.350193 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 9 18:23:35.350369 systemd[1]: Stopped kubelet.service.
Feb 9 18:23:35.352287 systemd[1]: Started kubelet.service.
Feb 9 18:23:35.397004 kubelet[1489]: E0209 18:23:35.396940 1489 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 9 18:23:35.400959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 18:23:35.401081 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 18:23:37.320162 env[1143]: time="2024-02-09T18:23:37.320108694Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:37.322047 env[1143]: time="2024-02-09T18:23:37.322015465Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:37.324036 env[1143]: time="2024-02-09T18:23:37.324004370Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:37.324940 env[1143]: time="2024-02-09T18:23:37.324914156Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:37.325921 env[1143]: time="2024-02-09T18:23:37.325893412Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace\""
Feb 9 18:23:37.335010 env[1143]: time="2024-02-09T18:23:37.334978003Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Feb 9 18:23:37.914199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount220341482.mount: Deactivated successfully.
Feb 9 18:23:38.582972 env[1143]: time="2024-02-09T18:23:38.582920565Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:38.584321 env[1143]: time="2024-02-09T18:23:38.584289688Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:38.586321 env[1143]: time="2024-02-09T18:23:38.586273643Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:38.588218 env[1143]: time="2024-02-09T18:23:38.588186405Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:38.588876 env[1143]: time="2024-02-09T18:23:38.588828278Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\""
Feb 9 18:23:45.032665 systemd[1]: Stopped kubelet.service.
Feb 9 18:23:45.047417 systemd[1]: Reloading.
Feb 9 18:23:45.093943 /usr/lib/systemd/system-generators/torcx-generator[1604]: time="2024-02-09T18:23:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 18:23:45.094293 /usr/lib/systemd/system-generators/torcx-generator[1604]: time="2024-02-09T18:23:45Z" level=info msg="torcx already run"
Feb 9 18:23:45.153096 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 18:23:45.153286 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 18:23:45.170975 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 18:23:45.237934 systemd[1]: Started kubelet.service.
Feb 9 18:23:45.279807 kubelet[1642]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 18:23:45.279807 kubelet[1642]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 9 18:23:45.279807 kubelet[1642]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 18:23:45.280184 kubelet[1642]: I0209 18:23:45.279862 1642 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 18:23:47.195181 kubelet[1642]: I0209 18:23:47.195147 1642 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Feb 9 18:23:47.195531 kubelet[1642]: I0209 18:23:47.195517 1642 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 18:23:47.195801 kubelet[1642]: I0209 18:23:47.195783 1642 server.go:895] "Client rotation is on, will bootstrap in background"
Feb 9 18:23:47.203160 kubelet[1642]: I0209 18:23:47.203003 1642 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 18:23:47.203682 kubelet[1642]: E0209 18:23:47.203579 1642 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.26:6443: connect: connection refused
Feb 9 18:23:47.208310 kubelet[1642]: W0209 18:23:47.208286 1642 machine.go:65] Cannot read vendor id correctly, set empty.
Feb 9 18:23:47.208983 kubelet[1642]: I0209 18:23:47.208959 1642 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 18:23:47.209193 kubelet[1642]: I0209 18:23:47.209171 1642 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 18:23:47.209338 kubelet[1642]: I0209 18:23:47.209318 1642 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 9 18:23:47.209338 kubelet[1642]: I0209 18:23:47.209340 1642 topology_manager.go:138] "Creating topology manager with none policy"
Feb 9 18:23:47.209452 kubelet[1642]: I0209 18:23:47.209349 1642 container_manager_linux.go:301] "Creating device plugin manager"
Feb 9 18:23:47.209452 kubelet[1642]: I0209 18:23:47.209435 1642 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 18:23:47.209912 kubelet[1642]: I0209 18:23:47.209900 1642 kubelet.go:393] "Attempting to sync node with API server"
Feb 9 18:23:47.209950 kubelet[1642]: I0209 18:23:47.209920 1642 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 18:23:47.209950 kubelet[1642]: I0209 18:23:47.209938 1642 kubelet.go:309] "Adding apiserver pod source"
Feb 9 18:23:47.209950 kubelet[1642]: I0209 18:23:47.209951 1642 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 18:23:47.210402 kubelet[1642]: W0209 18:23:47.210282 1642 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Feb 9 18:23:47.210402 kubelet[1642]: E0209 18:23:47.210332 1642 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Feb 9 18:23:47.211165 kubelet[1642]: W0209 18:23:47.211122 1642 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Feb 9 18:23:47.211241 kubelet[1642]: E0209 18:23:47.211175 1642 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Feb 9 18:23:47.211923 kubelet[1642]: I0209 18:23:47.211896 1642 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 18:23:47.212757 kubelet[1642]: W0209 18:23:47.212729 1642 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 9 18:23:47.214495 kubelet[1642]: I0209 18:23:47.214476 1642 server.go:1232] "Started kubelet"
Feb 9 18:23:47.215384 kubelet[1642]: I0209 18:23:47.215354 1642 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 18:23:47.215457 kubelet[1642]: I0209 18:23:47.215424 1642 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 9 18:23:47.216466 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 9 18:23:47.216535 kubelet[1642]: I0209 18:23:47.215754 1642 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 9 18:23:47.216535 kubelet[1642]: I0209 18:23:47.215971 1642 server.go:462] "Adding debug handlers to kubelet server"
Feb 9 18:23:47.216742 kubelet[1642]: I0209 18:23:47.216725 1642 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 18:23:47.217180 kubelet[1642]: I0209 18:23:47.217158 1642 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 9 18:23:47.217366 kubelet[1642]: I0209 18:23:47.217353 1642 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 18:23:47.217471 kubelet[1642]: I0209 18:23:47.217461 1642 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 9 18:23:47.217829 kubelet[1642]: E0209 18:23:47.217745 1642 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b244f4de0577cb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 23, 47, 214448587, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 23, 47, 214448587, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.26:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.26:6443: connect: connection refused'(may retry after sleeping)
Feb 9 18:23:47.218173 kubelet[1642]: E0209 18:23:47.218148 1642 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 18:23:47.218276 kubelet[1642]: E0209 18:23:47.218262 1642 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 18:23:47.218339 kubelet[1642]: E0209 18:23:47.218270 1642 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="200ms"
Feb 9 18:23:47.218440 kubelet[1642]: W0209 18:23:47.218155 1642 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Feb 9 18:23:47.218605 kubelet[1642]: E0209 18:23:47.218590 1642 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Feb 9 18:23:47.228924 kubelet[1642]: I0209 18:23:47.228896 1642 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 9 18:23:47.229906 kubelet[1642]: I0209 18:23:47.229880 1642 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 9 18:23:47.229906 kubelet[1642]: I0209 18:23:47.229906 1642 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 9 18:23:47.229990 kubelet[1642]: I0209 18:23:47.229922 1642 kubelet.go:2303] "Starting kubelet main sync loop"
Feb 9 18:23:47.229990 kubelet[1642]: E0209 18:23:47.229972 1642 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 9 18:23:47.236045 kubelet[1642]: W0209 18:23:47.235997 1642 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Feb 9 18:23:47.236045 kubelet[1642]: E0209 18:23:47.236051 1642 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Feb 9 18:23:47.236933 kubelet[1642]: I0209 18:23:47.236916 1642 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 18:23:47.236933 kubelet[1642]: I0209 18:23:47.236931 1642 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 18:23:47.237040 kubelet[1642]: I0209 18:23:47.236949 1642 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 18:23:47.238803 kubelet[1642]: I0209 18:23:47.238782 1642 policy_none.go:49] "None policy: Start"
Feb 9 18:23:47.239339 kubelet[1642]: I0209 18:23:47.239324 1642 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 18:23:47.239417 kubelet[1642]: I0209 18:23:47.239347 1642 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 18:23:47.243763 systemd[1]: Created slice kubepods.slice.
Feb 9 18:23:47.248102 systemd[1]: Created slice kubepods-burstable.slice.
Feb 9 18:23:47.250592 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 9 18:23:47.261475 kubelet[1642]: I0209 18:23:47.261443 1642 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 18:23:47.261695 kubelet[1642]: I0209 18:23:47.261671 1642 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 18:23:47.262312 kubelet[1642]: E0209 18:23:47.262263 1642 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Feb 9 18:23:47.318764 kubelet[1642]: I0209 18:23:47.318729 1642 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 9 18:23:47.319097 kubelet[1642]: E0209 18:23:47.319066 1642 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost"
Feb 9 18:23:47.330305 kubelet[1642]: I0209 18:23:47.330261 1642 topology_manager.go:215] "Topology Admit Handler" podUID="212dcc5e2f08bec92c239ac5786b7e2b" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Feb 9 18:23:47.331430 kubelet[1642]: I0209 18:23:47.331388 1642 topology_manager.go:215] "Topology Admit Handler" podUID="d0325d16aab19669b5fea4b6623890e6" podNamespace="kube-system" podName="kube-scheduler-localhost"
Feb 9 18:23:47.332195 kubelet[1642]: I0209 18:23:47.332162 1642 topology_manager.go:215] "Topology Admit Handler" podUID="ac8f1bb9111e7248131fe5b354e3f799" podNamespace="kube-system" podName="kube-apiserver-localhost"
Feb 9 18:23:47.336718 systemd[1]: Created slice kubepods-burstable-pod212dcc5e2f08bec92c239ac5786b7e2b.slice.
Feb 9 18:23:47.360468 systemd[1]: Created slice kubepods-burstable-podd0325d16aab19669b5fea4b6623890e6.slice.
Feb 9 18:23:47.379728 systemd[1]: Created slice kubepods-burstable-podac8f1bb9111e7248131fe5b354e3f799.slice.
Feb 9 18:23:47.419635 kubelet[1642]: E0209 18:23:47.419604 1642 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="400ms"
Feb 9 18:23:47.519047 kubelet[1642]: I0209 18:23:47.519016 1642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac8f1bb9111e7248131fe5b354e3f799-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ac8f1bb9111e7248131fe5b354e3f799\") " pod="kube-system/kube-apiserver-localhost"
Feb 9 18:23:47.519179 kubelet[1642]: I0209 18:23:47.519061 1642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb 9 18:23:47.519179 kubelet[1642]: I0209 18:23:47.519082 1642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb 9 18:23:47.519179 kubelet[1642]: I0209 18:23:47.519101 1642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0325d16aab19669b5fea4b6623890e6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d0325d16aab19669b5fea4b6623890e6\") " pod="kube-system/kube-scheduler-localhost"
Feb 9 18:23:47.519179 kubelet[1642]: I0209 18:23:47.519119 1642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac8f1bb9111e7248131fe5b354e3f799-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ac8f1bb9111e7248131fe5b354e3f799\") " pod="kube-system/kube-apiserver-localhost"
Feb 9 18:23:47.519179 kubelet[1642]: I0209 18:23:47.519139 1642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac8f1bb9111e7248131fe5b354e3f799-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ac8f1bb9111e7248131fe5b354e3f799\") " pod="kube-system/kube-apiserver-localhost"
Feb 9 18:23:47.519289 kubelet[1642]: I0209 18:23:47.519186 1642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb 9 18:23:47.519289 kubelet[1642]: I0209 18:23:47.519227 1642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb 9 18:23:47.519289 kubelet[1642]: I0209 18:23:47.519249 1642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb 9 18:23:47.519444 kubelet[1642]: E0209 18:23:47.519351 1642 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b244f4de0577cb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 23, 47, 214448587, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 23, 47, 214448587, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.26:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.26:6443: connect: connection refused'(may retry after sleeping)
Feb 9 18:23:47.520082 kubelet[1642]: I0209 18:23:47.520065 1642 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 9 18:23:47.520360 kubelet[1642]: E0209 18:23:47.520344 1642 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost"
Feb 9 18:23:47.659102 kubelet[1642]: E0209 18:23:47.659065 1642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:23:47.659792 env[1143]: time="2024-02-09T18:23:47.659754362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:212dcc5e2f08bec92c239ac5786b7e2b,Namespace:kube-system,Attempt:0,}"
Feb 9 18:23:47.678405 kubelet[1642]: E0209 18:23:47.678383 1642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:23:47.678909 env[1143]: time="2024-02-09T18:23:47.678872889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d0325d16aab19669b5fea4b6623890e6,Namespace:kube-system,Attempt:0,}"
Feb 9 18:23:47.681445 kubelet[1642]: E0209 18:23:47.681423 1642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:23:47.681802 env[1143]: time="2024-02-09T18:23:47.681764057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ac8f1bb9111e7248131fe5b354e3f799,Namespace:kube-system,Attempt:0,}"
Feb 9 18:23:47.821032 kubelet[1642]: E0209 18:23:47.820946 1642 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="800ms"
Feb 9 18:23:47.921343 kubelet[1642]: I0209 18:23:47.921302 1642 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 9 18:23:47.921676 kubelet[1642]: E0209 18:23:47.921644 1642 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost"
Feb 9 18:23:48.116671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount67282741.mount: Deactivated successfully.
Feb 9 18:23:48.120034 env[1143]: time="2024-02-09T18:23:48.119978012Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:48.122395 env[1143]: time="2024-02-09T18:23:48.122362669Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:48.124296 env[1143]: time="2024-02-09T18:23:48.124233769Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:48.125556 env[1143]: time="2024-02-09T18:23:48.125522550Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:48.127012 env[1143]: time="2024-02-09T18:23:48.126985334Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:48.130210 env[1143]: time="2024-02-09T18:23:48.130182058Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:48.133361 env[1143]: time="2024-02-09T18:23:48.133331606Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:48.135672 env[1143]: time="2024-02-09T18:23:48.135643579Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:48.136512 env[1143]: time="2024-02-09T18:23:48.136486681Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:48.137311 env[1143]: time="2024-02-09T18:23:48.137277242Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:48.138164 env[1143]: time="2024-02-09T18:23:48.138138245Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:48.139211 env[1143]: time="2024-02-09T18:23:48.139173931Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:23:48.169287 env[1143]: time="2024-02-09T18:23:48.169211559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:23:48.169287 env[1143]: time="2024-02-09T18:23:48.169251566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:23:48.169287 env[1143]: time="2024-02-09T18:23:48.169262579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:23:48.169746 env[1143]: time="2024-02-09T18:23:48.169701130Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4dca084462e5c894cc30eb2f75079a2bcda949324f80a95efafaec8a67ddbcf9 pid=1697 runtime=io.containerd.runc.v2
Feb 9 18:23:48.170847 env[1143]: time="2024-02-09T18:23:48.170775982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:23:48.170915 env[1143]: time="2024-02-09T18:23:48.170812384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:23:48.170915 env[1143]: time="2024-02-09T18:23:48.170868529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:23:48.171001 env[1143]: time="2024-02-09T18:23:48.170952667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:23:48.171046 env[1143]: time="2024-02-09T18:23:48.171017583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:23:48.171091 env[1143]: time="2024-02-09T18:23:48.171040930Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee06db13964da32e2c609e98987997a69c9895666d4a4f89f81bb9cb270fcf49 pid=1696 runtime=io.containerd.runc.v2
Feb 9 18:23:48.171118 env[1143]: time="2024-02-09T18:23:48.171052103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:23:48.171715 env[1143]: time="2024-02-09T18:23:48.171663135Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad15bc831cf625d65a31101026441340f3edbaf43be062a82c03bb8d91bd0da7 pid=1707 runtime=io.containerd.runc.v2 Feb 9 18:23:48.186702 systemd[1]: Started cri-containerd-4dca084462e5c894cc30eb2f75079a2bcda949324f80a95efafaec8a67ddbcf9.scope. Feb 9 18:23:48.187640 systemd[1]: Started cri-containerd-ad15bc831cf625d65a31101026441340f3edbaf43be062a82c03bb8d91bd0da7.scope. Feb 9 18:23:48.188598 systemd[1]: Started cri-containerd-ee06db13964da32e2c609e98987997a69c9895666d4a4f89f81bb9cb270fcf49.scope. Feb 9 18:23:48.259044 env[1143]: time="2024-02-09T18:23:48.258986289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ac8f1bb9111e7248131fe5b354e3f799,Namespace:kube-system,Attempt:0,} returns sandbox id \"4dca084462e5c894cc30eb2f75079a2bcda949324f80a95efafaec8a67ddbcf9\"" Feb 9 18:23:48.259670 env[1143]: time="2024-02-09T18:23:48.259636166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d0325d16aab19669b5fea4b6623890e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee06db13964da32e2c609e98987997a69c9895666d4a4f89f81bb9cb270fcf49\"" Feb 9 18:23:48.260200 kubelet[1642]: E0209 18:23:48.260139 1642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:23:48.260649 kubelet[1642]: E0209 18:23:48.260628 1642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:23:48.263004 env[1143]: time="2024-02-09T18:23:48.262960439Z" level=info msg="CreateContainer within sandbox 
\"ee06db13964da32e2c609e98987997a69c9895666d4a4f89f81bb9cb270fcf49\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 18:23:48.263592 env[1143]: time="2024-02-09T18:23:48.263560417Z" level=info msg="CreateContainer within sandbox \"4dca084462e5c894cc30eb2f75079a2bcda949324f80a95efafaec8a67ddbcf9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 18:23:48.275775 env[1143]: time="2024-02-09T18:23:48.275726709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:212dcc5e2f08bec92c239ac5786b7e2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad15bc831cf625d65a31101026441340f3edbaf43be062a82c03bb8d91bd0da7\"" Feb 9 18:23:48.276346 kubelet[1642]: E0209 18:23:48.276328 1642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:23:48.278205 env[1143]: time="2024-02-09T18:23:48.278169915Z" level=info msg="CreateContainer within sandbox \"4dca084462e5c894cc30eb2f75079a2bcda949324f80a95efafaec8a67ddbcf9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"25226abb7e60c6f19720d695685e8ef90088b8d022fe17affd873d5031a5e567\"" Feb 9 18:23:48.278771 env[1143]: time="2024-02-09T18:23:48.278746186Z" level=info msg="CreateContainer within sandbox \"ad15bc831cf625d65a31101026441340f3edbaf43be062a82c03bb8d91bd0da7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 18:23:48.278822 env[1143]: time="2024-02-09T18:23:48.278776181Z" level=info msg="StartContainer for \"25226abb7e60c6f19720d695685e8ef90088b8d022fe17affd873d5031a5e567\"" Feb 9 18:23:48.278908 env[1143]: time="2024-02-09T18:23:48.278814105Z" level=info msg="CreateContainer within sandbox \"ee06db13964da32e2c609e98987997a69c9895666d4a4f89f81bb9cb270fcf49\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"4d1474d94706351224ef950a42894c4e2931c5a1683ed1a7968dc4ed72b82996\"" Feb 9 18:23:48.279219 env[1143]: time="2024-02-09T18:23:48.279184336Z" level=info msg="StartContainer for \"4d1474d94706351224ef950a42894c4e2931c5a1683ed1a7968dc4ed72b82996\"" Feb 9 18:23:48.291180 env[1143]: time="2024-02-09T18:23:48.291117356Z" level=info msg="CreateContainer within sandbox \"ad15bc831cf625d65a31101026441340f3edbaf43be062a82c03bb8d91bd0da7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"abbb4557638b47255e2cdc52560c2f1d0cfd41400e0ed370c57998873a0485ee\"" Feb 9 18:23:48.291705 env[1143]: time="2024-02-09T18:23:48.291668758Z" level=info msg="StartContainer for \"abbb4557638b47255e2cdc52560c2f1d0cfd41400e0ed370c57998873a0485ee\"" Feb 9 18:23:48.298822 systemd[1]: Started cri-containerd-25226abb7e60c6f19720d695685e8ef90088b8d022fe17affd873d5031a5e567.scope. Feb 9 18:23:48.305427 systemd[1]: Started cri-containerd-4d1474d94706351224ef950a42894c4e2931c5a1683ed1a7968dc4ed72b82996.scope. Feb 9 18:23:48.316586 systemd[1]: Started cri-containerd-abbb4557638b47255e2cdc52560c2f1d0cfd41400e0ed370c57998873a0485ee.scope. 
Feb 9 18:23:48.359318 env[1143]: time="2024-02-09T18:23:48.359268098Z" level=info msg="StartContainer for \"4d1474d94706351224ef950a42894c4e2931c5a1683ed1a7968dc4ed72b82996\" returns successfully" Feb 9 18:23:48.375744 env[1143]: time="2024-02-09T18:23:48.373748045Z" level=info msg="StartContainer for \"25226abb7e60c6f19720d695685e8ef90088b8d022fe17affd873d5031a5e567\" returns successfully" Feb 9 18:23:48.395959 env[1143]: time="2024-02-09T18:23:48.395804336Z" level=info msg="StartContainer for \"abbb4557638b47255e2cdc52560c2f1d0cfd41400e0ed370c57998873a0485ee\" returns successfully" Feb 9 18:23:48.497697 kubelet[1642]: W0209 18:23:48.497600 1642 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Feb 9 18:23:48.497697 kubelet[1642]: E0209 18:23:48.497681 1642 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Feb 9 18:23:48.560393 kubelet[1642]: W0209 18:23:48.560323 1642 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Feb 9 18:23:48.560393 kubelet[1642]: E0209 18:23:48.560386 1642 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Feb 9 18:23:48.723196 kubelet[1642]: I0209 18:23:48.723082 1642 kubelet_node_status.go:70] "Attempting to 
register node" node="localhost" Feb 9 18:23:49.242388 kubelet[1642]: E0209 18:23:49.242340 1642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:23:49.243849 kubelet[1642]: E0209 18:23:49.243811 1642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:23:49.245706 kubelet[1642]: E0209 18:23:49.245681 1642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:23:49.935704 kubelet[1642]: E0209 18:23:49.935661 1642 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 9 18:23:50.022019 kubelet[1642]: I0209 18:23:50.021972 1642 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 18:23:50.213996 kubelet[1642]: I0209 18:23:50.213870 1642 apiserver.go:52] "Watching apiserver" Feb 9 18:23:50.218159 kubelet[1642]: I0209 18:23:50.218128 1642 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:23:50.251138 kubelet[1642]: E0209 18:23:50.251112 1642 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 9 18:23:50.251728 kubelet[1642]: E0209 18:23:50.251715 1642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:23:50.252089 kubelet[1642]: E0209 18:23:50.252067 1642 kubelet.go:1890] "Failed creating a mirror pod for" err="pods 
\"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Feb 9 18:23:50.252610 kubelet[1642]: E0209 18:23:50.252582 1642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:23:50.839900 kubelet[1642]: E0209 18:23:50.839868 1642 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Feb 9 18:23:50.840418 kubelet[1642]: E0209 18:23:50.840402 1642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:23:52.857489 systemd[1]: Reloading. Feb 9 18:23:52.922763 /usr/lib/systemd/system-generators/torcx-generator[1939]: time="2024-02-09T18:23:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:23:52.922791 /usr/lib/systemd/system-generators/torcx-generator[1939]: time="2024-02-09T18:23:52Z" level=info msg="torcx already run" Feb 9 18:23:52.979433 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:23:52.979451 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 9 18:23:52.996496 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:23:53.071079 systemd[1]: Stopping kubelet.service... Feb 9 18:23:53.092254 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 18:23:53.092475 systemd[1]: Stopped kubelet.service. Feb 9 18:23:53.092524 systemd[1]: kubelet.service: Consumed 2.234s CPU time. Feb 9 18:23:53.094261 systemd[1]: Started kubelet.service. Feb 9 18:23:53.153686 kubelet[1976]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:23:53.153686 kubelet[1976]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 18:23:53.153686 kubelet[1976]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 18:23:53.153686 kubelet[1976]: I0209 18:23:53.153650 1976 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:23:53.157948 kubelet[1976]: I0209 18:23:53.157923 1976 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 18:23:53.158082 kubelet[1976]: I0209 18:23:53.158070 1976 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:23:53.158341 kubelet[1976]: I0209 18:23:53.158323 1976 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 18:23:53.159855 kubelet[1976]: I0209 18:23:53.159820 1976 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 18:23:53.160945 kubelet[1976]: I0209 18:23:53.160927 1976 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:23:53.166541 kubelet[1976]: W0209 18:23:53.166508 1976 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 18:23:53.167275 kubelet[1976]: I0209 18:23:53.167261 1976 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 18:23:53.167495 kubelet[1976]: I0209 18:23:53.167481 1976 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:23:53.167643 kubelet[1976]: I0209 18:23:53.167628 1976 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 18:23:53.167711 kubelet[1976]: I0209 18:23:53.167650 1976 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 18:23:53.167711 kubelet[1976]: I0209 18:23:53.167659 1976 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 18:23:53.167711 kubelet[1976]: I0209 
18:23:53.167688 1976 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:23:53.167791 kubelet[1976]: I0209 18:23:53.167768 1976 kubelet.go:393] "Attempting to sync node with API server" Feb 9 18:23:53.167791 kubelet[1976]: I0209 18:23:53.167781 1976 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:23:53.167848 kubelet[1976]: I0209 18:23:53.167803 1976 kubelet.go:309] "Adding apiserver pod source" Feb 9 18:23:53.167848 kubelet[1976]: I0209 18:23:53.167820 1976 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:23:53.168399 kubelet[1976]: I0209 18:23:53.168381 1976 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:23:53.169109 kubelet[1976]: I0209 18:23:53.169088 1976 server.go:1232] "Started kubelet" Feb 9 18:23:53.174618 kubelet[1976]: I0209 18:23:53.174595 1976 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:23:53.174807 kubelet[1976]: E0209 18:23:53.174774 1976 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:23:53.174807 kubelet[1976]: E0209 18:23:53.174805 1976 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:23:53.176742 kubelet[1976]: I0209 18:23:53.176698 1976 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:23:53.177357 kubelet[1976]: I0209 18:23:53.177328 1976 server.go:462] "Adding debug handlers to kubelet server" Feb 9 18:23:53.178312 kubelet[1976]: I0209 18:23:53.178278 1976 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 18:23:53.178468 kubelet[1976]: I0209 18:23:53.178449 1976 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 18:23:53.184743 kubelet[1976]: I0209 18:23:53.181947 1976 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 18:23:53.184743 kubelet[1976]: I0209 18:23:53.182078 1976 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:23:53.184743 kubelet[1976]: I0209 18:23:53.182213 1976 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 18:23:53.198226 kubelet[1976]: I0209 18:23:53.198200 1976 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 18:23:53.199266 kubelet[1976]: I0209 18:23:53.199248 1976 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 9 18:23:53.199378 kubelet[1976]: I0209 18:23:53.199365 1976 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 18:23:53.199469 kubelet[1976]: I0209 18:23:53.199457 1976 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 18:23:53.199573 kubelet[1976]: E0209 18:23:53.199563 1976 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 18:23:53.217156 sudo[2006]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 18:23:53.217355 sudo[2006]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 18:23:53.246390 kubelet[1976]: I0209 18:23:53.246360 1976 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:23:53.246390 kubelet[1976]: I0209 18:23:53.246384 1976 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:23:53.246578 kubelet[1976]: I0209 18:23:53.246404 1976 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:23:53.246578 kubelet[1976]: I0209 18:23:53.246555 1976 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 18:23:53.246578 kubelet[1976]: I0209 18:23:53.246578 1976 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 9 18:23:53.246650 kubelet[1976]: I0209 18:23:53.246584 1976 policy_none.go:49] "None policy: Start" Feb 9 18:23:53.247232 kubelet[1976]: I0209 18:23:53.247200 1976 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:23:53.247301 kubelet[1976]: I0209 18:23:53.247240 1976 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:23:53.247411 kubelet[1976]: I0209 18:23:53.247395 1976 state_mem.go:75] "Updated machine memory state" Feb 9 18:23:53.251106 kubelet[1976]: I0209 18:23:53.251084 1976 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 
18:23:53.251299 kubelet[1976]: I0209 18:23:53.251284 1976 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:23:53.285171 kubelet[1976]: I0209 18:23:53.285144 1976 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:23:53.292697 kubelet[1976]: I0209 18:23:53.292670 1976 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 9 18:23:53.292936 kubelet[1976]: I0209 18:23:53.292923 1976 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 18:23:53.300418 kubelet[1976]: I0209 18:23:53.300392 1976 topology_manager.go:215] "Topology Admit Handler" podUID="ac8f1bb9111e7248131fe5b354e3f799" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 9 18:23:53.300693 kubelet[1976]: I0209 18:23:53.300672 1976 topology_manager.go:215] "Topology Admit Handler" podUID="212dcc5e2f08bec92c239ac5786b7e2b" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 9 18:23:53.300816 kubelet[1976]: I0209 18:23:53.300799 1976 topology_manager.go:215] "Topology Admit Handler" podUID="d0325d16aab19669b5fea4b6623890e6" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 9 18:23:53.483774 kubelet[1976]: I0209 18:23:53.483749 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac8f1bb9111e7248131fe5b354e3f799-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ac8f1bb9111e7248131fe5b354e3f799\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:23:53.483984 kubelet[1976]: I0209 18:23:53.483969 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac8f1bb9111e7248131fe5b354e3f799-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ac8f1bb9111e7248131fe5b354e3f799\") " pod="kube-system/kube-apiserver-localhost" Feb 9 
18:23:53.484095 kubelet[1976]: I0209 18:23:53.484082 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:23:53.484176 kubelet[1976]: I0209 18:23:53.484166 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:23:53.484248 kubelet[1976]: I0209 18:23:53.484237 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0325d16aab19669b5fea4b6623890e6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d0325d16aab19669b5fea4b6623890e6\") " pod="kube-system/kube-scheduler-localhost" Feb 9 18:23:53.484322 kubelet[1976]: I0209 18:23:53.484312 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac8f1bb9111e7248131fe5b354e3f799-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ac8f1bb9111e7248131fe5b354e3f799\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:23:53.484399 kubelet[1976]: I0209 18:23:53.484387 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 
18:23:53.484479 kubelet[1976]: I0209 18:23:53.484469 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:23:53.484558 kubelet[1976]: I0209 18:23:53.484548 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:23:53.606600 kubelet[1976]: E0209 18:23:53.606564 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:23:53.608405 kubelet[1976]: E0209 18:23:53.608385 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:23:53.609778 kubelet[1976]: E0209 18:23:53.609749 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:23:53.676475 sudo[2006]: pam_unix(sudo:session): session closed for user root Feb 9 18:23:54.168714 kubelet[1976]: I0209 18:23:54.168675 1976 apiserver.go:52] "Watching apiserver" Feb 9 18:23:54.183096 kubelet[1976]: I0209 18:23:54.183072 1976 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:23:54.217389 kubelet[1976]: E0209 18:23:54.217360 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:23:54.218234 kubelet[1976]: E0209 18:23:54.218208 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:23:54.224967 kubelet[1976]: E0209 18:23:54.224949 1976 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 18:23:54.225525 kubelet[1976]: E0209 18:23:54.225511 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:23:54.238787 kubelet[1976]: I0209 18:23:54.238759 1976 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.238697642 podCreationTimestamp="2024-02-09 18:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:23:54.238165115 +0000 UTC m=+1.140105432" watchObservedRunningTime="2024-02-09 18:23:54.238697642 +0000 UTC m=+1.140637959" Feb 9 18:23:54.244773 kubelet[1976]: I0209 18:23:54.244751 1976 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.244722128 podCreationTimestamp="2024-02-09 18:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:23:54.244310641 +0000 UTC m=+1.146250958" watchObservedRunningTime="2024-02-09 18:23:54.244722128 +0000 UTC m=+1.146662445" Feb 9 18:23:54.262510 kubelet[1976]: I0209 18:23:54.262475 1976 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.262446441 podCreationTimestamp="2024-02-09 18:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:23:54.251275458 +0000 UTC m=+1.153215775" watchObservedRunningTime="2024-02-09 18:23:54.262446441 +0000 UTC m=+1.164386758"
Feb 9 18:23:55.155745 sudo[1237]: pam_unix(sudo:session): session closed for user root
Feb 9 18:23:55.156984 sshd[1234]: pam_unix(sshd:session): session closed for user core
Feb 9 18:23:55.159321 systemd[1]: sshd@4-10.0.0.26:22-10.0.0.1:48702.service: Deactivated successfully.
Feb 9 18:23:55.160134 systemd[1]: session-5.scope: Deactivated successfully.
Feb 9 18:23:55.160306 systemd[1]: session-5.scope: Consumed 8.379s CPU time.
Feb 9 18:23:55.160700 systemd-logind[1131]: Session 5 logged out. Waiting for processes to exit.
Feb 9 18:23:55.161493 systemd-logind[1131]: Removed session 5.
Feb 9 18:23:55.218072 kubelet[1976]: E0209 18:23:55.218047 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:23:57.606816 kubelet[1976]: E0209 18:23:57.606784 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:00.031606 kubelet[1976]: E0209 18:24:00.031575 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:00.225222 kubelet[1976]: E0209 18:24:00.225193 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:01.238310 kubelet[1976]: E0209 18:24:01.235660 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:02.228370 kubelet[1976]: E0209 18:24:02.228342 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:05.014995 update_engine[1133]: I0209 18:24:05.014921 1133 update_attempter.cc:509] Updating boot flags...
Feb 9 18:24:06.868819 kubelet[1976]: I0209 18:24:06.868776 1976 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 9 18:24:06.869163 env[1143]: time="2024-02-09T18:24:06.869123303Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 9 18:24:06.869484 kubelet[1976]: I0209 18:24:06.869454 1976 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 9 18:24:07.600552 kubelet[1976]: I0209 18:24:07.600504 1976 topology_manager.go:215] "Topology Admit Handler" podUID="499f4d9e-7562-49b5-88b6-2e4389ed7e3a" podNamespace="kube-system" podName="kube-proxy-f6vzt"
Feb 9 18:24:07.606303 systemd[1]: Created slice kubepods-besteffort-pod499f4d9e_7562_49b5_88b6_2e4389ed7e3a.slice.
Feb 9 18:24:07.606629 kubelet[1976]: I0209 18:24:07.606281 1976 topology_manager.go:215] "Topology Admit Handler" podUID="0c439772-55db-4970-811b-a36c747777e4" podNamespace="kube-system" podName="cilium-c8qpc"
Feb 9 18:24:07.620066 kubelet[1976]: E0209 18:24:07.620038 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:07.623708 systemd[1]: Created slice kubepods-burstable-pod0c439772_55db_4970_811b_a36c747777e4.slice.
Feb 9 18:24:07.684346 kubelet[1976]: I0209 18:24:07.684301 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-cni-path\") pod \"cilium-c8qpc\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " pod="kube-system/cilium-c8qpc"
Feb 9 18:24:07.684346 kubelet[1976]: I0209 18:24:07.684348 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p454r\" (UniqueName: \"kubernetes.io/projected/499f4d9e-7562-49b5-88b6-2e4389ed7e3a-kube-api-access-p454r\") pod \"kube-proxy-f6vzt\" (UID: \"499f4d9e-7562-49b5-88b6-2e4389ed7e3a\") " pod="kube-system/kube-proxy-f6vzt"
Feb 9 18:24:07.684346 kubelet[1976]: I0209 18:24:07.684384 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-cilium-cgroup\") pod \"cilium-c8qpc\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " pod="kube-system/cilium-c8qpc"
Feb 9 18:24:07.684597 kubelet[1976]: I0209 18:24:07.684411 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-etc-cni-netd\") pod \"cilium-c8qpc\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " pod="kube-system/cilium-c8qpc"
Feb 9 18:24:07.684597 kubelet[1976]: I0209 18:24:07.684444 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c439772-55db-4970-811b-a36c747777e4-hubble-tls\") pod \"cilium-c8qpc\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " pod="kube-system/cilium-c8qpc"
Feb 9 18:24:07.684597 kubelet[1976]: I0209 18:24:07.684466 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-lib-modules\") pod \"cilium-c8qpc\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " pod="kube-system/cilium-c8qpc"
Feb 9 18:24:07.684597 kubelet[1976]: I0209 18:24:07.684485 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qhqv\" (UniqueName: \"kubernetes.io/projected/0c439772-55db-4970-811b-a36c747777e4-kube-api-access-2qhqv\") pod \"cilium-c8qpc\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " pod="kube-system/cilium-c8qpc"
Feb 9 18:24:07.685152 kubelet[1976]: I0209 18:24:07.685127 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/499f4d9e-7562-49b5-88b6-2e4389ed7e3a-xtables-lock\") pod \"kube-proxy-f6vzt\" (UID: \"499f4d9e-7562-49b5-88b6-2e4389ed7e3a\") " pod="kube-system/kube-proxy-f6vzt"
Feb 9 18:24:07.685235 kubelet[1976]: I0209 18:24:07.685177 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-bpf-maps\") pod \"cilium-c8qpc\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " pod="kube-system/cilium-c8qpc"
Feb 9 18:24:07.685235 kubelet[1976]: I0209 18:24:07.685210 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-host-proc-sys-kernel\") pod \"cilium-c8qpc\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " pod="kube-system/cilium-c8qpc"
Feb 9 18:24:07.685235 kubelet[1976]: I0209 18:24:07.685233 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/499f4d9e-7562-49b5-88b6-2e4389ed7e3a-kube-proxy\") pod \"kube-proxy-f6vzt\" (UID: \"499f4d9e-7562-49b5-88b6-2e4389ed7e3a\") " pod="kube-system/kube-proxy-f6vzt"
Feb 9 18:24:07.685319 kubelet[1976]: I0209 18:24:07.685257 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-cilium-run\") pod \"cilium-c8qpc\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " pod="kube-system/cilium-c8qpc"
Feb 9 18:24:07.685319 kubelet[1976]: I0209 18:24:07.685282 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-xtables-lock\") pod \"cilium-c8qpc\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " pod="kube-system/cilium-c8qpc"
Feb 9 18:24:07.685319 kubelet[1976]: I0209 18:24:07.685302 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c439772-55db-4970-811b-a36c747777e4-clustermesh-secrets\") pod \"cilium-c8qpc\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " pod="kube-system/cilium-c8qpc"
Feb 9 18:24:07.685386 kubelet[1976]: I0209 18:24:07.685323 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c439772-55db-4970-811b-a36c747777e4-cilium-config-path\") pod \"cilium-c8qpc\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " pod="kube-system/cilium-c8qpc"
Feb 9 18:24:07.685386 kubelet[1976]: I0209 18:24:07.685345 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-host-proc-sys-net\") pod \"cilium-c8qpc\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " pod="kube-system/cilium-c8qpc"
Feb 9 18:24:07.685432 kubelet[1976]: I0209 18:24:07.685402 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/499f4d9e-7562-49b5-88b6-2e4389ed7e3a-lib-modules\") pod \"kube-proxy-f6vzt\" (UID: \"499f4d9e-7562-49b5-88b6-2e4389ed7e3a\") " pod="kube-system/kube-proxy-f6vzt"
Feb 9 18:24:07.685455 kubelet[1976]: I0209 18:24:07.685441 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-hostproc\") pod \"cilium-c8qpc\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " pod="kube-system/cilium-c8qpc"
Feb 9 18:24:07.746695 kubelet[1976]: I0209 18:24:07.746652 1976 topology_manager.go:215] "Topology Admit Handler" podUID="e671890d-424b-45f7-b3f7-99e54cbfc07e" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-kjffb"
Feb 9 18:24:07.752129 systemd[1]: Created slice kubepods-besteffort-pode671890d_424b_45f7_b3f7_99e54cbfc07e.slice.
Feb 9 18:24:07.786435 kubelet[1976]: I0209 18:24:07.786164 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e671890d-424b-45f7-b3f7-99e54cbfc07e-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-kjffb\" (UID: \"e671890d-424b-45f7-b3f7-99e54cbfc07e\") " pod="kube-system/cilium-operator-6bc8ccdb58-kjffb"
Feb 9 18:24:07.787561 kubelet[1976]: I0209 18:24:07.787275 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zckrb\" (UniqueName: \"kubernetes.io/projected/e671890d-424b-45f7-b3f7-99e54cbfc07e-kube-api-access-zckrb\") pod \"cilium-operator-6bc8ccdb58-kjffb\" (UID: \"e671890d-424b-45f7-b3f7-99e54cbfc07e\") " pod="kube-system/cilium-operator-6bc8ccdb58-kjffb"
Feb 9 18:24:07.921833 kubelet[1976]: E0209 18:24:07.921717 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:07.922617 env[1143]: time="2024-02-09T18:24:07.922519530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f6vzt,Uid:499f4d9e-7562-49b5-88b6-2e4389ed7e3a,Namespace:kube-system,Attempt:0,}"
Feb 9 18:24:07.926212 kubelet[1976]: E0209 18:24:07.926182 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:07.926878 env[1143]: time="2024-02-09T18:24:07.926845565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c8qpc,Uid:0c439772-55db-4970-811b-a36c747777e4,Namespace:kube-system,Attempt:0,}"
Feb 9 18:24:07.941894 env[1143]: time="2024-02-09T18:24:07.941794293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:24:07.942004 env[1143]: time="2024-02-09T18:24:07.941912512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:24:07.942004 env[1143]: time="2024-02-09T18:24:07.941940085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:24:07.942221 env[1143]: time="2024-02-09T18:24:07.942157234Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/116d33a12ab327b0f629eb484b4ca3fb1517249eedaffb7df70f036a2a027d02 pid=2086 runtime=io.containerd.runc.v2
Feb 9 18:24:07.945032 env[1143]: time="2024-02-09T18:24:07.944964592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:24:07.945032 env[1143]: time="2024-02-09T18:24:07.945006973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:24:07.945032 env[1143]: time="2024-02-09T18:24:07.945019500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:24:07.945275 env[1143]: time="2024-02-09T18:24:07.945241650Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4df2d07b7cdd8bd55aec9a8a21700e19e8b8ce359a17b5eada74611fce3e1d70 pid=2104 runtime=io.containerd.runc.v2
Feb 9 18:24:07.954705 systemd[1]: Started cri-containerd-116d33a12ab327b0f629eb484b4ca3fb1517249eedaffb7df70f036a2a027d02.scope.
Feb 9 18:24:07.956871 systemd[1]: Started cri-containerd-4df2d07b7cdd8bd55aec9a8a21700e19e8b8ce359a17b5eada74611fce3e1d70.scope.
Feb 9 18:24:07.996493 env[1143]: time="2024-02-09T18:24:07.996447522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f6vzt,Uid:499f4d9e-7562-49b5-88b6-2e4389ed7e3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"116d33a12ab327b0f629eb484b4ca3fb1517249eedaffb7df70f036a2a027d02\""
Feb 9 18:24:07.997121 kubelet[1976]: E0209 18:24:07.997097 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:08.000095 env[1143]: time="2024-02-09T18:24:08.000054559Z" level=info msg="CreateContainer within sandbox \"116d33a12ab327b0f629eb484b4ca3fb1517249eedaffb7df70f036a2a027d02\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 9 18:24:08.007127 env[1143]: time="2024-02-09T18:24:08.007090919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c8qpc,Uid:0c439772-55db-4970-811b-a36c747777e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4df2d07b7cdd8bd55aec9a8a21700e19e8b8ce359a17b5eada74611fce3e1d70\""
Feb 9 18:24:08.007866 kubelet[1976]: E0209 18:24:08.007750 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:08.010653 env[1143]: time="2024-02-09T18:24:08.010619914Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 9 18:24:08.017461 env[1143]: time="2024-02-09T18:24:08.017421184Z" level=info msg="CreateContainer within sandbox \"116d33a12ab327b0f629eb484b4ca3fb1517249eedaffb7df70f036a2a027d02\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fd3ccd7b916a7317bafd3eeaa182546570b3781ddfc1db5f9b52b63f74da2a53\""
Feb 9 18:24:08.019418 env[1143]: time="2024-02-09T18:24:08.019380634Z" level=info msg="StartContainer for \"fd3ccd7b916a7317bafd3eeaa182546570b3781ddfc1db5f9b52b63f74da2a53\""
Feb 9 18:24:08.034271 systemd[1]: Started cri-containerd-fd3ccd7b916a7317bafd3eeaa182546570b3781ddfc1db5f9b52b63f74da2a53.scope.
Feb 9 18:24:08.054219 kubelet[1976]: E0209 18:24:08.054183 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:08.055291 env[1143]: time="2024-02-09T18:24:08.054854957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-kjffb,Uid:e671890d-424b-45f7-b3f7-99e54cbfc07e,Namespace:kube-system,Attempt:0,}"
Feb 9 18:24:08.069664 env[1143]: time="2024-02-09T18:24:08.069591795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:24:08.069664 env[1143]: time="2024-02-09T18:24:08.069629573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:24:08.069875 env[1143]: time="2024-02-09T18:24:08.069640418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:24:08.069875 env[1143]: time="2024-02-09T18:24:08.069759234Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/06a0db7f74b59ceead01857d526d98677b69f8cf806d4377316966bd39a6e3d4 pid=2188 runtime=io.containerd.runc.v2
Feb 9 18:24:08.079726 systemd[1]: Started cri-containerd-06a0db7f74b59ceead01857d526d98677b69f8cf806d4377316966bd39a6e3d4.scope.
Feb 9 18:24:08.091854 env[1143]: time="2024-02-09T18:24:08.088057082Z" level=info msg="StartContainer for \"fd3ccd7b916a7317bafd3eeaa182546570b3781ddfc1db5f9b52b63f74da2a53\" returns successfully"
Feb 9 18:24:08.126244 env[1143]: time="2024-02-09T18:24:08.124700361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-kjffb,Uid:e671890d-424b-45f7-b3f7-99e54cbfc07e,Namespace:kube-system,Attempt:0,} returns sandbox id \"06a0db7f74b59ceead01857d526d98677b69f8cf806d4377316966bd39a6e3d4\""
Feb 9 18:24:08.126381 kubelet[1976]: E0209 18:24:08.125420 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:08.238923 kubelet[1976]: E0209 18:24:08.238622 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:11.798624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount553476085.mount: Deactivated successfully.
Feb 9 18:24:14.101640 env[1143]: time="2024-02-09T18:24:14.101578189Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:24:14.102877 env[1143]: time="2024-02-09T18:24:14.102828201Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:24:14.104938 env[1143]: time="2024-02-09T18:24:14.104902752Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:24:14.105408 env[1143]: time="2024-02-09T18:24:14.105377564Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 9 18:24:14.106706 env[1143]: time="2024-02-09T18:24:14.106675674Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 9 18:24:14.107875 env[1143]: time="2024-02-09T18:24:14.107822689Z" level=info msg="CreateContainer within sandbox \"4df2d07b7cdd8bd55aec9a8a21700e19e8b8ce359a17b5eada74611fce3e1d70\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 18:24:14.119126 env[1143]: time="2024-02-09T18:24:14.119088648Z" level=info msg="CreateContainer within sandbox \"4df2d07b7cdd8bd55aec9a8a21700e19e8b8ce359a17b5eada74611fce3e1d70\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9bd8adb3cc435e00fbe0fda6d715a5dd5e93ed6e39e9009499a9eb99a2a8f403\""
Feb 9 18:24:14.120862 env[1143]: time="2024-02-09T18:24:14.119642168Z" level=info msg="StartContainer for \"9bd8adb3cc435e00fbe0fda6d715a5dd5e93ed6e39e9009499a9eb99a2a8f403\""
Feb 9 18:24:14.140621 systemd[1]: Started cri-containerd-9bd8adb3cc435e00fbe0fda6d715a5dd5e93ed6e39e9009499a9eb99a2a8f403.scope.
Feb 9 18:24:14.246720 env[1143]: time="2024-02-09T18:24:14.246656549Z" level=info msg="StartContainer for \"9bd8adb3cc435e00fbe0fda6d715a5dd5e93ed6e39e9009499a9eb99a2a8f403\" returns successfully"
Feb 9 18:24:14.253728 kubelet[1976]: E0209 18:24:14.253706 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:14.268535 systemd[1]: cri-containerd-9bd8adb3cc435e00fbe0fda6d715a5dd5e93ed6e39e9009499a9eb99a2a8f403.scope: Deactivated successfully.
Feb 9 18:24:14.270978 kubelet[1976]: I0209 18:24:14.270944 1976 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-f6vzt" podStartSLOduration=7.270910289 podCreationTimestamp="2024-02-09 18:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:24:08.246570825 +0000 UTC m=+15.148511142" watchObservedRunningTime="2024-02-09 18:24:14.270910289 +0000 UTC m=+21.172850606"
Feb 9 18:24:14.320676 env[1143]: time="2024-02-09T18:24:14.320619805Z" level=info msg="shim disconnected" id=9bd8adb3cc435e00fbe0fda6d715a5dd5e93ed6e39e9009499a9eb99a2a8f403
Feb 9 18:24:14.320676 env[1143]: time="2024-02-09T18:24:14.320667262Z" level=warning msg="cleaning up after shim disconnected" id=9bd8adb3cc435e00fbe0fda6d715a5dd5e93ed6e39e9009499a9eb99a2a8f403 namespace=k8s.io
Feb 9 18:24:14.320676 env[1143]: time="2024-02-09T18:24:14.320677906Z" level=info msg="cleaning up dead shim"
Feb 9 18:24:14.329746 env[1143]: time="2024-02-09T18:24:14.329692809Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:24:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2406 runtime=io.containerd.runc.v2\n"
Feb 9 18:24:15.116860 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bd8adb3cc435e00fbe0fda6d715a5dd5e93ed6e39e9009499a9eb99a2a8f403-rootfs.mount: Deactivated successfully.
Feb 9 18:24:15.257741 kubelet[1976]: E0209 18:24:15.257672 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:15.266877 env[1143]: time="2024-02-09T18:24:15.261285255Z" level=info msg="CreateContainer within sandbox \"4df2d07b7cdd8bd55aec9a8a21700e19e8b8ce359a17b5eada74611fce3e1d70\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 18:24:15.286653 env[1143]: time="2024-02-09T18:24:15.286592920Z" level=info msg="CreateContainer within sandbox \"4df2d07b7cdd8bd55aec9a8a21700e19e8b8ce359a17b5eada74611fce3e1d70\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"664f20d37c9caf106392e3ba51808b0ecbc2ff3f2024eaca4c08e969db8ed1d8\""
Feb 9 18:24:15.287934 env[1143]: time="2024-02-09T18:24:15.287426569Z" level=info msg="StartContainer for \"664f20d37c9caf106392e3ba51808b0ecbc2ff3f2024eaca4c08e969db8ed1d8\""
Feb 9 18:24:15.303673 systemd[1]: Started cri-containerd-664f20d37c9caf106392e3ba51808b0ecbc2ff3f2024eaca4c08e969db8ed1d8.scope.
Feb 9 18:24:15.358097 env[1143]: time="2024-02-09T18:24:15.358051963Z" level=info msg="StartContainer for \"664f20d37c9caf106392e3ba51808b0ecbc2ff3f2024eaca4c08e969db8ed1d8\" returns successfully"
Feb 9 18:24:15.366408 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 18:24:15.366599 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 18:24:15.366757 systemd[1]: Stopping systemd-sysctl.service...
Feb 9 18:24:15.368810 systemd[1]: Starting systemd-sysctl.service...
Feb 9 18:24:15.370442 systemd[1]: cri-containerd-664f20d37c9caf106392e3ba51808b0ecbc2ff3f2024eaca4c08e969db8ed1d8.scope: Deactivated successfully.
Feb 9 18:24:15.379022 systemd[1]: Finished systemd-sysctl.service.
Feb 9 18:24:15.399162 env[1143]: time="2024-02-09T18:24:15.399112416Z" level=info msg="shim disconnected" id=664f20d37c9caf106392e3ba51808b0ecbc2ff3f2024eaca4c08e969db8ed1d8
Feb 9 18:24:15.399162 env[1143]: time="2024-02-09T18:24:15.399160112Z" level=warning msg="cleaning up after shim disconnected" id=664f20d37c9caf106392e3ba51808b0ecbc2ff3f2024eaca4c08e969db8ed1d8 namespace=k8s.io
Feb 9 18:24:15.399389 env[1143]: time="2024-02-09T18:24:15.399170356Z" level=info msg="cleaning up dead shim"
Feb 9 18:24:15.406191 env[1143]: time="2024-02-09T18:24:15.406146097Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:24:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2470 runtime=io.containerd.runc.v2\n"
Feb 9 18:24:15.802746 env[1143]: time="2024-02-09T18:24:15.802668772Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:24:15.804747 env[1143]: time="2024-02-09T18:24:15.804700357Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:24:15.807423 env[1143]: time="2024-02-09T18:24:15.807386169Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:24:15.808170 env[1143]: time="2024-02-09T18:24:15.807791390Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Feb 9 18:24:15.811315 env[1143]: time="2024-02-09T18:24:15.811255592Z" level=info msg="CreateContainer within sandbox \"06a0db7f74b59ceead01857d526d98677b69f8cf806d4377316966bd39a6e3d4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 9 18:24:15.823754 env[1143]: time="2024-02-09T18:24:15.823692789Z" level=info msg="CreateContainer within sandbox \"06a0db7f74b59ceead01857d526d98677b69f8cf806d4377316966bd39a6e3d4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0\""
Feb 9 18:24:15.824234 env[1143]: time="2024-02-09T18:24:15.824195764Z" level=info msg="StartContainer for \"0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0\""
Feb 9 18:24:15.840208 systemd[1]: Started cri-containerd-0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0.scope.
Feb 9 18:24:15.911497 env[1143]: time="2024-02-09T18:24:15.911452531Z" level=info msg="StartContainer for \"0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0\" returns successfully"
Feb 9 18:24:16.117709 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-664f20d37c9caf106392e3ba51808b0ecbc2ff3f2024eaca4c08e969db8ed1d8-rootfs.mount: Deactivated successfully.
Feb 9 18:24:16.260089 kubelet[1976]: E0209 18:24:16.260057 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:16.261758 kubelet[1976]: E0209 18:24:16.261733 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:16.264018 env[1143]: time="2024-02-09T18:24:16.263972174Z" level=info msg="CreateContainer within sandbox \"4df2d07b7cdd8bd55aec9a8a21700e19e8b8ce359a17b5eada74611fce3e1d70\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 18:24:16.278192 kubelet[1976]: I0209 18:24:16.278139 1976 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-kjffb" podStartSLOduration=1.596709183 podCreationTimestamp="2024-02-09 18:24:07 +0000 UTC" firstStartedPulling="2024-02-09 18:24:08.127383154 +0000 UTC m=+15.029323431" lastFinishedPulling="2024-02-09 18:24:15.808775892 +0000 UTC m=+22.710716209" observedRunningTime="2024-02-09 18:24:16.277015319 +0000 UTC m=+23.178955636" watchObservedRunningTime="2024-02-09 18:24:16.278101961 +0000 UTC m=+23.180042278"
Feb 9 18:24:16.356621 env[1143]: time="2024-02-09T18:24:16.356575263Z" level=info msg="CreateContainer within sandbox \"4df2d07b7cdd8bd55aec9a8a21700e19e8b8ce359a17b5eada74611fce3e1d70\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fe2c488f632ca45f9c1f7e2937748f3bf2c417025974ac1598a45e4a4c8e065d\""
Feb 9 18:24:16.357661 env[1143]: time="2024-02-09T18:24:16.357626053Z" level=info msg="StartContainer for \"fe2c488f632ca45f9c1f7e2937748f3bf2c417025974ac1598a45e4a4c8e065d\""
Feb 9 18:24:16.384577 systemd[1]: Started cri-containerd-fe2c488f632ca45f9c1f7e2937748f3bf2c417025974ac1598a45e4a4c8e065d.scope.
Feb 9 18:24:16.454068 env[1143]: time="2024-02-09T18:24:16.454009001Z" level=info msg="StartContainer for \"fe2c488f632ca45f9c1f7e2937748f3bf2c417025974ac1598a45e4a4c8e065d\" returns successfully"
Feb 9 18:24:16.478900 systemd[1]: cri-containerd-fe2c488f632ca45f9c1f7e2937748f3bf2c417025974ac1598a45e4a4c8e065d.scope: Deactivated successfully.
Feb 9 18:24:16.511688 env[1143]: time="2024-02-09T18:24:16.511644241Z" level=info msg="shim disconnected" id=fe2c488f632ca45f9c1f7e2937748f3bf2c417025974ac1598a45e4a4c8e065d
Feb 9 18:24:16.512072 env[1143]: time="2024-02-09T18:24:16.512047455Z" level=warning msg="cleaning up after shim disconnected" id=fe2c488f632ca45f9c1f7e2937748f3bf2c417025974ac1598a45e4a4c8e065d namespace=k8s.io
Feb 9 18:24:16.512149 env[1143]: time="2024-02-09T18:24:16.512134364Z" level=info msg="cleaning up dead shim"
Feb 9 18:24:16.530632 env[1143]: time="2024-02-09T18:24:16.530580149Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:24:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2562 runtime=io.containerd.runc.v2\n"
Feb 9 18:24:17.127520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe2c488f632ca45f9c1f7e2937748f3bf2c417025974ac1598a45e4a4c8e065d-rootfs.mount: Deactivated successfully.
Feb 9 18:24:17.265206 kubelet[1976]: E0209 18:24:17.265184 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:17.265620 kubelet[1976]: E0209 18:24:17.265218 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:17.267448 env[1143]: time="2024-02-09T18:24:17.267410566Z" level=info msg="CreateContainer within sandbox \"4df2d07b7cdd8bd55aec9a8a21700e19e8b8ce359a17b5eada74611fce3e1d70\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 18:24:17.283679 env[1143]: time="2024-02-09T18:24:17.283630077Z" level=info msg="CreateContainer within sandbox \"4df2d07b7cdd8bd55aec9a8a21700e19e8b8ce359a17b5eada74611fce3e1d70\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"848ab7a5d59caa8353f89949e0009fb887f0d0fcac9ac237855bc84dbf4a3e8c\""
Feb 9 18:24:17.284282 env[1143]: time="2024-02-09T18:24:17.284238592Z" level=info msg="StartContainer for \"848ab7a5d59caa8353f89949e0009fb887f0d0fcac9ac237855bc84dbf4a3e8c\""
Feb 9 18:24:17.298474 systemd[1]: Started cri-containerd-848ab7a5d59caa8353f89949e0009fb887f0d0fcac9ac237855bc84dbf4a3e8c.scope.
Feb 9 18:24:17.351546 systemd[1]: cri-containerd-848ab7a5d59caa8353f89949e0009fb887f0d0fcac9ac237855bc84dbf4a3e8c.scope: Deactivated successfully.
Feb 9 18:24:17.354389 env[1143]: time="2024-02-09T18:24:17.354353830Z" level=info msg="StartContainer for \"848ab7a5d59caa8353f89949e0009fb887f0d0fcac9ac237855bc84dbf4a3e8c\" returns successfully"
Feb 9 18:24:17.356582 env[1143]: time="2024-02-09T18:24:17.356522284Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c439772_55db_4970_811b_a36c747777e4.slice/cri-containerd-848ab7a5d59caa8353f89949e0009fb887f0d0fcac9ac237855bc84dbf4a3e8c.scope/memory.events\": no such file or directory"
Feb 9 18:24:17.374264 env[1143]: time="2024-02-09T18:24:17.374222709Z" level=info msg="shim disconnected" id=848ab7a5d59caa8353f89949e0009fb887f0d0fcac9ac237855bc84dbf4a3e8c
Feb 9 18:24:17.374614 env[1143]: time="2024-02-09T18:24:17.374590346Z" level=warning msg="cleaning up after shim disconnected" id=848ab7a5d59caa8353f89949e0009fb887f0d0fcac9ac237855bc84dbf4a3e8c namespace=k8s.io
Feb 9 18:24:17.374682 env[1143]: time="2024-02-09T18:24:17.374667931Z" level=info msg="cleaning up dead shim"
Feb 9 18:24:17.381634 env[1143]: time="2024-02-09T18:24:17.381500038Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:24:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2616 runtime=io.containerd.runc.v2\n"
Feb 9 18:24:18.124157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-848ab7a5d59caa8353f89949e0009fb887f0d0fcac9ac237855bc84dbf4a3e8c-rootfs.mount: Deactivated successfully.
Feb 9 18:24:18.268605 kubelet[1976]: E0209 18:24:18.268561 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:18.270982 env[1143]: time="2024-02-09T18:24:18.270941961Z" level=info msg="CreateContainer within sandbox \"4df2d07b7cdd8bd55aec9a8a21700e19e8b8ce359a17b5eada74611fce3e1d70\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 18:24:18.286215 env[1143]: time="2024-02-09T18:24:18.286165526Z" level=info msg="CreateContainer within sandbox \"4df2d07b7cdd8bd55aec9a8a21700e19e8b8ce359a17b5eada74611fce3e1d70\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2\""
Feb 9 18:24:18.286646 env[1143]: time="2024-02-09T18:24:18.286619425Z" level=info msg="StartContainer for \"97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2\""
Feb 9 18:24:18.302590 systemd[1]: Started cri-containerd-97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2.scope.
Feb 9 18:24:18.345699 env[1143]: time="2024-02-09T18:24:18.345657473Z" level=info msg="StartContainer for \"97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2\" returns successfully"
Feb 9 18:24:18.469011 kubelet[1976]: I0209 18:24:18.468917 1976 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 9 18:24:18.486111 kubelet[1976]: I0209 18:24:18.486067 1976 topology_manager.go:215] "Topology Admit Handler" podUID="4016fd87-bbde-4b87-b5ce-74cf26580b86" podNamespace="kube-system" podName="coredns-5dd5756b68-rbx95"
Feb 9 18:24:18.486952 kubelet[1976]: I0209 18:24:18.486918 1976 topology_manager.go:215] "Topology Admit Handler" podUID="5a60f2a2-a2dc-456c-b3f9-5fdfe8b7b74d" podNamespace="kube-system" podName="coredns-5dd5756b68-l9nc4"
Feb 9 18:24:18.492120 systemd[1]: Created slice kubepods-burstable-pod5a60f2a2_a2dc_456c_b3f9_5fdfe8b7b74d.slice.
Feb 9 18:24:18.496560 systemd[1]: Created slice kubepods-burstable-pod4016fd87_bbde_4b87_b5ce_74cf26580b86.slice.
Feb 9 18:24:18.597879 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Feb 9 18:24:18.665343 kubelet[1976]: I0209 18:24:18.665300 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4016fd87-bbde-4b87-b5ce-74cf26580b86-config-volume\") pod \"coredns-5dd5756b68-rbx95\" (UID: \"4016fd87-bbde-4b87-b5ce-74cf26580b86\") " pod="kube-system/coredns-5dd5756b68-rbx95"
Feb 9 18:24:18.665479 kubelet[1976]: I0209 18:24:18.665369 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q67cx\" (UniqueName: \"kubernetes.io/projected/4016fd87-bbde-4b87-b5ce-74cf26580b86-kube-api-access-q67cx\") pod \"coredns-5dd5756b68-rbx95\" (UID: \"4016fd87-bbde-4b87-b5ce-74cf26580b86\") " pod="kube-system/coredns-5dd5756b68-rbx95"
Feb 9 18:24:18.665479 kubelet[1976]: I0209 18:24:18.665461 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a60f2a2-a2dc-456c-b3f9-5fdfe8b7b74d-config-volume\") pod \"coredns-5dd5756b68-l9nc4\" (UID: \"5a60f2a2-a2dc-456c-b3f9-5fdfe8b7b74d\") " pod="kube-system/coredns-5dd5756b68-l9nc4"
Feb 9 18:24:18.665541 kubelet[1976]: I0209 18:24:18.665500 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q48n5\" (UniqueName: \"kubernetes.io/projected/5a60f2a2-a2dc-456c-b3f9-5fdfe8b7b74d-kube-api-access-q48n5\") pod \"coredns-5dd5756b68-l9nc4\" (UID: \"5a60f2a2-a2dc-456c-b3f9-5fdfe8b7b74d\") " pod="kube-system/coredns-5dd5756b68-l9nc4"
Feb 9 18:24:18.794933 kubelet[1976]: E0209 18:24:18.794896 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:18.795584 env[1143]: time="2024-02-09T18:24:18.795547961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-l9nc4,Uid:5a60f2a2-a2dc-456c-b3f9-5fdfe8b7b74d,Namespace:kube-system,Attempt:0,}"
Feb 9 18:24:18.799063 kubelet[1976]: E0209 18:24:18.799031 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:18.799590 env[1143]: time="2024-02-09T18:24:18.799552153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rbx95,Uid:4016fd87-bbde-4b87-b5ce-74cf26580b86,Namespace:kube-system,Attempt:0,}"
Feb 9 18:24:18.811905 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Feb 9 18:24:19.274741 kubelet[1976]: E0209 18:24:19.274700 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:19.289498 kubelet[1976]: I0209 18:24:19.289444 1976 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-c8qpc" podStartSLOduration=6.193195699 podCreationTimestamp="2024-02-09 18:24:07 +0000 UTC" firstStartedPulling="2024-02-09 18:24:08.00972669 +0000 UTC m=+14.911667007" lastFinishedPulling="2024-02-09 18:24:14.105939968 +0000 UTC m=+21.007880285" observedRunningTime="2024-02-09 18:24:19.288336859 +0000 UTC m=+26.190277176" watchObservedRunningTime="2024-02-09 18:24:19.289408977 +0000 UTC m=+26.191349294"
Feb 9 18:24:20.275990 kubelet[1976]: E0209 18:24:20.275961 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:20.440774 systemd-networkd[1039]: cilium_host: Link UP
Feb 9 18:24:20.441541 systemd-networkd[1039]: cilium_net: Link UP
Feb 9 18:24:20.442176 systemd-networkd[1039]: cilium_net: Gained carrier
Feb 9 18:24:20.442693 systemd-networkd[1039]: cilium_host: Gained carrier
Feb 9 18:24:20.442921 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 9 18:24:20.442957 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 9 18:24:20.495944 systemd-networkd[1039]: cilium_host: Gained IPv6LL
Feb 9 18:24:20.549259 systemd-networkd[1039]: cilium_vxlan: Link UP
Feb 9 18:24:20.549265 systemd-networkd[1039]: cilium_vxlan: Gained carrier
Feb 9 18:24:20.697903 systemd[1]: Started sshd@5-10.0.0.26:22-10.0.0.1:34194.service.
Feb 9 18:24:20.743015 sshd[2890]: Accepted publickey for core from 10.0.0.1 port 34194 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:24:20.744775 sshd[2890]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:24:20.748231 systemd-logind[1131]: New session 6 of user core.
Feb 9 18:24:20.749059 systemd[1]: Started session-6.scope.
Feb 9 18:24:20.904869 kernel: NET: Registered PF_ALG protocol family
Feb 9 18:24:20.942783 sshd[2890]: pam_unix(sshd:session): session closed for user core
Feb 9 18:24:20.946057 systemd[1]: sshd@5-10.0.0.26:22-10.0.0.1:34194.service: Deactivated successfully.
Feb 9 18:24:20.946737 systemd[1]: session-6.scope: Deactivated successfully.
Feb 9 18:24:20.947774 systemd-logind[1131]: Session 6 logged out. Waiting for processes to exit.
Feb 9 18:24:20.948480 systemd-logind[1131]: Removed session 6.
Feb 9 18:24:21.168018 systemd-networkd[1039]: cilium_net: Gained IPv6LL
Feb 9 18:24:21.276936 kubelet[1976]: E0209 18:24:21.276909 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:21.459974 systemd-networkd[1039]: lxc_health: Link UP
Feb 9 18:24:21.473404 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 18:24:21.472991 systemd-networkd[1039]: lxc_health: Gained carrier
Feb 9 18:24:21.884530 systemd-networkd[1039]: lxc0e782a9935ce: Link UP
Feb 9 18:24:21.892333 kernel: eth0: renamed from tmpe49f5
Feb 9 18:24:21.897435 systemd-networkd[1039]: lxc08d12d60e00b: Link UP
Feb 9 18:24:21.898526 systemd-networkd[1039]: lxc0e782a9935ce: Gained carrier
Feb 9 18:24:21.899042 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0e782a9935ce: link becomes ready
Feb 9 18:24:21.904884 kernel: eth0: renamed from tmpeadb9
Feb 9 18:24:21.908124 systemd-networkd[1039]: lxc08d12d60e00b: Gained carrier
Feb 9 18:24:21.909471 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc08d12d60e00b: link becomes ready
Feb 9 18:24:22.278549 kubelet[1976]: E0209 18:24:22.278518 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:22.323071 systemd-networkd[1039]: cilium_vxlan: Gained IPv6LL
Feb 9 18:24:22.767992 systemd-networkd[1039]: lxc_health: Gained IPv6LL
Feb 9 18:24:23.216146 systemd-networkd[1039]: lxc0e782a9935ce: Gained IPv6LL
Feb 9 18:24:23.280167 kubelet[1976]: E0209 18:24:23.280125 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:23.472043 systemd-networkd[1039]: lxc08d12d60e00b: Gained IPv6LL
Feb 9 18:24:25.431584 env[1143]: time="2024-02-09T18:24:25.431514846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:24:25.432057 env[1143]: time="2024-02-09T18:24:25.431991281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:24:25.432057 env[1143]: time="2024-02-09T18:24:25.432007285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:24:25.432500 env[1143]: time="2024-02-09T18:24:25.432437708Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e49f592de95e698117bbce26f9e6f6de3797dc95f9eeafa32fc60f7ce73e1490 pid=3209 runtime=io.containerd.runc.v2
Feb 9 18:24:25.440915 env[1143]: time="2024-02-09T18:24:25.439534057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:24:25.440915 env[1143]: time="2024-02-09T18:24:25.439595431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:24:25.440915 env[1143]: time="2024-02-09T18:24:25.439605674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:24:25.440915 env[1143]: time="2024-02-09T18:24:25.439824647Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eadb9570220a52e96d5aab2b9f5ffe23ca50287c666663af31713a64928c5d4c pid=3227 runtime=io.containerd.runc.v2
Feb 9 18:24:25.451121 systemd[1]: run-containerd-runc-k8s.io-e49f592de95e698117bbce26f9e6f6de3797dc95f9eeafa32fc60f7ce73e1490-runc.BPfc3R.mount: Deactivated successfully.
Feb 9 18:24:25.453754 systemd[1]: Started cri-containerd-e49f592de95e698117bbce26f9e6f6de3797dc95f9eeafa32fc60f7ce73e1490.scope.
Feb 9 18:24:25.457234 systemd[1]: Started cri-containerd-eadb9570220a52e96d5aab2b9f5ffe23ca50287c666663af31713a64928c5d4c.scope.
Feb 9 18:24:25.492910 systemd-resolved[1089]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 9 18:24:25.500690 systemd-resolved[1089]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 9 18:24:25.520496 env[1143]: time="2024-02-09T18:24:25.520456298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-l9nc4,Uid:5a60f2a2-a2dc-456c-b3f9-5fdfe8b7b74d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e49f592de95e698117bbce26f9e6f6de3797dc95f9eeafa32fc60f7ce73e1490\""
Feb 9 18:24:25.521329 kubelet[1976]: E0209 18:24:25.521310 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:25.523238 env[1143]: time="2024-02-09T18:24:25.523207641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rbx95,Uid:4016fd87-bbde-4b87-b5ce-74cf26580b86,Namespace:kube-system,Attempt:0,} returns sandbox id \"eadb9570220a52e96d5aab2b9f5ffe23ca50287c666663af31713a64928c5d4c\""
Feb 9 18:24:25.523835 kubelet[1976]: E0209 18:24:25.523819 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:25.528458 env[1143]: time="2024-02-09T18:24:25.528410333Z" level=info msg="CreateContainer within sandbox \"e49f592de95e698117bbce26f9e6f6de3797dc95f9eeafa32fc60f7ce73e1490\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 18:24:25.531630 env[1143]: time="2024-02-09T18:24:25.531590179Z" level=info msg="CreateContainer within sandbox \"eadb9570220a52e96d5aab2b9f5ffe23ca50287c666663af31713a64928c5d4c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 18:24:25.542947 env[1143]: time="2024-02-09T18:24:25.542903262Z" level=info msg="CreateContainer within sandbox \"e49f592de95e698117bbce26f9e6f6de3797dc95f9eeafa32fc60f7ce73e1490\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"63366dfa22db1314f931b9ed1cef29984e2e3bbd42ddc70bb9c8e9062f05894a\""
Feb 9 18:24:25.544708 env[1143]: time="2024-02-09T18:24:25.544673408Z" level=info msg="StartContainer for \"63366dfa22db1314f931b9ed1cef29984e2e3bbd42ddc70bb9c8e9062f05894a\""
Feb 9 18:24:25.547817 env[1143]: time="2024-02-09T18:24:25.547767993Z" level=info msg="CreateContainer within sandbox \"eadb9570220a52e96d5aab2b9f5ffe23ca50287c666663af31713a64928c5d4c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8e63bba9ebeb719690609cb25266cf37e9e03f86fc41c2990ff758654137ecfe\""
Feb 9 18:24:25.548453 env[1143]: time="2024-02-09T18:24:25.548419750Z" level=info msg="StartContainer for \"8e63bba9ebeb719690609cb25266cf37e9e03f86fc41c2990ff758654137ecfe\""
Feb 9 18:24:25.566599 systemd[1]: Started cri-containerd-63366dfa22db1314f931b9ed1cef29984e2e3bbd42ddc70bb9c8e9062f05894a.scope.
Feb 9 18:24:25.573031 systemd[1]: Started cri-containerd-8e63bba9ebeb719690609cb25266cf37e9e03f86fc41c2990ff758654137ecfe.scope.
Feb 9 18:24:25.644495 env[1143]: time="2024-02-09T18:24:25.644430664Z" level=info msg="StartContainer for \"8e63bba9ebeb719690609cb25266cf37e9e03f86fc41c2990ff758654137ecfe\" returns successfully"
Feb 9 18:24:25.645001 env[1143]: time="2024-02-09T18:24:25.644947629Z" level=info msg="StartContainer for \"63366dfa22db1314f931b9ed1cef29984e2e3bbd42ddc70bb9c8e9062f05894a\" returns successfully"
Feb 9 18:24:25.948783 systemd[1]: Started sshd@6-10.0.0.26:22-10.0.0.1:40332.service.
Feb 9 18:24:25.993955 sshd[3359]: Accepted publickey for core from 10.0.0.1 port 40332 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:24:25.995249 sshd[3359]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:24:25.998624 systemd-logind[1131]: New session 7 of user core.
Feb 9 18:24:25.999513 systemd[1]: Started session-7.scope.
Feb 9 18:24:26.117757 sshd[3359]: pam_unix(sshd:session): session closed for user core
Feb 9 18:24:26.120366 systemd[1]: sshd@6-10.0.0.26:22-10.0.0.1:40332.service: Deactivated successfully.
Feb 9 18:24:26.121131 systemd[1]: session-7.scope: Deactivated successfully.
Feb 9 18:24:26.121650 systemd-logind[1131]: Session 7 logged out. Waiting for processes to exit.
Feb 9 18:24:26.122303 systemd-logind[1131]: Removed session 7.
Feb 9 18:24:26.286926 kubelet[1976]: E0209 18:24:26.286881 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:26.290401 kubelet[1976]: E0209 18:24:26.290382 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:26.300371 kubelet[1976]: I0209 18:24:26.300319 1976 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-rbx95" podStartSLOduration=19.300279 podCreationTimestamp="2024-02-09 18:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:24:26.299960646 +0000 UTC m=+33.201900963" watchObservedRunningTime="2024-02-09 18:24:26.300279 +0000 UTC m=+33.202219317"
Feb 9 18:24:26.320294 kubelet[1976]: I0209 18:24:26.320237 1976 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-l9nc4" podStartSLOduration=19.32020125 podCreationTimestamp="2024-02-09 18:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:24:26.308479034 +0000 UTC m=+33.210419351" watchObservedRunningTime="2024-02-09 18:24:26.32020125 +0000 UTC m=+33.222141527"
Feb 9 18:24:27.292026 kubelet[1976]: E0209 18:24:27.291979 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:27.292344 kubelet[1976]: E0209 18:24:27.292148 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:28.293358 kubelet[1976]: E0209 18:24:28.293327 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:28.293756 kubelet[1976]: E0209 18:24:28.293428 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:24:31.125017 systemd[1]: Started sshd@7-10.0.0.26:22-10.0.0.1:40338.service.
Feb 9 18:24:31.169008 sshd[3380]: Accepted publickey for core from 10.0.0.1 port 40338 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:24:31.170301 sshd[3380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:24:31.174126 systemd-logind[1131]: New session 8 of user core.
Feb 9 18:24:31.175980 systemd[1]: Started session-8.scope.
Feb 9 18:24:31.291779 sshd[3380]: pam_unix(sshd:session): session closed for user core
Feb 9 18:24:31.294000 systemd[1]: session-8.scope: Deactivated successfully.
Feb 9 18:24:31.294556 systemd-logind[1131]: Session 8 logged out. Waiting for processes to exit.
Feb 9 18:24:31.294675 systemd[1]: sshd@7-10.0.0.26:22-10.0.0.1:40338.service: Deactivated successfully.
Feb 9 18:24:31.295613 systemd-logind[1131]: Removed session 8.
Feb 9 18:24:36.297731 systemd[1]: Started sshd@8-10.0.0.26:22-10.0.0.1:51078.service.
Feb 9 18:24:36.361564 sshd[3395]: Accepted publickey for core from 10.0.0.1 port 51078 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:24:36.363069 sshd[3395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:24:36.366344 systemd-logind[1131]: New session 9 of user core.
Feb 9 18:24:36.367213 systemd[1]: Started session-9.scope.
Feb 9 18:24:36.474826 sshd[3395]: pam_unix(sshd:session): session closed for user core
Feb 9 18:24:36.477831 systemd[1]: sshd@8-10.0.0.26:22-10.0.0.1:51078.service: Deactivated successfully.
Feb 9 18:24:36.478471 systemd[1]: session-9.scope: Deactivated successfully.
Feb 9 18:24:36.479034 systemd-logind[1131]: Session 9 logged out. Waiting for processes to exit.
Feb 9 18:24:36.480107 systemd[1]: Started sshd@9-10.0.0.26:22-10.0.0.1:51080.service.
Feb 9 18:24:36.480803 systemd-logind[1131]: Removed session 9.
Feb 9 18:24:36.523125 sshd[3410]: Accepted publickey for core from 10.0.0.1 port 51080 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:24:36.524418 sshd[3410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:24:36.527599 systemd-logind[1131]: New session 10 of user core.
Feb 9 18:24:36.528531 systemd[1]: Started session-10.scope.
Feb 9 18:24:37.277183 systemd[1]: Started sshd@10-10.0.0.26:22-10.0.0.1:51094.service.
Feb 9 18:24:37.277665 sshd[3410]: pam_unix(sshd:session): session closed for user core
Feb 9 18:24:37.284011 systemd[1]: sshd@9-10.0.0.26:22-10.0.0.1:51080.service: Deactivated successfully.
Feb 9 18:24:37.285595 systemd[1]: session-10.scope: Deactivated successfully.
Feb 9 18:24:37.289218 systemd-logind[1131]: Session 10 logged out. Waiting for processes to exit.
Feb 9 18:24:37.292013 systemd-logind[1131]: Removed session 10.
Feb 9 18:24:37.326215 sshd[3420]: Accepted publickey for core from 10.0.0.1 port 51094 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:24:37.327611 sshd[3420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:24:37.331564 systemd-logind[1131]: New session 11 of user core.
Feb 9 18:24:37.331984 systemd[1]: Started session-11.scope.
Feb 9 18:24:37.460294 sshd[3420]: pam_unix(sshd:session): session closed for user core
Feb 9 18:24:37.463261 systemd[1]: sshd@10-10.0.0.26:22-10.0.0.1:51094.service: Deactivated successfully.
Feb 9 18:24:37.464024 systemd[1]: session-11.scope: Deactivated successfully.
Feb 9 18:24:37.464572 systemd-logind[1131]: Session 11 logged out. Waiting for processes to exit.
Feb 9 18:24:37.465440 systemd-logind[1131]: Removed session 11.
Feb 9 18:24:42.466106 systemd[1]: Started sshd@11-10.0.0.26:22-10.0.0.1:51100.service.
Feb 9 18:24:42.508438 sshd[3437]: Accepted publickey for core from 10.0.0.1 port 51100 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:24:42.509639 sshd[3437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:24:42.513272 systemd-logind[1131]: New session 12 of user core.
Feb 9 18:24:42.513780 systemd[1]: Started session-12.scope.
Feb 9 18:24:42.625623 sshd[3437]: pam_unix(sshd:session): session closed for user core
Feb 9 18:24:42.629819 systemd[1]: Started sshd@12-10.0.0.26:22-10.0.0.1:53016.service.
Feb 9 18:24:42.630396 systemd[1]: sshd@11-10.0.0.26:22-10.0.0.1:51100.service: Deactivated successfully.
Feb 9 18:24:42.631066 systemd[1]: session-12.scope: Deactivated successfully.
Feb 9 18:24:42.631615 systemd-logind[1131]: Session 12 logged out. Waiting for processes to exit.
Feb 9 18:24:42.632404 systemd-logind[1131]: Removed session 12.
Feb 9 18:24:42.672402 sshd[3449]: Accepted publickey for core from 10.0.0.1 port 53016 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:24:42.673578 sshd[3449]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:24:42.676633 systemd-logind[1131]: New session 13 of user core.
Feb 9 18:24:42.677544 systemd[1]: Started session-13.scope.
Feb 9 18:24:42.861403 sshd[3449]: pam_unix(sshd:session): session closed for user core
Feb 9 18:24:42.865114 systemd[1]: Started sshd@13-10.0.0.26:22-10.0.0.1:53032.service.
Feb 9 18:24:42.865615 systemd[1]: sshd@12-10.0.0.26:22-10.0.0.1:53016.service: Deactivated successfully.
Feb 9 18:24:42.866429 systemd[1]: session-13.scope: Deactivated successfully.
Feb 9 18:24:42.867089 systemd-logind[1131]: Session 13 logged out. Waiting for processes to exit.
Feb 9 18:24:42.869128 systemd-logind[1131]: Removed session 13.
Feb 9 18:24:42.918420 sshd[3460]: Accepted publickey for core from 10.0.0.1 port 53032 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:24:42.919747 sshd[3460]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:24:42.923792 systemd-logind[1131]: New session 14 of user core.
Feb 9 18:24:42.924708 systemd[1]: Started session-14.scope.
Feb 9 18:24:43.701477 sshd[3460]: pam_unix(sshd:session): session closed for user core
Feb 9 18:24:43.705687 systemd[1]: Started sshd@14-10.0.0.26:22-10.0.0.1:53046.service.
Feb 9 18:24:43.706777 systemd[1]: sshd@13-10.0.0.26:22-10.0.0.1:53032.service: Deactivated successfully.
Feb 9 18:24:43.707460 systemd[1]: session-14.scope: Deactivated successfully.
Feb 9 18:24:43.708853 systemd-logind[1131]: Session 14 logged out. Waiting for processes to exit.
Feb 9 18:24:43.709632 systemd-logind[1131]: Removed session 14.
Feb 9 18:24:43.755191 sshd[3481]: Accepted publickey for core from 10.0.0.1 port 53046 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:24:43.756483 sshd[3481]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:24:43.760423 systemd-logind[1131]: New session 15 of user core.
Feb 9 18:24:43.761362 systemd[1]: Started session-15.scope.
Feb 9 18:24:44.058559 sshd[3481]: pam_unix(sshd:session): session closed for user core
Feb 9 18:24:44.062362 systemd[1]: Started sshd@15-10.0.0.26:22-10.0.0.1:53048.service.
Feb 9 18:24:44.065295 systemd[1]: session-15.scope: Deactivated successfully.
Feb 9 18:24:44.065897 systemd-logind[1131]: Session 15 logged out. Waiting for processes to exit.
Feb 9 18:24:44.066057 systemd[1]: sshd@14-10.0.0.26:22-10.0.0.1:53046.service: Deactivated successfully.
Feb 9 18:24:44.067119 systemd-logind[1131]: Removed session 15.
Feb 9 18:24:44.112795 sshd[3493]: Accepted publickey for core from 10.0.0.1 port 53048 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:24:44.114142 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:24:44.117507 systemd-logind[1131]: New session 16 of user core.
Feb 9 18:24:44.118692 systemd[1]: Started session-16.scope.
Feb 9 18:24:44.234209 sshd[3493]: pam_unix(sshd:session): session closed for user core
Feb 9 18:24:44.236722 systemd[1]: sshd@15-10.0.0.26:22-10.0.0.1:53048.service: Deactivated successfully.
Feb 9 18:24:44.237491 systemd[1]: session-16.scope: Deactivated successfully.
Feb 9 18:24:44.238021 systemd-logind[1131]: Session 16 logged out. Waiting for processes to exit.
Feb 9 18:24:44.238654 systemd-logind[1131]: Removed session 16.
Feb 9 18:24:49.239402 systemd[1]: Started sshd@16-10.0.0.26:22-10.0.0.1:53062.service.
Feb 9 18:24:49.282545 sshd[3507]: Accepted publickey for core from 10.0.0.1 port 53062 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:24:49.284147 sshd[3507]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:24:49.287920 systemd-logind[1131]: New session 17 of user core.
Feb 9 18:24:49.288683 systemd[1]: Started session-17.scope.
Feb 9 18:24:49.404498 sshd[3507]: pam_unix(sshd:session): session closed for user core
Feb 9 18:24:49.407051 systemd[1]: sshd@16-10.0.0.26:22-10.0.0.1:53062.service: Deactivated successfully.
Feb 9 18:24:49.407771 systemd[1]: session-17.scope: Deactivated successfully.
Feb 9 18:24:49.408401 systemd-logind[1131]: Session 17 logged out. Waiting for processes to exit.
Feb 9 18:24:49.408997 systemd-logind[1131]: Removed session 17.
Feb 9 18:24:54.410346 systemd[1]: Started sshd@17-10.0.0.26:22-10.0.0.1:42816.service.
Feb 9 18:24:54.453276 sshd[3527]: Accepted publickey for core from 10.0.0.1 port 42816 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:24:54.454522 sshd[3527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:24:54.457831 systemd-logind[1131]: New session 18 of user core.
Feb 9 18:24:54.458762 systemd[1]: Started session-18.scope.
Feb 9 18:24:54.568710 sshd[3527]: pam_unix(sshd:session): session closed for user core
Feb 9 18:24:54.571763 systemd[1]: sshd@17-10.0.0.26:22-10.0.0.1:42816.service: Deactivated successfully.
Feb 9 18:24:54.572651 systemd[1]: session-18.scope: Deactivated successfully.
Feb 9 18:24:54.573223 systemd-logind[1131]: Session 18 logged out. Waiting for processes to exit.
Feb 9 18:24:54.573888 systemd-logind[1131]: Removed session 18.
Feb 9 18:24:59.571916 systemd[1]: Started sshd@18-10.0.0.26:22-10.0.0.1:42822.service.
Feb 9 18:24:59.616210 sshd[3540]: Accepted publickey for core from 10.0.0.1 port 42822 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:24:59.617876 sshd[3540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:24:59.621401 systemd-logind[1131]: New session 19 of user core.
Feb 9 18:24:59.622251 systemd[1]: Started session-19.scope.
Feb 9 18:24:59.729148 sshd[3540]: pam_unix(sshd:session): session closed for user core
Feb 9 18:24:59.732036 systemd[1]: sshd@18-10.0.0.26:22-10.0.0.1:42822.service: Deactivated successfully.
Feb 9 18:24:59.732903 systemd[1]: session-19.scope: Deactivated successfully.
Feb 9 18:24:59.733401 systemd-logind[1131]: Session 19 logged out. Waiting for processes to exit.
Feb 9 18:24:59.734115 systemd-logind[1131]: Removed session 19.
Feb 9 18:25:04.733330 systemd[1]: Started sshd@19-10.0.0.26:22-10.0.0.1:42894.service.
Feb 9 18:25:04.775728 sshd[3554]: Accepted publickey for core from 10.0.0.1 port 42894 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:25:04.777182 sshd[3554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:25:04.780963 systemd-logind[1131]: New session 20 of user core.
Feb 9 18:25:04.781738 systemd[1]: Started session-20.scope.
Feb 9 18:25:04.889159 sshd[3554]: pam_unix(sshd:session): session closed for user core
Feb 9 18:25:04.892986 systemd[1]: Started sshd@20-10.0.0.26:22-10.0.0.1:42896.service.
Feb 9 18:25:04.893513 systemd[1]: sshd@19-10.0.0.26:22-10.0.0.1:42894.service: Deactivated successfully.
Feb 9 18:25:04.894190 systemd[1]: session-20.scope: Deactivated successfully.
Feb 9 18:25:04.894791 systemd-logind[1131]: Session 20 logged out. Waiting for processes to exit.
Feb 9 18:25:04.895553 systemd-logind[1131]: Removed session 20.
Feb 9 18:25:04.936326 sshd[3566]: Accepted publickey for core from 10.0.0.1 port 42896 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:25:04.937722 sshd[3566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:25:04.940754 systemd-logind[1131]: New session 21 of user core.
Feb 9 18:25:04.941581 systemd[1]: Started session-21.scope.
Feb 9 18:25:07.021768 env[1143]: time="2024-02-09T18:25:07.021719279Z" level=info msg="StopContainer for \"0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0\" with timeout 30 (s)"
Feb 9 18:25:07.022175 env[1143]: time="2024-02-09T18:25:07.022028573Z" level=info msg="Stop container \"0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0\" with signal terminated"
Feb 9 18:25:07.034236 systemd[1]: cri-containerd-0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0.scope: Deactivated successfully.
Feb 9 18:25:07.059174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0-rootfs.mount: Deactivated successfully.
Feb 9 18:25:07.067591 env[1143]: time="2024-02-09T18:25:07.067276573Z" level=info msg="shim disconnected" id=0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0
Feb 9 18:25:07.067591 env[1143]: time="2024-02-09T18:25:07.067327448Z" level=warning msg="cleaning up after shim disconnected" id=0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0 namespace=k8s.io
Feb 9 18:25:07.067591 env[1143]: time="2024-02-09T18:25:07.067338607Z" level=info msg="cleaning up dead shim"
Feb 9 18:25:07.068005 env[1143]: time="2024-02-09T18:25:07.067958354Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 18:25:07.073194 env[1143]: time="2024-02-09T18:25:07.073146869Z" level=info msg="StopContainer for \"97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2\" with timeout 2 (s)"
Feb 9 18:25:07.073463 env[1143]: time="2024-02-09T18:25:07.073422686Z" level=info msg="Stop container \"97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2\" with signal terminated"
Feb 9 18:25:07.074682 env[1143]: time="2024-02-09T18:25:07.074631342Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:25:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3614 runtime=io.containerd.runc.v2\n"
Feb 9 18:25:07.076898 env[1143]: time="2024-02-09T18:25:07.076865470Z" level=info msg="StopContainer for \"0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0\" returns successfully"
Feb 9 18:25:07.077435 env[1143]: time="2024-02-09T18:25:07.077396905Z" level=info msg="StopPodSandbox for \"06a0db7f74b59ceead01857d526d98677b69f8cf806d4377316966bd39a6e3d4\""
Feb 9 18:25:07.077480 env[1143]: time="2024-02-09T18:25:07.077458659Z" level=info msg="Container to stop \"0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:25:07.078901 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06a0db7f74b59ceead01857d526d98677b69f8cf806d4377316966bd39a6e3d4-shm.mount: Deactivated successfully.
Feb 9 18:25:07.080579 systemd-networkd[1039]: lxc_health: Link DOWN
Feb 9 18:25:07.080585 systemd-networkd[1039]: lxc_health: Lost carrier
Feb 9 18:25:07.088768 systemd[1]: cri-containerd-06a0db7f74b59ceead01857d526d98677b69f8cf806d4377316966bd39a6e3d4.scope: Deactivated successfully.
Feb 9 18:25:07.110264 systemd[1]: cri-containerd-97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2.scope: Deactivated successfully.
Feb 9 18:25:07.110575 systemd[1]: cri-containerd-97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2.scope: Consumed 6.454s CPU time.
Feb 9 18:25:07.116887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06a0db7f74b59ceead01857d526d98677b69f8cf806d4377316966bd39a6e3d4-rootfs.mount: Deactivated successfully.
Feb 9 18:25:07.122436 env[1143]: time="2024-02-09T18:25:07.122384687Z" level=info msg="shim disconnected" id=06a0db7f74b59ceead01857d526d98677b69f8cf806d4377316966bd39a6e3d4 Feb 9 18:25:07.122436 env[1143]: time="2024-02-09T18:25:07.122429963Z" level=warning msg="cleaning up after shim disconnected" id=06a0db7f74b59ceead01857d526d98677b69f8cf806d4377316966bd39a6e3d4 namespace=k8s.io Feb 9 18:25:07.122436 env[1143]: time="2024-02-09T18:25:07.122439882Z" level=info msg="cleaning up dead shim" Feb 9 18:25:07.130633 env[1143]: time="2024-02-09T18:25:07.130584824Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:25:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3669 runtime=io.containerd.runc.v2\n" Feb 9 18:25:07.131003 env[1143]: time="2024-02-09T18:25:07.130973750Z" level=info msg="TearDown network for sandbox \"06a0db7f74b59ceead01857d526d98677b69f8cf806d4377316966bd39a6e3d4\" successfully" Feb 9 18:25:07.131050 env[1143]: time="2024-02-09T18:25:07.131002508Z" level=info msg="StopPodSandbox for \"06a0db7f74b59ceead01857d526d98677b69f8cf806d4377316966bd39a6e3d4\" returns successfully" Feb 9 18:25:07.131078 env[1143]: time="2024-02-09T18:25:07.131050024Z" level=info msg="shim disconnected" id=97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2 Feb 9 18:25:07.131103 env[1143]: time="2024-02-09T18:25:07.131087221Z" level=warning msg="cleaning up after shim disconnected" id=97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2 namespace=k8s.io Feb 9 18:25:07.131103 env[1143]: time="2024-02-09T18:25:07.131096820Z" level=info msg="cleaning up dead shim" Feb 9 18:25:07.139212 env[1143]: time="2024-02-09T18:25:07.139169688Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:25:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3682 runtime=io.containerd.runc.v2\n" Feb 9 18:25:07.142898 env[1143]: time="2024-02-09T18:25:07.142834213Z" level=info msg="StopContainer for 
\"97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2\" returns successfully" Feb 9 18:25:07.143572 env[1143]: time="2024-02-09T18:25:07.143533233Z" level=info msg="StopPodSandbox for \"4df2d07b7cdd8bd55aec9a8a21700e19e8b8ce359a17b5eada74611fce3e1d70\"" Feb 9 18:25:07.143644 env[1143]: time="2024-02-09T18:25:07.143599228Z" level=info msg="Container to stop \"664f20d37c9caf106392e3ba51808b0ecbc2ff3f2024eaca4c08e969db8ed1d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:25:07.143644 env[1143]: time="2024-02-09T18:25:07.143613267Z" level=info msg="Container to stop \"848ab7a5d59caa8353f89949e0009fb887f0d0fcac9ac237855bc84dbf4a3e8c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:25:07.143644 env[1143]: time="2024-02-09T18:25:07.143627465Z" level=info msg="Container to stop \"9bd8adb3cc435e00fbe0fda6d715a5dd5e93ed6e39e9009499a9eb99a2a8f403\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:25:07.143644 env[1143]: time="2024-02-09T18:25:07.143638864Z" level=info msg="Container to stop \"fe2c488f632ca45f9c1f7e2937748f3bf2c417025974ac1598a45e4a4c8e065d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:25:07.144200 env[1143]: time="2024-02-09T18:25:07.143650223Z" level=info msg="Container to stop \"97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:25:07.150224 systemd[1]: cri-containerd-4df2d07b7cdd8bd55aec9a8a21700e19e8b8ce359a17b5eada74611fce3e1d70.scope: Deactivated successfully. 
Feb 9 18:25:07.174412 env[1143]: time="2024-02-09T18:25:07.174365909Z" level=info msg="shim disconnected" id=4df2d07b7cdd8bd55aec9a8a21700e19e8b8ce359a17b5eada74611fce3e1d70 Feb 9 18:25:07.174877 env[1143]: time="2024-02-09T18:25:07.174830950Z" level=warning msg="cleaning up after shim disconnected" id=4df2d07b7cdd8bd55aec9a8a21700e19e8b8ce359a17b5eada74611fce3e1d70 namespace=k8s.io Feb 9 18:25:07.174987 env[1143]: time="2024-02-09T18:25:07.174970498Z" level=info msg="cleaning up dead shim" Feb 9 18:25:07.186948 env[1143]: time="2024-02-09T18:25:07.186904434Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:25:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3712 runtime=io.containerd.runc.v2\n" Feb 9 18:25:07.187369 env[1143]: time="2024-02-09T18:25:07.187338757Z" level=info msg="TearDown network for sandbox \"4df2d07b7cdd8bd55aec9a8a21700e19e8b8ce359a17b5eada74611fce3e1d70\" successfully" Feb 9 18:25:07.187463 env[1143]: time="2024-02-09T18:25:07.187445788Z" level=info msg="StopPodSandbox for \"4df2d07b7cdd8bd55aec9a8a21700e19e8b8ce359a17b5eada74611fce3e1d70\" returns successfully" Feb 9 18:25:07.200975 kubelet[1976]: E0209 18:25:07.200930 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:25:07.335082 kubelet[1976]: I0209 18:25:07.333965 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-etc-cni-netd\") pod \"0c439772-55db-4970-811b-a36c747777e4\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " Feb 9 18:25:07.335082 kubelet[1976]: I0209 18:25:07.334007 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-lib-modules\") pod 
\"0c439772-55db-4970-811b-a36c747777e4\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " Feb 9 18:25:07.335082 kubelet[1976]: I0209 18:25:07.334030 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-cilium-run\") pod \"0c439772-55db-4970-811b-a36c747777e4\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " Feb 9 18:25:07.335082 kubelet[1976]: I0209 18:25:07.334046 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-host-proc-sys-net\") pod \"0c439772-55db-4970-811b-a36c747777e4\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " Feb 9 18:25:07.335082 kubelet[1976]: I0209 18:25:07.334073 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qhqv\" (UniqueName: \"kubernetes.io/projected/0c439772-55db-4970-811b-a36c747777e4-kube-api-access-2qhqv\") pod \"0c439772-55db-4970-811b-a36c747777e4\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " Feb 9 18:25:07.335082 kubelet[1976]: I0209 18:25:07.334092 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-host-proc-sys-kernel\") pod \"0c439772-55db-4970-811b-a36c747777e4\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " Feb 9 18:25:07.335331 kubelet[1976]: I0209 18:25:07.334113 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e671890d-424b-45f7-b3f7-99e54cbfc07e-cilium-config-path\") pod \"e671890d-424b-45f7-b3f7-99e54cbfc07e\" (UID: \"e671890d-424b-45f7-b3f7-99e54cbfc07e\") " Feb 9 18:25:07.335331 kubelet[1976]: I0209 18:25:07.334134 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume 
started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c439772-55db-4970-811b-a36c747777e4-clustermesh-secrets\") pod \"0c439772-55db-4970-811b-a36c747777e4\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " Feb 9 18:25:07.335331 kubelet[1976]: I0209 18:25:07.334150 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-hostproc\") pod \"0c439772-55db-4970-811b-a36c747777e4\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " Feb 9 18:25:07.335331 kubelet[1976]: I0209 18:25:07.334168 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c439772-55db-4970-811b-a36c747777e4-hubble-tls\") pod \"0c439772-55db-4970-811b-a36c747777e4\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " Feb 9 18:25:07.335331 kubelet[1976]: I0209 18:25:07.334189 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c439772-55db-4970-811b-a36c747777e4-cilium-config-path\") pod \"0c439772-55db-4970-811b-a36c747777e4\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " Feb 9 18:25:07.335331 kubelet[1976]: I0209 18:25:07.334208 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zckrb\" (UniqueName: \"kubernetes.io/projected/e671890d-424b-45f7-b3f7-99e54cbfc07e-kube-api-access-zckrb\") pod \"e671890d-424b-45f7-b3f7-99e54cbfc07e\" (UID: \"e671890d-424b-45f7-b3f7-99e54cbfc07e\") " Feb 9 18:25:07.335464 kubelet[1976]: I0209 18:25:07.334226 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-cni-path\") pod \"0c439772-55db-4970-811b-a36c747777e4\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " Feb 9 18:25:07.335464 
kubelet[1976]: I0209 18:25:07.334243 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-bpf-maps\") pod \"0c439772-55db-4970-811b-a36c747777e4\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " Feb 9 18:25:07.335464 kubelet[1976]: I0209 18:25:07.334260 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-xtables-lock\") pod \"0c439772-55db-4970-811b-a36c747777e4\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " Feb 9 18:25:07.335464 kubelet[1976]: I0209 18:25:07.334292 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-cilium-cgroup\") pod \"0c439772-55db-4970-811b-a36c747777e4\" (UID: \"0c439772-55db-4970-811b-a36c747777e4\") " Feb 9 18:25:07.335464 kubelet[1976]: I0209 18:25:07.334667 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0c439772-55db-4970-811b-a36c747777e4" (UID: "0c439772-55db-4970-811b-a36c747777e4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:25:07.335464 kubelet[1976]: I0209 18:25:07.334685 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0c439772-55db-4970-811b-a36c747777e4" (UID: "0c439772-55db-4970-811b-a36c747777e4"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:25:07.335592 kubelet[1976]: I0209 18:25:07.334697 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-hostproc" (OuterVolumeSpecName: "hostproc") pod "0c439772-55db-4970-811b-a36c747777e4" (UID: "0c439772-55db-4970-811b-a36c747777e4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:25:07.335592 kubelet[1976]: I0209 18:25:07.334664 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0c439772-55db-4970-811b-a36c747777e4" (UID: "0c439772-55db-4970-811b-a36c747777e4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:25:07.335592 kubelet[1976]: I0209 18:25:07.334727 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0c439772-55db-4970-811b-a36c747777e4" (UID: "0c439772-55db-4970-811b-a36c747777e4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:25:07.335592 kubelet[1976]: I0209 18:25:07.334741 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-cni-path" (OuterVolumeSpecName: "cni-path") pod "0c439772-55db-4970-811b-a36c747777e4" (UID: "0c439772-55db-4970-811b-a36c747777e4"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:25:07.335868 kubelet[1976]: I0209 18:25:07.335738 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0c439772-55db-4970-811b-a36c747777e4" (UID: "0c439772-55db-4970-811b-a36c747777e4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:25:07.335868 kubelet[1976]: I0209 18:25:07.335733 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0c439772-55db-4970-811b-a36c747777e4" (UID: "0c439772-55db-4970-811b-a36c747777e4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:25:07.336039 kubelet[1976]: I0209 18:25:07.336004 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0c439772-55db-4970-811b-a36c747777e4" (UID: "0c439772-55db-4970-811b-a36c747777e4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:25:07.336039 kubelet[1976]: I0209 18:25:07.336040 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0c439772-55db-4970-811b-a36c747777e4" (UID: "0c439772-55db-4970-811b-a36c747777e4"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:25:07.338120 kubelet[1976]: I0209 18:25:07.336515 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c439772-55db-4970-811b-a36c747777e4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0c439772-55db-4970-811b-a36c747777e4" (UID: "0c439772-55db-4970-811b-a36c747777e4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:25:07.338120 kubelet[1976]: I0209 18:25:07.338043 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e671890d-424b-45f7-b3f7-99e54cbfc07e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e671890d-424b-45f7-b3f7-99e54cbfc07e" (UID: "e671890d-424b-45f7-b3f7-99e54cbfc07e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:25:07.338832 kubelet[1976]: I0209 18:25:07.338794 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c439772-55db-4970-811b-a36c747777e4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0c439772-55db-4970-811b-a36c747777e4" (UID: "0c439772-55db-4970-811b-a36c747777e4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:25:07.338936 kubelet[1976]: I0209 18:25:07.338825 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c439772-55db-4970-811b-a36c747777e4-kube-api-access-2qhqv" (OuterVolumeSpecName: "kube-api-access-2qhqv") pod "0c439772-55db-4970-811b-a36c747777e4" (UID: "0c439772-55db-4970-811b-a36c747777e4"). InnerVolumeSpecName "kube-api-access-2qhqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:25:07.339091 kubelet[1976]: I0209 18:25:07.339055 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e671890d-424b-45f7-b3f7-99e54cbfc07e-kube-api-access-zckrb" (OuterVolumeSpecName: "kube-api-access-zckrb") pod "e671890d-424b-45f7-b3f7-99e54cbfc07e" (UID: "e671890d-424b-45f7-b3f7-99e54cbfc07e"). InnerVolumeSpecName "kube-api-access-zckrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:25:07.340634 kubelet[1976]: I0209 18:25:07.340606 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c439772-55db-4970-811b-a36c747777e4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0c439772-55db-4970-811b-a36c747777e4" (UID: "0c439772-55db-4970-811b-a36c747777e4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:25:07.374664 kubelet[1976]: I0209 18:25:07.374635 1976 scope.go:117] "RemoveContainer" containerID="97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2" Feb 9 18:25:07.379149 env[1143]: time="2024-02-09T18:25:07.379106672Z" level=info msg="RemoveContainer for \"97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2\"" Feb 9 18:25:07.380591 systemd[1]: Removed slice kubepods-besteffort-pode671890d_424b_45f7_b3f7_99e54cbfc07e.slice. Feb 9 18:25:07.382360 env[1143]: time="2024-02-09T18:25:07.382319677Z" level=info msg="RemoveContainer for \"97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2\" returns successfully" Feb 9 18:25:07.382593 kubelet[1976]: I0209 18:25:07.382559 1976 scope.go:117] "RemoveContainer" containerID="848ab7a5d59caa8353f89949e0009fb887f0d0fcac9ac237855bc84dbf4a3e8c" Feb 9 18:25:07.383798 systemd[1]: Removed slice kubepods-burstable-pod0c439772_55db_4970_811b_a36c747777e4.slice. 
Feb 9 18:25:07.383903 systemd[1]: kubepods-burstable-pod0c439772_55db_4970_811b_a36c747777e4.slice: Consumed 6.756s CPU time. Feb 9 18:25:07.385981 env[1143]: time="2024-02-09T18:25:07.385762502Z" level=info msg="RemoveContainer for \"848ab7a5d59caa8353f89949e0009fb887f0d0fcac9ac237855bc84dbf4a3e8c\"" Feb 9 18:25:07.389073 env[1143]: time="2024-02-09T18:25:07.388891633Z" level=info msg="RemoveContainer for \"848ab7a5d59caa8353f89949e0009fb887f0d0fcac9ac237855bc84dbf4a3e8c\" returns successfully" Feb 9 18:25:07.389466 kubelet[1976]: I0209 18:25:07.389421 1976 scope.go:117] "RemoveContainer" containerID="fe2c488f632ca45f9c1f7e2937748f3bf2c417025974ac1598a45e4a4c8e065d" Feb 9 18:25:07.391598 env[1143]: time="2024-02-09T18:25:07.391566444Z" level=info msg="RemoveContainer for \"fe2c488f632ca45f9c1f7e2937748f3bf2c417025974ac1598a45e4a4c8e065d\"" Feb 9 18:25:07.394717 env[1143]: time="2024-02-09T18:25:07.394512231Z" level=info msg="RemoveContainer for \"fe2c488f632ca45f9c1f7e2937748f3bf2c417025974ac1598a45e4a4c8e065d\" returns successfully" Feb 9 18:25:07.394815 kubelet[1976]: I0209 18:25:07.394684 1976 scope.go:117] "RemoveContainer" containerID="664f20d37c9caf106392e3ba51808b0ecbc2ff3f2024eaca4c08e969db8ed1d8" Feb 9 18:25:07.396315 env[1143]: time="2024-02-09T18:25:07.395717568Z" level=info msg="RemoveContainer for \"664f20d37c9caf106392e3ba51808b0ecbc2ff3f2024eaca4c08e969db8ed1d8\"" Feb 9 18:25:07.399131 env[1143]: time="2024-02-09T18:25:07.399098318Z" level=info msg="RemoveContainer for \"664f20d37c9caf106392e3ba51808b0ecbc2ff3f2024eaca4c08e969db8ed1d8\" returns successfully" Feb 9 18:25:07.399314 kubelet[1976]: I0209 18:25:07.399281 1976 scope.go:117] "RemoveContainer" containerID="9bd8adb3cc435e00fbe0fda6d715a5dd5e93ed6e39e9009499a9eb99a2a8f403" Feb 9 18:25:07.401111 env[1143]: time="2024-02-09T18:25:07.401084268Z" level=info msg="RemoveContainer for \"9bd8adb3cc435e00fbe0fda6d715a5dd5e93ed6e39e9009499a9eb99a2a8f403\"" Feb 9 18:25:07.403134 env[1143]: 
time="2024-02-09T18:25:07.403104335Z" level=info msg="RemoveContainer for \"9bd8adb3cc435e00fbe0fda6d715a5dd5e93ed6e39e9009499a9eb99a2a8f403\" returns successfully" Feb 9 18:25:07.403312 kubelet[1976]: I0209 18:25:07.403295 1976 scope.go:117] "RemoveContainer" containerID="97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2" Feb 9 18:25:07.403529 env[1143]: time="2024-02-09T18:25:07.403465224Z" level=error msg="ContainerStatus for \"97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2\": not found" Feb 9 18:25:07.403660 kubelet[1976]: E0209 18:25:07.403644 1976 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2\": not found" containerID="97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2" Feb 9 18:25:07.403921 kubelet[1976]: I0209 18:25:07.403904 1976 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2"} err="failed to get container status \"97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2\": rpc error: code = NotFound desc = an error occurred when try to find container \"97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2\": not found" Feb 9 18:25:07.403956 kubelet[1976]: I0209 18:25:07.403930 1976 scope.go:117] "RemoveContainer" containerID="848ab7a5d59caa8353f89949e0009fb887f0d0fcac9ac237855bc84dbf4a3e8c" Feb 9 18:25:07.404171 env[1143]: time="2024-02-09T18:25:07.404120408Z" level=error msg="ContainerStatus for \"848ab7a5d59caa8353f89949e0009fb887f0d0fcac9ac237855bc84dbf4a3e8c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to 
find container \"848ab7a5d59caa8353f89949e0009fb887f0d0fcac9ac237855bc84dbf4a3e8c\": not found" Feb 9 18:25:07.404314 kubelet[1976]: E0209 18:25:07.404299 1976 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"848ab7a5d59caa8353f89949e0009fb887f0d0fcac9ac237855bc84dbf4a3e8c\": not found" containerID="848ab7a5d59caa8353f89949e0009fb887f0d0fcac9ac237855bc84dbf4a3e8c" Feb 9 18:25:07.404367 kubelet[1976]: I0209 18:25:07.404333 1976 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"848ab7a5d59caa8353f89949e0009fb887f0d0fcac9ac237855bc84dbf4a3e8c"} err="failed to get container status \"848ab7a5d59caa8353f89949e0009fb887f0d0fcac9ac237855bc84dbf4a3e8c\": rpc error: code = NotFound desc = an error occurred when try to find container \"848ab7a5d59caa8353f89949e0009fb887f0d0fcac9ac237855bc84dbf4a3e8c\": not found" Feb 9 18:25:07.404367 kubelet[1976]: I0209 18:25:07.404343 1976 scope.go:117] "RemoveContainer" containerID="fe2c488f632ca45f9c1f7e2937748f3bf2c417025974ac1598a45e4a4c8e065d" Feb 9 18:25:07.404535 env[1143]: time="2024-02-09T18:25:07.404469658Z" level=error msg="ContainerStatus for \"fe2c488f632ca45f9c1f7e2937748f3bf2c417025974ac1598a45e4a4c8e065d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe2c488f632ca45f9c1f7e2937748f3bf2c417025974ac1598a45e4a4c8e065d\": not found" Feb 9 18:25:07.404636 kubelet[1976]: E0209 18:25:07.404619 1976 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe2c488f632ca45f9c1f7e2937748f3bf2c417025974ac1598a45e4a4c8e065d\": not found" containerID="fe2c488f632ca45f9c1f7e2937748f3bf2c417025974ac1598a45e4a4c8e065d" Feb 9 18:25:07.404681 kubelet[1976]: I0209 18:25:07.404661 1976 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"containerd","ID":"fe2c488f632ca45f9c1f7e2937748f3bf2c417025974ac1598a45e4a4c8e065d"} err="failed to get container status \"fe2c488f632ca45f9c1f7e2937748f3bf2c417025974ac1598a45e4a4c8e065d\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe2c488f632ca45f9c1f7e2937748f3bf2c417025974ac1598a45e4a4c8e065d\": not found" Feb 9 18:25:07.404681 kubelet[1976]: I0209 18:25:07.404676 1976 scope.go:117] "RemoveContainer" containerID="664f20d37c9caf106392e3ba51808b0ecbc2ff3f2024eaca4c08e969db8ed1d8" Feb 9 18:25:07.404829 env[1143]: time="2024-02-09T18:25:07.404790950Z" level=error msg="ContainerStatus for \"664f20d37c9caf106392e3ba51808b0ecbc2ff3f2024eaca4c08e969db8ed1d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"664f20d37c9caf106392e3ba51808b0ecbc2ff3f2024eaca4c08e969db8ed1d8\": not found" Feb 9 18:25:07.404949 kubelet[1976]: E0209 18:25:07.404936 1976 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"664f20d37c9caf106392e3ba51808b0ecbc2ff3f2024eaca4c08e969db8ed1d8\": not found" containerID="664f20d37c9caf106392e3ba51808b0ecbc2ff3f2024eaca4c08e969db8ed1d8" Feb 9 18:25:07.404994 kubelet[1976]: I0209 18:25:07.404974 1976 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"664f20d37c9caf106392e3ba51808b0ecbc2ff3f2024eaca4c08e969db8ed1d8"} err="failed to get container status \"664f20d37c9caf106392e3ba51808b0ecbc2ff3f2024eaca4c08e969db8ed1d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"664f20d37c9caf106392e3ba51808b0ecbc2ff3f2024eaca4c08e969db8ed1d8\": not found" Feb 9 18:25:07.404994 kubelet[1976]: I0209 18:25:07.404984 1976 scope.go:117] "RemoveContainer" containerID="9bd8adb3cc435e00fbe0fda6d715a5dd5e93ed6e39e9009499a9eb99a2a8f403" Feb 9 18:25:07.405132 env[1143]: 
time="2024-02-09T18:25:07.405096364Z" level=error msg="ContainerStatus for \"9bd8adb3cc435e00fbe0fda6d715a5dd5e93ed6e39e9009499a9eb99a2a8f403\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9bd8adb3cc435e00fbe0fda6d715a5dd5e93ed6e39e9009499a9eb99a2a8f403\": not found" Feb 9 18:25:07.405230 kubelet[1976]: E0209 18:25:07.405217 1976 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9bd8adb3cc435e00fbe0fda6d715a5dd5e93ed6e39e9009499a9eb99a2a8f403\": not found" containerID="9bd8adb3cc435e00fbe0fda6d715a5dd5e93ed6e39e9009499a9eb99a2a8f403" Feb 9 18:25:07.405269 kubelet[1976]: I0209 18:25:07.405239 1976 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9bd8adb3cc435e00fbe0fda6d715a5dd5e93ed6e39e9009499a9eb99a2a8f403"} err="failed to get container status \"9bd8adb3cc435e00fbe0fda6d715a5dd5e93ed6e39e9009499a9eb99a2a8f403\": rpc error: code = NotFound desc = an error occurred when try to find container \"9bd8adb3cc435e00fbe0fda6d715a5dd5e93ed6e39e9009499a9eb99a2a8f403\": not found" Feb 9 18:25:07.405269 kubelet[1976]: I0209 18:25:07.405247 1976 scope.go:117] "RemoveContainer" containerID="0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0" Feb 9 18:25:07.405981 env[1143]: time="2024-02-09T18:25:07.405960450Z" level=info msg="RemoveContainer for \"0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0\"" Feb 9 18:25:07.408233 env[1143]: time="2024-02-09T18:25:07.408196818Z" level=info msg="RemoveContainer for \"0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0\" returns successfully" Feb 9 18:25:07.408378 kubelet[1976]: I0209 18:25:07.408358 1976 scope.go:117] "RemoveContainer" containerID="0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0" Feb 9 18:25:07.408555 env[1143]: time="2024-02-09T18:25:07.408503992Z" 
level=error msg="ContainerStatus for \"0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0\": not found" Feb 9 18:25:07.408658 kubelet[1976]: E0209 18:25:07.408640 1976 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0\": not found" containerID="0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0" Feb 9 18:25:07.408699 kubelet[1976]: I0209 18:25:07.408670 1976 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0"} err="failed to get container status \"0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0\": rpc error: code = NotFound desc = an error occurred when try to find container \"0be25da8cb2b93efd1985b512ec125893b6ef61bac06d943cb870bb5491f49c0\": not found" Feb 9 18:25:07.434995 kubelet[1976]: I0209 18:25:07.434970 1976 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2qhqv\" (UniqueName: \"kubernetes.io/projected/0c439772-55db-4970-811b-a36c747777e4-kube-api-access-2qhqv\") on node \"localhost\" DevicePath \"\"" Feb 9 18:25:07.435129 kubelet[1976]: I0209 18:25:07.435117 1976 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 9 18:25:07.435205 kubelet[1976]: I0209 18:25:07.435196 1976 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e671890d-424b-45f7-b3f7-99e54cbfc07e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" 
Feb 9 18:25:07.435275 kubelet[1976]: I0209 18:25:07.435267 1976 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c439772-55db-4970-811b-a36c747777e4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 18:25:07.435347 kubelet[1976]: I0209 18:25:07.435329 1976 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 9 18:25:07.435408 kubelet[1976]: I0209 18:25:07.435398 1976 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c439772-55db-4970-811b-a36c747777e4-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 9 18:25:07.435482 kubelet[1976]: I0209 18:25:07.435473 1976 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c439772-55db-4970-811b-a36c747777e4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 18:25:07.435557 kubelet[1976]: I0209 18:25:07.435549 1976 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zckrb\" (UniqueName: \"kubernetes.io/projected/e671890d-424b-45f7-b3f7-99e54cbfc07e-kube-api-access-zckrb\") on node \"localhost\" DevicePath \"\"" Feb 9 18:25:07.435632 kubelet[1976]: I0209 18:25:07.435622 1976 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 9 18:25:07.435704 kubelet[1976]: I0209 18:25:07.435686 1976 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 9 18:25:07.435775 kubelet[1976]: I0209 18:25:07.435760 1976 reconciler_common.go:300] "Volume detached for 
volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 9 18:25:07.435856 kubelet[1976]: I0209 18:25:07.435829 1976 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 9 18:25:07.435936 kubelet[1976]: I0209 18:25:07.435925 1976 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 9 18:25:07.436021 kubelet[1976]: I0209 18:25:07.436011 1976 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 9 18:25:07.436091 kubelet[1976]: I0209 18:25:07.436083 1976 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 9 18:25:07.436163 kubelet[1976]: I0209 18:25:07.436144 1976 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c439772-55db-4970-811b-a36c747777e4-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 9 18:25:08.030757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97f9825ed579632f1bdd936b3b9dbf2af745b46a0744c54c89be5acd20dae2e2-rootfs.mount: Deactivated successfully. Feb 9 18:25:08.030892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4df2d07b7cdd8bd55aec9a8a21700e19e8b8ce359a17b5eada74611fce3e1d70-rootfs.mount: Deactivated successfully. 
Feb 9 18:25:08.030965 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4df2d07b7cdd8bd55aec9a8a21700e19e8b8ce359a17b5eada74611fce3e1d70-shm.mount: Deactivated successfully. Feb 9 18:25:08.031024 systemd[1]: var-lib-kubelet-pods-e671890d\x2d424b\x2d45f7\x2db3f7\x2d99e54cbfc07e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzckrb.mount: Deactivated successfully. Feb 9 18:25:08.031086 systemd[1]: var-lib-kubelet-pods-0c439772\x2d55db\x2d4970\x2d811b\x2da36c747777e4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2qhqv.mount: Deactivated successfully. Feb 9 18:25:08.031139 systemd[1]: var-lib-kubelet-pods-0c439772\x2d55db\x2d4970\x2d811b\x2da36c747777e4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 18:25:08.031190 systemd[1]: var-lib-kubelet-pods-0c439772\x2d55db\x2d4970\x2d811b\x2da36c747777e4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 18:25:08.266922 kubelet[1976]: E0209 18:25:08.266890 1976 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 18:25:08.985011 sshd[3566]: pam_unix(sshd:session): session closed for user core Feb 9 18:25:08.988789 systemd[1]: Started sshd@21-10.0.0.26:22-10.0.0.1:42900.service. Feb 9 18:25:08.989288 systemd[1]: sshd@20-10.0.0.26:22-10.0.0.1:42896.service: Deactivated successfully. Feb 9 18:25:08.990062 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 18:25:08.990243 systemd[1]: session-21.scope: Consumed 1.412s CPU time. Feb 9 18:25:08.990708 systemd-logind[1131]: Session 21 logged out. Waiting for processes to exit. Feb 9 18:25:08.991632 systemd-logind[1131]: Removed session 21. 
Feb 9 18:25:09.033344 sshd[3731]: Accepted publickey for core from 10.0.0.1 port 42900 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:25:09.034742 sshd[3731]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:25:09.038326 systemd-logind[1131]: New session 22 of user core. Feb 9 18:25:09.038918 systemd[1]: Started session-22.scope. Feb 9 18:25:09.202885 kubelet[1976]: I0209 18:25:09.202857 1976 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0c439772-55db-4970-811b-a36c747777e4" path="/var/lib/kubelet/pods/0c439772-55db-4970-811b-a36c747777e4/volumes" Feb 9 18:25:09.203584 kubelet[1976]: I0209 18:25:09.203566 1976 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e671890d-424b-45f7-b3f7-99e54cbfc07e" path="/var/lib/kubelet/pods/e671890d-424b-45f7-b3f7-99e54cbfc07e/volumes" Feb 9 18:25:10.228544 sshd[3731]: pam_unix(sshd:session): session closed for user core Feb 9 18:25:10.231410 systemd[1]: sshd@21-10.0.0.26:22-10.0.0.1:42900.service: Deactivated successfully. Feb 9 18:25:10.232040 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 18:25:10.232215 systemd[1]: session-22.scope: Consumed 1.099s CPU time. Feb 9 18:25:10.232749 systemd-logind[1131]: Session 22 logged out. Waiting for processes to exit. Feb 9 18:25:10.233963 systemd[1]: Started sshd@22-10.0.0.26:22-10.0.0.1:42908.service. Feb 9 18:25:10.237138 systemd-logind[1131]: Removed session 22. 
Feb 9 18:25:10.239975 kubelet[1976]: I0209 18:25:10.239935 1976 topology_manager.go:215] "Topology Admit Handler" podUID="b5ab2218-4099-484e-a3d6-ccbd515590b3" podNamespace="kube-system" podName="cilium-cff7w" Feb 9 18:25:10.240224 kubelet[1976]: E0209 18:25:10.239987 1976 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0c439772-55db-4970-811b-a36c747777e4" containerName="mount-cgroup" Feb 9 18:25:10.240224 kubelet[1976]: E0209 18:25:10.239996 1976 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0c439772-55db-4970-811b-a36c747777e4" containerName="clean-cilium-state" Feb 9 18:25:10.240224 kubelet[1976]: E0209 18:25:10.240004 1976 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0c439772-55db-4970-811b-a36c747777e4" containerName="cilium-agent" Feb 9 18:25:10.240224 kubelet[1976]: E0209 18:25:10.240012 1976 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0c439772-55db-4970-811b-a36c747777e4" containerName="apply-sysctl-overwrites" Feb 9 18:25:10.240224 kubelet[1976]: E0209 18:25:10.240018 1976 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e671890d-424b-45f7-b3f7-99e54cbfc07e" containerName="cilium-operator" Feb 9 18:25:10.240224 kubelet[1976]: E0209 18:25:10.240024 1976 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0c439772-55db-4970-811b-a36c747777e4" containerName="mount-bpf-fs" Feb 9 18:25:10.240224 kubelet[1976]: I0209 18:25:10.240044 1976 memory_manager.go:346] "RemoveStaleState removing state" podUID="e671890d-424b-45f7-b3f7-99e54cbfc07e" containerName="cilium-operator" Feb 9 18:25:10.240224 kubelet[1976]: I0209 18:25:10.240050 1976 memory_manager.go:346] "RemoveStaleState removing state" podUID="0c439772-55db-4970-811b-a36c747777e4" containerName="cilium-agent" Feb 9 18:25:10.245352 systemd[1]: Created slice kubepods-burstable-podb5ab2218_4099_484e_a3d6_ccbd515590b3.slice. 
Feb 9 18:25:10.281286 sshd[3745]: Accepted publickey for core from 10.0.0.1 port 42908 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:25:10.282589 sshd[3745]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:25:10.286437 systemd-logind[1131]: New session 23 of user core. Feb 9 18:25:10.287343 systemd[1]: Started session-23.scope. Feb 9 18:25:10.349466 kubelet[1976]: I0209 18:25:10.349432 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-etc-cni-netd\") pod \"cilium-cff7w\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " pod="kube-system/cilium-cff7w" Feb 9 18:25:10.349665 kubelet[1976]: I0209 18:25:10.349641 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5ab2218-4099-484e-a3d6-ccbd515590b3-clustermesh-secrets\") pod \"cilium-cff7w\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " pod="kube-system/cilium-cff7w" Feb 9 18:25:10.349756 kubelet[1976]: I0209 18:25:10.349745 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-bpf-maps\") pod \"cilium-cff7w\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " pod="kube-system/cilium-cff7w" Feb 9 18:25:10.349829 kubelet[1976]: I0209 18:25:10.349819 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-xtables-lock\") pod \"cilium-cff7w\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " pod="kube-system/cilium-cff7w" Feb 9 18:25:10.349924 kubelet[1976]: I0209 18:25:10.349914 1976 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbx66\" (UniqueName: \"kubernetes.io/projected/b5ab2218-4099-484e-a3d6-ccbd515590b3-kube-api-access-wbx66\") pod \"cilium-cff7w\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " pod="kube-system/cilium-cff7w" Feb 9 18:25:10.350045 kubelet[1976]: I0209 18:25:10.350008 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-cni-path\") pod \"cilium-cff7w\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " pod="kube-system/cilium-cff7w" Feb 9 18:25:10.350086 kubelet[1976]: I0209 18:25:10.350051 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-cilium-run\") pod \"cilium-cff7w\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " pod="kube-system/cilium-cff7w" Feb 9 18:25:10.350086 kubelet[1976]: I0209 18:25:10.350079 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-hostproc\") pod \"cilium-cff7w\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " pod="kube-system/cilium-cff7w" Feb 9 18:25:10.350147 kubelet[1976]: I0209 18:25:10.350097 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-lib-modules\") pod \"cilium-cff7w\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " pod="kube-system/cilium-cff7w" Feb 9 18:25:10.350147 kubelet[1976]: I0209 18:25:10.350120 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/b5ab2218-4099-484e-a3d6-ccbd515590b3-cilium-config-path\") pod \"cilium-cff7w\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " pod="kube-system/cilium-cff7w" Feb 9 18:25:10.350147 kubelet[1976]: I0209 18:25:10.350139 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-host-proc-sys-net\") pod \"cilium-cff7w\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " pod="kube-system/cilium-cff7w" Feb 9 18:25:10.350212 kubelet[1976]: I0209 18:25:10.350160 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b5ab2218-4099-484e-a3d6-ccbd515590b3-cilium-ipsec-secrets\") pod \"cilium-cff7w\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " pod="kube-system/cilium-cff7w" Feb 9 18:25:10.350212 kubelet[1976]: I0209 18:25:10.350178 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5ab2218-4099-484e-a3d6-ccbd515590b3-hubble-tls\") pod \"cilium-cff7w\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " pod="kube-system/cilium-cff7w" Feb 9 18:25:10.350212 kubelet[1976]: I0209 18:25:10.350197 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-cilium-cgroup\") pod \"cilium-cff7w\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " pod="kube-system/cilium-cff7w" Feb 9 18:25:10.350276 kubelet[1976]: I0209 18:25:10.350217 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-host-proc-sys-kernel\") pod \"cilium-cff7w\" (UID: 
\"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " pod="kube-system/cilium-cff7w" Feb 9 18:25:10.418878 sshd[3745]: pam_unix(sshd:session): session closed for user core Feb 9 18:25:10.422814 systemd[1]: Started sshd@23-10.0.0.26:22-10.0.0.1:42924.service. Feb 9 18:25:10.423374 systemd[1]: sshd@22-10.0.0.26:22-10.0.0.1:42908.service: Deactivated successfully. Feb 9 18:25:10.424180 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 18:25:10.424939 systemd-logind[1131]: Session 23 logged out. Waiting for processes to exit. Feb 9 18:25:10.428107 kubelet[1976]: E0209 18:25:10.428083 1976 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-wbx66 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-cff7w" podUID="b5ab2218-4099-484e-a3d6-ccbd515590b3" Feb 9 18:25:10.428520 systemd-logind[1131]: Removed session 23. Feb 9 18:25:10.474504 sshd[3757]: Accepted publickey for core from 10.0.0.1 port 42924 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:25:10.475217 sshd[3757]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:25:10.480816 systemd[1]: Started session-24.scope. Feb 9 18:25:10.481319 systemd-logind[1131]: New session 24 of user core. 
Feb 9 18:25:11.556589 kubelet[1976]: I0209 18:25:11.556527 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-etc-cni-netd\") pod \"b5ab2218-4099-484e-a3d6-ccbd515590b3\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " Feb 9 18:25:11.556589 kubelet[1976]: I0209 18:25:11.556579 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-bpf-maps\") pod \"b5ab2218-4099-484e-a3d6-ccbd515590b3\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " Feb 9 18:25:11.556589 kubelet[1976]: I0209 18:25:11.556597 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-hostproc\") pod \"b5ab2218-4099-484e-a3d6-ccbd515590b3\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " Feb 9 18:25:11.557006 kubelet[1976]: I0209 18:25:11.556614 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-lib-modules\") pod \"b5ab2218-4099-484e-a3d6-ccbd515590b3\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " Feb 9 18:25:11.557006 kubelet[1976]: I0209 18:25:11.556635 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-host-proc-sys-kernel\") pod \"b5ab2218-4099-484e-a3d6-ccbd515590b3\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " Feb 9 18:25:11.557006 kubelet[1976]: I0209 18:25:11.556659 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5ab2218-4099-484e-a3d6-ccbd515590b3-cilium-config-path\") pod 
\"b5ab2218-4099-484e-a3d6-ccbd515590b3\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " Feb 9 18:25:11.557006 kubelet[1976]: I0209 18:25:11.556676 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-host-proc-sys-net\") pod \"b5ab2218-4099-484e-a3d6-ccbd515590b3\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " Feb 9 18:25:11.557006 kubelet[1976]: I0209 18:25:11.556696 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5ab2218-4099-484e-a3d6-ccbd515590b3-clustermesh-secrets\") pod \"b5ab2218-4099-484e-a3d6-ccbd515590b3\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " Feb 9 18:25:11.557006 kubelet[1976]: I0209 18:25:11.556712 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-cilium-run\") pod \"b5ab2218-4099-484e-a3d6-ccbd515590b3\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " Feb 9 18:25:11.557142 kubelet[1976]: I0209 18:25:11.556729 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-cni-path\") pod \"b5ab2218-4099-484e-a3d6-ccbd515590b3\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " Feb 9 18:25:11.557142 kubelet[1976]: I0209 18:25:11.556748 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b5ab2218-4099-484e-a3d6-ccbd515590b3-cilium-ipsec-secrets\") pod \"b5ab2218-4099-484e-a3d6-ccbd515590b3\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " Feb 9 18:25:11.557142 kubelet[1976]: I0209 18:25:11.556765 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-xtables-lock\") pod \"b5ab2218-4099-484e-a3d6-ccbd515590b3\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " Feb 9 18:25:11.557142 kubelet[1976]: I0209 18:25:11.556792 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbx66\" (UniqueName: \"kubernetes.io/projected/b5ab2218-4099-484e-a3d6-ccbd515590b3-kube-api-access-wbx66\") pod \"b5ab2218-4099-484e-a3d6-ccbd515590b3\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " Feb 9 18:25:11.557142 kubelet[1976]: I0209 18:25:11.556817 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5ab2218-4099-484e-a3d6-ccbd515590b3-hubble-tls\") pod \"b5ab2218-4099-484e-a3d6-ccbd515590b3\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " Feb 9 18:25:11.557142 kubelet[1976]: I0209 18:25:11.556833 1976 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-cilium-cgroup\") pod \"b5ab2218-4099-484e-a3d6-ccbd515590b3\" (UID: \"b5ab2218-4099-484e-a3d6-ccbd515590b3\") " Feb 9 18:25:11.557273 kubelet[1976]: I0209 18:25:11.556918 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b5ab2218-4099-484e-a3d6-ccbd515590b3" (UID: "b5ab2218-4099-484e-a3d6-ccbd515590b3"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:25:11.557273 kubelet[1976]: I0209 18:25:11.556944 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b5ab2218-4099-484e-a3d6-ccbd515590b3" (UID: "b5ab2218-4099-484e-a3d6-ccbd515590b3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:25:11.557273 kubelet[1976]: I0209 18:25:11.556958 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b5ab2218-4099-484e-a3d6-ccbd515590b3" (UID: "b5ab2218-4099-484e-a3d6-ccbd515590b3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:25:11.557273 kubelet[1976]: I0209 18:25:11.556973 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-hostproc" (OuterVolumeSpecName: "hostproc") pod "b5ab2218-4099-484e-a3d6-ccbd515590b3" (UID: "b5ab2218-4099-484e-a3d6-ccbd515590b3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:25:11.557273 kubelet[1976]: I0209 18:25:11.556987 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b5ab2218-4099-484e-a3d6-ccbd515590b3" (UID: "b5ab2218-4099-484e-a3d6-ccbd515590b3"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:25:11.557381 kubelet[1976]: I0209 18:25:11.557002 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b5ab2218-4099-484e-a3d6-ccbd515590b3" (UID: "b5ab2218-4099-484e-a3d6-ccbd515590b3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:25:11.558117 kubelet[1976]: I0209 18:25:11.557480 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b5ab2218-4099-484e-a3d6-ccbd515590b3" (UID: "b5ab2218-4099-484e-a3d6-ccbd515590b3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:25:11.558117 kubelet[1976]: I0209 18:25:11.557902 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-cni-path" (OuterVolumeSpecName: "cni-path") pod "b5ab2218-4099-484e-a3d6-ccbd515590b3" (UID: "b5ab2218-4099-484e-a3d6-ccbd515590b3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:25:11.558117 kubelet[1976]: I0209 18:25:11.557932 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b5ab2218-4099-484e-a3d6-ccbd515590b3" (UID: "b5ab2218-4099-484e-a3d6-ccbd515590b3"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:25:11.558117 kubelet[1976]: I0209 18:25:11.557950 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b5ab2218-4099-484e-a3d6-ccbd515590b3" (UID: "b5ab2218-4099-484e-a3d6-ccbd515590b3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:25:11.558662 kubelet[1976]: I0209 18:25:11.558630 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5ab2218-4099-484e-a3d6-ccbd515590b3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b5ab2218-4099-484e-a3d6-ccbd515590b3" (UID: "b5ab2218-4099-484e-a3d6-ccbd515590b3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:25:11.560471 kubelet[1976]: I0209 18:25:11.560442 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5ab2218-4099-484e-a3d6-ccbd515590b3-kube-api-access-wbx66" (OuterVolumeSpecName: "kube-api-access-wbx66") pod "b5ab2218-4099-484e-a3d6-ccbd515590b3" (UID: "b5ab2218-4099-484e-a3d6-ccbd515590b3"). InnerVolumeSpecName "kube-api-access-wbx66". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:25:11.561148 systemd[1]: var-lib-kubelet-pods-b5ab2218\x2d4099\x2d484e\x2da3d6\x2dccbd515590b3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwbx66.mount: Deactivated successfully. Feb 9 18:25:11.561245 systemd[1]: var-lib-kubelet-pods-b5ab2218\x2d4099\x2d484e\x2da3d6\x2dccbd515590b3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 9 18:25:11.563052 kubelet[1976]: I0209 18:25:11.563029 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5ab2218-4099-484e-a3d6-ccbd515590b3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b5ab2218-4099-484e-a3d6-ccbd515590b3" (UID: "b5ab2218-4099-484e-a3d6-ccbd515590b3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:25:11.563158 systemd[1]: var-lib-kubelet-pods-b5ab2218\x2d4099\x2d484e\x2da3d6\x2dccbd515590b3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 18:25:11.564249 kubelet[1976]: I0209 18:25:11.563953 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5ab2218-4099-484e-a3d6-ccbd515590b3-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b5ab2218-4099-484e-a3d6-ccbd515590b3" (UID: "b5ab2218-4099-484e-a3d6-ccbd515590b3"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:25:11.564465 kubelet[1976]: I0209 18:25:11.564418 1976 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5ab2218-4099-484e-a3d6-ccbd515590b3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b5ab2218-4099-484e-a3d6-ccbd515590b3" (UID: "b5ab2218-4099-484e-a3d6-ccbd515590b3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:25:11.564696 systemd[1]: var-lib-kubelet-pods-b5ab2218\x2d4099\x2d484e\x2da3d6\x2dccbd515590b3-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Feb 9 18:25:11.657084 kubelet[1976]: I0209 18:25:11.657046 1976 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 9 18:25:11.657247 kubelet[1976]: I0209 18:25:11.657235 1976 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wbx66\" (UniqueName: \"kubernetes.io/projected/b5ab2218-4099-484e-a3d6-ccbd515590b3-kube-api-access-wbx66\") on node \"localhost\" DevicePath \"\"" Feb 9 18:25:11.657333 kubelet[1976]: I0209 18:25:11.657322 1976 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b5ab2218-4099-484e-a3d6-ccbd515590b3-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 18:25:11.657391 kubelet[1976]: I0209 18:25:11.657383 1976 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 9 18:25:11.657446 kubelet[1976]: I0209 18:25:11.657438 1976 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5ab2218-4099-484e-a3d6-ccbd515590b3-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 9 18:25:11.657501 kubelet[1976]: I0209 18:25:11.657493 1976 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 9 18:25:11.657560 kubelet[1976]: I0209 18:25:11.657552 1976 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 9 18:25:11.657628 kubelet[1976]: I0209 18:25:11.657618 1976 reconciler_common.go:300] "Volume detached for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 9 18:25:11.657688 kubelet[1976]: I0209 18:25:11.657678 1976 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 9 18:25:11.657744 kubelet[1976]: I0209 18:25:11.657734 1976 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 9 18:25:11.657803 kubelet[1976]: I0209 18:25:11.657794 1976 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5ab2218-4099-484e-a3d6-ccbd515590b3-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 9 18:25:11.657883 kubelet[1976]: I0209 18:25:11.657873 1976 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 9 18:25:11.657952 kubelet[1976]: I0209 18:25:11.657943 1976 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 9 18:25:11.658013 kubelet[1976]: I0209 18:25:11.658005 1976 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5ab2218-4099-484e-a3d6-ccbd515590b3-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 9 18:25:11.658068 kubelet[1976]: I0209 18:25:11.658059 1976 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5ab2218-4099-484e-a3d6-ccbd515590b3-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 9 18:25:12.201132 kubelet[1976]: E0209 18:25:12.201104 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:25:12.390529 systemd[1]: Removed slice kubepods-burstable-podb5ab2218_4099_484e_a3d6_ccbd515590b3.slice.
Feb 9 18:25:12.418952 kubelet[1976]: I0209 18:25:12.418919 1976 topology_manager.go:215] "Topology Admit Handler" podUID="83c71dc2-86f7-48fb-b8ff-10e55f7b0627" podNamespace="kube-system" podName="cilium-d9x9p"
Feb 9 18:25:12.423771 systemd[1]: Created slice kubepods-burstable-pod83c71dc2_86f7_48fb_b8ff_10e55f7b0627.slice.
Feb 9 18:25:12.561798 kubelet[1976]: I0209 18:25:12.561756 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83c71dc2-86f7-48fb-b8ff-10e55f7b0627-cni-path\") pod \"cilium-d9x9p\" (UID: \"83c71dc2-86f7-48fb-b8ff-10e55f7b0627\") " pod="kube-system/cilium-d9x9p"
Feb 9 18:25:12.561798 kubelet[1976]: I0209 18:25:12.561804 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83c71dc2-86f7-48fb-b8ff-10e55f7b0627-clustermesh-secrets\") pod \"cilium-d9x9p\" (UID: \"83c71dc2-86f7-48fb-b8ff-10e55f7b0627\") " pod="kube-system/cilium-d9x9p"
Feb 9 18:25:12.562161 kubelet[1976]: I0209 18:25:12.561827 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83c71dc2-86f7-48fb-b8ff-10e55f7b0627-host-proc-sys-kernel\") pod \"cilium-d9x9p\" (UID: \"83c71dc2-86f7-48fb-b8ff-10e55f7b0627\") " pod="kube-system/cilium-d9x9p"
Feb 9 18:25:12.562161 kubelet[1976]: I0209 18:25:12.561901 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83c71dc2-86f7-48fb-b8ff-10e55f7b0627-hostproc\") pod \"cilium-d9x9p\" (UID: \"83c71dc2-86f7-48fb-b8ff-10e55f7b0627\") " pod="kube-system/cilium-d9x9p"
Feb 9 18:25:12.562161 kubelet[1976]: I0209 18:25:12.561936 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83c71dc2-86f7-48fb-b8ff-10e55f7b0627-lib-modules\") pod \"cilium-d9x9p\" (UID: \"83c71dc2-86f7-48fb-b8ff-10e55f7b0627\") " pod="kube-system/cilium-d9x9p"
Feb 9 18:25:12.562161 kubelet[1976]: I0209 18:25:12.562019 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83c71dc2-86f7-48fb-b8ff-10e55f7b0627-etc-cni-netd\") pod \"cilium-d9x9p\" (UID: \"83c71dc2-86f7-48fb-b8ff-10e55f7b0627\") " pod="kube-system/cilium-d9x9p"
Feb 9 18:25:12.562161 kubelet[1976]: I0209 18:25:12.562052 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83c71dc2-86f7-48fb-b8ff-10e55f7b0627-xtables-lock\") pod \"cilium-d9x9p\" (UID: \"83c71dc2-86f7-48fb-b8ff-10e55f7b0627\") " pod="kube-system/cilium-d9x9p"
Feb 9 18:25:12.562161 kubelet[1976]: I0209 18:25:12.562073 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4djw\" (UniqueName: \"kubernetes.io/projected/83c71dc2-86f7-48fb-b8ff-10e55f7b0627-kube-api-access-s4djw\") pod \"cilium-d9x9p\" (UID: \"83c71dc2-86f7-48fb-b8ff-10e55f7b0627\") " pod="kube-system/cilium-d9x9p"
Feb 9 18:25:12.562320 kubelet[1976]: I0209 18:25:12.562093 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83c71dc2-86f7-48fb-b8ff-10e55f7b0627-hubble-tls\") pod \"cilium-d9x9p\" (UID: \"83c71dc2-86f7-48fb-b8ff-10e55f7b0627\") " pod="kube-system/cilium-d9x9p"
Feb 9 18:25:12.562320 kubelet[1976]: I0209 18:25:12.562114 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83c71dc2-86f7-48fb-b8ff-10e55f7b0627-cilium-config-path\") pod \"cilium-d9x9p\" (UID: \"83c71dc2-86f7-48fb-b8ff-10e55f7b0627\") " pod="kube-system/cilium-d9x9p"
Feb 9 18:25:12.562320 kubelet[1976]: I0209 18:25:12.562148 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83c71dc2-86f7-48fb-b8ff-10e55f7b0627-cilium-run\") pod \"cilium-d9x9p\" (UID: \"83c71dc2-86f7-48fb-b8ff-10e55f7b0627\") " pod="kube-system/cilium-d9x9p"
Feb 9 18:25:12.562320 kubelet[1976]: I0209 18:25:12.562175 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83c71dc2-86f7-48fb-b8ff-10e55f7b0627-bpf-maps\") pod \"cilium-d9x9p\" (UID: \"83c71dc2-86f7-48fb-b8ff-10e55f7b0627\") " pod="kube-system/cilium-d9x9p"
Feb 9 18:25:12.562320 kubelet[1976]: I0209 18:25:12.562195 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/83c71dc2-86f7-48fb-b8ff-10e55f7b0627-cilium-ipsec-secrets\") pod \"cilium-d9x9p\" (UID: \"83c71dc2-86f7-48fb-b8ff-10e55f7b0627\") " pod="kube-system/cilium-d9x9p"
Feb 9 18:25:12.562320 kubelet[1976]: I0209 18:25:12.562215 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83c71dc2-86f7-48fb-b8ff-10e55f7b0627-host-proc-sys-net\") pod \"cilium-d9x9p\" (UID: \"83c71dc2-86f7-48fb-b8ff-10e55f7b0627\") " pod="kube-system/cilium-d9x9p"
Feb 9 18:25:12.562451 kubelet[1976]: I0209 18:25:12.562235 1976 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83c71dc2-86f7-48fb-b8ff-10e55f7b0627-cilium-cgroup\") pod \"cilium-d9x9p\" (UID: \"83c71dc2-86f7-48fb-b8ff-10e55f7b0627\") " pod="kube-system/cilium-d9x9p"
Feb 9 18:25:12.726459 kubelet[1976]: E0209 18:25:12.726425 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:25:12.728358 env[1143]: time="2024-02-09T18:25:12.728138130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d9x9p,Uid:83c71dc2-86f7-48fb-b8ff-10e55f7b0627,Namespace:kube-system,Attempt:0,}"
Feb 9 18:25:12.743200 env[1143]: time="2024-02-09T18:25:12.743128036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:25:12.743200 env[1143]: time="2024-02-09T18:25:12.743174513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:25:12.743384 env[1143]: time="2024-02-09T18:25:12.743184673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:25:12.743612 env[1143]: time="2024-02-09T18:25:12.743574129Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe23910d42dc871ddd451482dbed0194fdd59acac0447bf3d74195a95871898a pid=3788 runtime=io.containerd.runc.v2
Feb 9 18:25:12.754399 systemd[1]: Started cri-containerd-fe23910d42dc871ddd451482dbed0194fdd59acac0447bf3d74195a95871898a.scope.
Feb 9 18:25:12.792747 env[1143]: time="2024-02-09T18:25:12.792705558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d9x9p,Uid:83c71dc2-86f7-48fb-b8ff-10e55f7b0627,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe23910d42dc871ddd451482dbed0194fdd59acac0447bf3d74195a95871898a\""
Feb 9 18:25:12.793446 kubelet[1976]: E0209 18:25:12.793424 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:25:12.795628 env[1143]: time="2024-02-09T18:25:12.795590506Z" level=info msg="CreateContainer within sandbox \"fe23910d42dc871ddd451482dbed0194fdd59acac0447bf3d74195a95871898a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 18:25:12.809803 env[1143]: time="2024-02-09T18:25:12.809765020Z" level=info msg="CreateContainer within sandbox \"fe23910d42dc871ddd451482dbed0194fdd59acac0447bf3d74195a95871898a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"14994feb443687a4e886055a4cb980cb1dfcab0187673cf6a0513ca6df0b097d\""
Feb 9 18:25:12.811192 env[1143]: time="2024-02-09T18:25:12.810335506Z" level=info msg="StartContainer for \"14994feb443687a4e886055a4cb980cb1dfcab0187673cf6a0513ca6df0b097d\""
Feb 9 18:25:12.824209 systemd[1]: Started cri-containerd-14994feb443687a4e886055a4cb980cb1dfcab0187673cf6a0513ca6df0b097d.scope.
Feb 9 18:25:12.861408 env[1143]: time="2024-02-09T18:25:12.861365141Z" level=info msg="StartContainer for \"14994feb443687a4e886055a4cb980cb1dfcab0187673cf6a0513ca6df0b097d\" returns successfully"
Feb 9 18:25:12.867223 systemd[1]: cri-containerd-14994feb443687a4e886055a4cb980cb1dfcab0187673cf6a0513ca6df0b097d.scope: Deactivated successfully.
Feb 9 18:25:12.896231 env[1143]: time="2024-02-09T18:25:12.896171664Z" level=info msg="shim disconnected" id=14994feb443687a4e886055a4cb980cb1dfcab0187673cf6a0513ca6df0b097d
Feb 9 18:25:12.896231 env[1143]: time="2024-02-09T18:25:12.896229421Z" level=warning msg="cleaning up after shim disconnected" id=14994feb443687a4e886055a4cb980cb1dfcab0187673cf6a0513ca6df0b097d namespace=k8s.io
Feb 9 18:25:12.896231 env[1143]: time="2024-02-09T18:25:12.896239580Z" level=info msg="cleaning up dead shim"
Feb 9 18:25:12.903549 env[1143]: time="2024-02-09T18:25:12.903511186Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:25:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3874 runtime=io.containerd.runc.v2\n"
Feb 9 18:25:13.203662 kubelet[1976]: I0209 18:25:13.203557 1976 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b5ab2218-4099-484e-a3d6-ccbd515590b3" path="/var/lib/kubelet/pods/b5ab2218-4099-484e-a3d6-ccbd515590b3/volumes"
Feb 9 18:25:13.268480 kubelet[1976]: E0209 18:25:13.268435 1976 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 18:25:13.388988 kubelet[1976]: E0209 18:25:13.388961 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:25:13.395162 env[1143]: time="2024-02-09T18:25:13.395124038Z" level=info msg="CreateContainer within sandbox \"fe23910d42dc871ddd451482dbed0194fdd59acac0447bf3d74195a95871898a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 18:25:13.406000 env[1143]: time="2024-02-09T18:25:13.405953003Z" level=info msg="CreateContainer within sandbox \"fe23910d42dc871ddd451482dbed0194fdd59acac0447bf3d74195a95871898a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2205b809a9788dc3bbbd40b6217517a4df8b0c5154d4e2466d53f0f6f384b303\""
Feb 9 18:25:13.406554 env[1143]: time="2024-02-09T18:25:13.406516132Z" level=info msg="StartContainer for \"2205b809a9788dc3bbbd40b6217517a4df8b0c5154d4e2466d53f0f6f384b303\""
Feb 9 18:25:13.419791 systemd[1]: Started cri-containerd-2205b809a9788dc3bbbd40b6217517a4df8b0c5154d4e2466d53f0f6f384b303.scope.
Feb 9 18:25:13.449989 env[1143]: time="2024-02-09T18:25:13.449947186Z" level=info msg="StartContainer for \"2205b809a9788dc3bbbd40b6217517a4df8b0c5154d4e2466d53f0f6f384b303\" returns successfully"
Feb 9 18:25:13.455655 systemd[1]: cri-containerd-2205b809a9788dc3bbbd40b6217517a4df8b0c5154d4e2466d53f0f6f384b303.scope: Deactivated successfully.
Feb 9 18:25:13.476635 env[1143]: time="2024-02-09T18:25:13.476592723Z" level=info msg="shim disconnected" id=2205b809a9788dc3bbbd40b6217517a4df8b0c5154d4e2466d53f0f6f384b303
Feb 9 18:25:13.476826 env[1143]: time="2024-02-09T18:25:13.476807791Z" level=warning msg="cleaning up after shim disconnected" id=2205b809a9788dc3bbbd40b6217517a4df8b0c5154d4e2466d53f0f6f384b303 namespace=k8s.io
Feb 9 18:25:13.476929 env[1143]: time="2024-02-09T18:25:13.476914425Z" level=info msg="cleaning up dead shim"
Feb 9 18:25:13.483538 env[1143]: time="2024-02-09T18:25:13.483507143Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:25:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3936 runtime=io.containerd.runc.v2\n"
Feb 9 18:25:14.392065 kubelet[1976]: E0209 18:25:14.392026 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:25:14.393817 env[1143]: time="2024-02-09T18:25:14.393779702Z" level=info msg="CreateContainer within sandbox \"fe23910d42dc871ddd451482dbed0194fdd59acac0447bf3d74195a95871898a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 18:25:14.405925 env[1143]: time="2024-02-09T18:25:14.405879133Z" level=info msg="CreateContainer within sandbox \"fe23910d42dc871ddd451482dbed0194fdd59acac0447bf3d74195a95871898a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"84de95e1eebf1e100c286974774eab83ddd21ef490ffdf62b1bcca5f4bb7d4d1\""
Feb 9 18:25:14.407032 env[1143]: time="2024-02-09T18:25:14.407005397Z" level=info msg="StartContainer for \"84de95e1eebf1e100c286974774eab83ddd21ef490ffdf62b1bcca5f4bb7d4d1\""
Feb 9 18:25:14.422010 systemd[1]: Started cri-containerd-84de95e1eebf1e100c286974774eab83ddd21ef490ffdf62b1bcca5f4bb7d4d1.scope.
Feb 9 18:25:14.457478 systemd[1]: cri-containerd-84de95e1eebf1e100c286974774eab83ddd21ef490ffdf62b1bcca5f4bb7d4d1.scope: Deactivated successfully.
Feb 9 18:25:14.458490 env[1143]: time="2024-02-09T18:25:14.457831078Z" level=info msg="StartContainer for \"84de95e1eebf1e100c286974774eab83ddd21ef490ffdf62b1bcca5f4bb7d4d1\" returns successfully"
Feb 9 18:25:14.477688 env[1143]: time="2024-02-09T18:25:14.477646081Z" level=info msg="shim disconnected" id=84de95e1eebf1e100c286974774eab83ddd21ef490ffdf62b1bcca5f4bb7d4d1
Feb 9 18:25:14.477932 env[1143]: time="2024-02-09T18:25:14.477913067Z" level=warning msg="cleaning up after shim disconnected" id=84de95e1eebf1e100c286974774eab83ddd21ef490ffdf62b1bcca5f4bb7d4d1 namespace=k8s.io
Feb 9 18:25:14.478000 env[1143]: time="2024-02-09T18:25:14.477984424Z" level=info msg="cleaning up dead shim"
Feb 9 18:25:14.484274 env[1143]: time="2024-02-09T18:25:14.484237869Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:25:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3992 runtime=io.containerd.runc.v2\n"
Feb 9 18:25:14.511866 kubelet[1976]: I0209 18:25:14.507788 1976 setters.go:552] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-09T18:25:14Z","lastTransitionTime":"2024-02-09T18:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 9 18:25:14.667454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84de95e1eebf1e100c286974774eab83ddd21ef490ffdf62b1bcca5f4bb7d4d1-rootfs.mount: Deactivated successfully.
Feb 9 18:25:15.396160 kubelet[1976]: E0209 18:25:15.396134 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:25:15.398227 env[1143]: time="2024-02-09T18:25:15.398171909Z" level=info msg="CreateContainer within sandbox \"fe23910d42dc871ddd451482dbed0194fdd59acac0447bf3d74195a95871898a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 18:25:15.412500 env[1143]: time="2024-02-09T18:25:15.412456893Z" level=info msg="CreateContainer within sandbox \"fe23910d42dc871ddd451482dbed0194fdd59acac0447bf3d74195a95871898a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3f06d81067e6c06d7104c33ad368ee87fb10248c72a1e9645850b21faf129ba0\""
Feb 9 18:25:15.413006 env[1143]: time="2024-02-09T18:25:15.412969389Z" level=info msg="StartContainer for \"3f06d81067e6c06d7104c33ad368ee87fb10248c72a1e9645850b21faf129ba0\""
Feb 9 18:25:15.431141 systemd[1]: Started cri-containerd-3f06d81067e6c06d7104c33ad368ee87fb10248c72a1e9645850b21faf129ba0.scope.
Feb 9 18:25:15.457151 systemd[1]: cri-containerd-3f06d81067e6c06d7104c33ad368ee87fb10248c72a1e9645850b21faf129ba0.scope: Deactivated successfully.
Feb 9 18:25:15.458013 env[1143]: time="2024-02-09T18:25:15.457935486Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83c71dc2_86f7_48fb_b8ff_10e55f7b0627.slice/cri-containerd-3f06d81067e6c06d7104c33ad368ee87fb10248c72a1e9645850b21faf129ba0.scope/memory.events\": no such file or directory"
Feb 9 18:25:15.459632 env[1143]: time="2024-02-09T18:25:15.459581170Z" level=info msg="StartContainer for \"3f06d81067e6c06d7104c33ad368ee87fb10248c72a1e9645850b21faf129ba0\" returns successfully"
Feb 9 18:25:15.477771 env[1143]: time="2024-02-09T18:25:15.477684140Z" level=info msg="shim disconnected" id=3f06d81067e6c06d7104c33ad368ee87fb10248c72a1e9645850b21faf129ba0
Feb 9 18:25:15.477771 env[1143]: time="2024-02-09T18:25:15.477730697Z" level=warning msg="cleaning up after shim disconnected" id=3f06d81067e6c06d7104c33ad368ee87fb10248c72a1e9645850b21faf129ba0 namespace=k8s.io
Feb 9 18:25:15.477771 env[1143]: time="2024-02-09T18:25:15.477740497Z" level=info msg="cleaning up dead shim"
Feb 9 18:25:15.483914 env[1143]: time="2024-02-09T18:25:15.483832177Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:25:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4047 runtime=io.containerd.runc.v2\n"
Feb 9 18:25:15.667512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f06d81067e6c06d7104c33ad368ee87fb10248c72a1e9645850b21faf129ba0-rootfs.mount: Deactivated successfully.
Feb 9 18:25:16.400788 kubelet[1976]: E0209 18:25:16.400619 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:25:16.402906 env[1143]: time="2024-02-09T18:25:16.402866771Z" level=info msg="CreateContainer within sandbox \"fe23910d42dc871ddd451482dbed0194fdd59acac0447bf3d74195a95871898a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 18:25:16.422599 env[1143]: time="2024-02-09T18:25:16.422545273Z" level=info msg="CreateContainer within sandbox \"fe23910d42dc871ddd451482dbed0194fdd59acac0447bf3d74195a95871898a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"74a2dbba40e0054694f732e8d2130c0e34548bdaee2b119ccd5f423d45f0bc55\""
Feb 9 18:25:16.423754 env[1143]: time="2024-02-09T18:25:16.423724064Z" level=info msg="StartContainer for \"74a2dbba40e0054694f732e8d2130c0e34548bdaee2b119ccd5f423d45f0bc55\""
Feb 9 18:25:16.443153 systemd[1]: Started cri-containerd-74a2dbba40e0054694f732e8d2130c0e34548bdaee2b119ccd5f423d45f0bc55.scope.
Feb 9 18:25:16.476632 env[1143]: time="2024-02-09T18:25:16.476580026Z" level=info msg="StartContainer for \"74a2dbba40e0054694f732e8d2130c0e34548bdaee2b119ccd5f423d45f0bc55\" returns successfully"
Feb 9 18:25:16.741858 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Feb 9 18:25:17.405450 kubelet[1976]: E0209 18:25:17.405419 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:25:17.417931 kubelet[1976]: I0209 18:25:17.417905 1976 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-d9x9p" podStartSLOduration=5.417873742 podCreationTimestamp="2024-02-09 18:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:25:17.417252965 +0000 UTC m=+84.319193282" watchObservedRunningTime="2024-02-09 18:25:17.417873742 +0000 UTC m=+84.319814059"
Feb 9 18:25:18.200908 kubelet[1976]: E0209 18:25:18.200876 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:25:18.730639 kubelet[1976]: E0209 18:25:18.730610 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:25:18.760228 systemd[1]: run-containerd-runc-k8s.io-74a2dbba40e0054694f732e8d2130c0e34548bdaee2b119ccd5f423d45f0bc55-runc.dSQAMc.mount: Deactivated successfully.
Feb 9 18:25:19.200696 kubelet[1976]: E0209 18:25:19.200580 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:25:19.528818 systemd-networkd[1039]: lxc_health: Link UP
Feb 9 18:25:19.537361 systemd-networkd[1039]: lxc_health: Gained carrier
Feb 9 18:25:19.537944 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 18:25:20.623953 systemd-networkd[1039]: lxc_health: Gained IPv6LL
Feb 9 18:25:20.729317 kubelet[1976]: E0209 18:25:20.729290 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:25:21.413777 kubelet[1976]: E0209 18:25:21.413602 1976 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:25:25.242356 sshd[3757]: pam_unix(sshd:session): session closed for user core
Feb 9 18:25:25.244621 systemd[1]: sshd@23-10.0.0.26:22-10.0.0.1:42924.service: Deactivated successfully.
Feb 9 18:25:25.245355 systemd[1]: session-24.scope: Deactivated successfully.
Feb 9 18:25:25.245913 systemd-logind[1131]: Session 24 logged out. Waiting for processes to exit.
Feb 9 18:25:25.246566 systemd-logind[1131]: Removed session 24.