Dec 13 14:08:45.720772 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 14:08:45.720792 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Dec 13 12:58:58 -00 2024
Dec 13 14:08:45.720800 kernel: efi: EFI v2.70 by EDK II
Dec 13 14:08:45.720806 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Dec 13 14:08:45.720811 kernel: random: crng init done
Dec 13 14:08:45.720816 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:08:45.720823 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Dec 13 14:08:45.720829 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 14:08:45.720835 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:08:45.720840 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:08:45.720845 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:08:45.720850 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:08:45.720856 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:08:45.720861 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:08:45.720869 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:08:45.720875 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:08:45.720881 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:08:45.720887 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 13 14:08:45.720897 kernel: NUMA: Failed to initialise from firmware
Dec 13 14:08:45.720904 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 14:08:45.720911 kernel: NUMA: NODE_DATA [mem 0xdcb0c900-0xdcb11fff]
Dec 13 14:08:45.720916 kernel: Zone ranges:
Dec 13 14:08:45.720922 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 14:08:45.720929 kernel: DMA32 empty
Dec 13 14:08:45.720934 kernel: Normal empty
Dec 13 14:08:45.720940 kernel: Movable zone start for each node
Dec 13 14:08:45.720945 kernel: Early memory node ranges
Dec 13 14:08:45.720951 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Dec 13 14:08:45.720957 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Dec 13 14:08:45.720962 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Dec 13 14:08:45.720968 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Dec 13 14:08:45.720974 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Dec 13 14:08:45.720979 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Dec 13 14:08:45.720985 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Dec 13 14:08:45.720990 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 14:08:45.720997 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 13 14:08:45.721002 kernel: psci: probing for conduit method from ACPI.
Dec 13 14:08:45.721008 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 14:08:45.721014 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 14:08:45.721020 kernel: psci: Trusted OS migration not required
Dec 13 14:08:45.721028 kernel: psci: SMC Calling Convention v1.1
Dec 13 14:08:45.721034 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 14:08:45.721041 kernel: ACPI: SRAT not present
Dec 13 14:08:45.721047 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Dec 13 14:08:45.721054 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Dec 13 14:08:45.721060 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 13 14:08:45.721066 kernel: Detected PIPT I-cache on CPU0
Dec 13 14:08:45.721072 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 14:08:45.721078 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 14:08:45.721084 kernel: CPU features: detected: Spectre-v4
Dec 13 14:08:45.721090 kernel: CPU features: detected: Spectre-BHB
Dec 13 14:08:45.721097 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 14:08:45.721103 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 14:08:45.721109 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 14:08:45.721115 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 14:08:45.721121 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Dec 13 14:08:45.721127 kernel: Policy zone: DMA
Dec 13 14:08:45.721134 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:08:45.721141 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:08:45.721147 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:08:45.721153 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:08:45.721159 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:08:45.721167 kernel: Memory: 2457408K/2572288K available (9792K kernel code, 2092K rwdata, 7576K rodata, 36416K init, 777K bss, 114880K reserved, 0K cma-reserved)
Dec 13 14:08:45.721173 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 14:08:45.721179 kernel: trace event string verifier disabled
Dec 13 14:08:45.721185 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 14:08:45.721191 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:08:45.721198 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 14:08:45.721204 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 14:08:45.721210 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:08:45.721216 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
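The command line above carries Flatcar's dm-verity /usr parameters (mount.usr, verity.usrhash) alongside the usual root= and console= settings, and the preceding lines show KPTI forced on by KASLR plus Spectre-v4/SSBS detection. A minimal sketch of how to confirm both from a shell on the running guest, using only standard procfs/sysfs paths:

    cat /proc/cmdline                                  # should echo the BOOT_IMAGE=... line logged above
    grep . /sys/devices/system/cpu/vulnerabilities/*   # per-vulnerability mitigation status (KPTI, SSBS, etc.)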
Dec 13 14:08:45.721222 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 14:08:45.721228 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 14:08:45.721235 kernel: GICv3: 256 SPIs implemented
Dec 13 14:08:45.721241 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 14:08:45.721247 kernel: GICv3: Distributor has no Range Selector support
Dec 13 14:08:45.721253 kernel: Root IRQ handler: gic_handle_irq
Dec 13 14:08:45.721259 kernel: GICv3: 16 PPIs implemented
Dec 13 14:08:45.721265 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 14:08:45.721271 kernel: ACPI: SRAT not present
Dec 13 14:08:45.721277 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 14:08:45.721283 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 14:08:45.721289 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 14:08:45.721295 kernel: GICv3: using LPI property table @0x00000000400d0000
Dec 13 14:08:45.721301 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Dec 13 14:08:45.721308 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:08:45.721314 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 14:08:45.721321 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 14:08:45.721327 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 14:08:45.721333 kernel: arm-pv: using stolen time PV
Dec 13 14:08:45.721339 kernel: Console: colour dummy device 80x25
Dec 13 14:08:45.721346 kernel: ACPI: Core revision 20210730
Dec 13 14:08:45.721352 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 14:08:45.721359 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:08:45.721365 kernel: LSM: Security Framework initializing
Dec 13 14:08:45.721372 kernel: SELinux: Initializing.
Dec 13 14:08:45.721378 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:08:45.721384 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:08:45.721390 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:08:45.721397 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 14:08:45.721403 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 14:08:45.721409 kernel: Remapping and enabling EFI services.
Dec 13 14:08:45.721415 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:08:45.721421 kernel: Detected PIPT I-cache on CPU1
Dec 13 14:08:45.721429 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 14:08:45.721435 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Dec 13 14:08:45.721441 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:08:45.721447 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 14:08:45.721454 kernel: Detected PIPT I-cache on CPU2
Dec 13 14:08:45.721460 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 13 14:08:45.721466 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Dec 13 14:08:45.721473 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:08:45.721479 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 13 14:08:45.721485 kernel: Detected PIPT I-cache on CPU3
Dec 13 14:08:45.721492 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 13 14:08:45.721498 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Dec 13 14:08:45.721505 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:08:45.721511 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 13 14:08:45.721522 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 14:08:45.721530 kernel: SMP: Total of 4 processors activated.
Dec 13 14:08:45.721536 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 14:08:45.721543 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 14:08:45.721549 kernel: CPU features: detected: Common not Private translations
Dec 13 14:08:45.721556 kernel: CPU features: detected: CRC32 instructions
Dec 13 14:08:45.721562 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 14:08:45.721569 kernel: CPU features: detected: LSE atomic instructions
Dec 13 14:08:45.721576 kernel: CPU features: detected: Privileged Access Never
Dec 13 14:08:45.721583 kernel: CPU features: detected: RAS Extension Support
Dec 13 14:08:45.721590 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 14:08:45.721596 kernel: CPU: All CPU(s) started at EL1
Dec 13 14:08:45.721603 kernel: alternatives: patching kernel code
Dec 13 14:08:45.721610 kernel: devtmpfs: initialized
Dec 13 14:08:45.721617 kernel: KASLR enabled
Dec 13 14:08:45.721623 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:08:45.721630 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 14:08:45.721640 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:08:45.721646 kernel: SMBIOS 3.0.0 present.
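SMP bring-up ends with all four vCPUs online and a list of detected ARMv8 features (CRC32 instructions, LSE atomics, SSBS). A quick cross-check from userspace on the booted guest; nproc is coreutils and the Features line comes straight from procfs:

    nproc                             # expect 4, matching "Brought up 1 node, 4 CPUs"
    grep -m1 Features /proc/cpuinfo   # flags such as crc32, atomics, ssbs mirror the log above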
Dec 13 14:08:45.721653 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Dec 13 14:08:45.721660 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:08:45.721666 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 14:08:45.721674 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 14:08:45.721681 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 14:08:45.721687 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:08:45.721694 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
Dec 13 14:08:45.721700 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:08:45.721707 kernel: cpuidle: using governor menu
Dec 13 14:08:45.721713 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 14:08:45.721720 kernel: ASID allocator initialised with 32768 entries
Dec 13 14:08:45.721727 kernel: ACPI: bus type PCI registered
Dec 13 14:08:45.721734 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:08:45.721741 kernel: Serial: AMBA PL011 UART driver
Dec 13 14:08:45.721755 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:08:45.721763 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 14:08:45.721770 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:08:45.721776 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 14:08:45.721783 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:08:45.721790 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 14:08:45.721796 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:08:45.721804 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:08:45.721811 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:08:45.721818 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:08:45.721824 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:08:45.721831 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:08:45.721837 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:08:45.721844 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:08:45.721851 kernel: ACPI: Interpreter enabled
Dec 13 14:08:45.721857 kernel: ACPI: Using GIC for interrupt routing
Dec 13 14:08:45.721865 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 14:08:45.721871 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 14:08:45.721878 kernel: printk: console [ttyAMA0] enabled
Dec 13 14:08:45.721884 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:08:45.722006 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:08:45.722067 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 14:08:45.722123 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 14:08:45.722180 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 14:08:45.722234 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 14:08:45.722242 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 14:08:45.722249 kernel: PCI host bridge to bus 0000:00
Dec 13 14:08:45.722309 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 14:08:45.722361 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 14:08:45.722412 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 14:08:45.722463 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:08:45.722533 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 14:08:45.722603 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 14:08:45.722661 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Dec 13 14:08:45.722720 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Dec 13 14:08:45.722814 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 14:08:45.722877 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 14:08:45.722951 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Dec 13 14:08:45.723012 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Dec 13 14:08:45.723066 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 14:08:45.723118 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 14:08:45.723169 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 14:08:45.723178 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 14:08:45.723185 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 14:08:45.723192 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 14:08:45.723204 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 14:08:45.723211 kernel: iommu: Default domain type: Translated
Dec 13 14:08:45.723218 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 14:08:45.723225 kernel: vgaarb: loaded
Dec 13 14:08:45.723232 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:08:45.723238 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 14:08:45.723245 kernel: PTP clock support registered
Dec 13 14:08:45.723252 kernel: Registered efivars operations
Dec 13 14:08:45.723259 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 14:08:45.723267 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:08:45.723274 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:08:45.723281 kernel: pnp: PnP ACPI init
Dec 13 14:08:45.723346 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 14:08:45.723355 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 14:08:45.723362 kernel: NET: Registered PF_INET protocol family
Dec 13 14:08:45.723369 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:08:45.723376 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 14:08:45.723384 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:08:45.723391 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:08:45.723398 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Dec 13 14:08:45.723405 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 14:08:45.723411 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:08:45.723421 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:08:45.723428 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:08:45.723435 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:08:45.723442 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Dec 13 14:08:45.723450 kernel: kvm [1]: HYP mode not available
Dec 13 14:08:45.723457 kernel: Initialise system trusted keyrings
Dec 13 14:08:45.723463 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 14:08:45.723470 kernel: Key type asymmetric registered
Dec 13 14:08:45.723476 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:08:45.723483 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:08:45.723490 kernel: io scheduler mq-deadline registered
Dec 13 14:08:45.723497 kernel: io scheduler kyber registered
Dec 13 14:08:45.723503 kernel: io scheduler bfq registered
Dec 13 14:08:45.723511 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 14:08:45.723518 kernel: ACPI: button: Power Button [PWRB]
Dec 13 14:08:45.723525 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 14:08:45.723584 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 13 14:08:45.723593 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:08:45.723600 kernel: thunder_xcv, ver 1.0
Dec 13 14:08:45.723606 kernel: thunder_bgx, ver 1.0
Dec 13 14:08:45.723613 kernel: nicpf, ver 1.0
Dec 13 14:08:45.723620 kernel: nicvf, ver 1.0
Dec 13 14:08:45.723690 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 14:08:45.723745 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T14:08:45 UTC (1734098925)
Dec 13 14:08:45.723769 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 14:08:45.723776 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:08:45.723782 kernel: Segment Routing with IPv6
Dec 13 14:08:45.723789 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:08:45.723796 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:08:45.723802 kernel: Key type dns_resolver registered
Dec 13 14:08:45.723811 kernel: registered taskstats version 1
Dec 13 14:08:45.723818 kernel: Loading compiled-in X.509 certificates
Dec 13 14:08:45.723825 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e011ba9949ade5a6d03f7a5e28171f7f59e70f8a'
Dec 13 14:08:45.723831 kernel: Key type .fscrypt registered
Dec 13 14:08:45.723838 kernel: Key type fscrypt-provisioning registered
Dec 13 14:08:45.723845 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:08:45.723852 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:08:45.723858 kernel: ima: No architecture policies found
Dec 13 14:08:45.723865 kernel: clk: Disabling unused clocks
Dec 13 14:08:45.723873 kernel: Freeing unused kernel memory: 36416K
Dec 13 14:08:45.723879 kernel: Run /init as init process
Dec 13 14:08:45.723886 kernel: with arguments:
Dec 13 14:08:45.723898 kernel: /init
Dec 13 14:08:45.723905 kernel: with environment:
Dec 13 14:08:45.723911 kernel: HOME=/
Dec 13 14:08:45.723918 kernel: TERM=linux
Dec 13 14:08:45.723924 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:08:45.723933 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:08:45.723943 systemd[1]: Detected virtualization kvm.
Dec 13 14:08:45.723951 systemd[1]: Detected architecture arm64.
Dec 13 14:08:45.723958 systemd[1]: Running in initrd.
Dec 13 14:08:45.723965 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:08:45.723971 systemd[1]: Hostname set to <localhost>.
Dec 13 14:08:45.723979 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:08:45.723986 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:08:45.723994 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:08:45.724001 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:08:45.724008 systemd[1]: Reached target paths.target.
Dec 13 14:08:45.724015 systemd[1]: Reached target slices.target.
Dec 13 14:08:45.724022 systemd[1]: Reached target swap.target.
Dec 13 14:08:45.724029 systemd[1]: Reached target timers.target.
Dec 13 14:08:45.724037 systemd[1]: Listening on iscsid.socket.
Dec 13 14:08:45.724045 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:08:45.724052 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:08:45.724059 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:08:45.724066 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:08:45.724073 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:08:45.724080 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:08:45.724088 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:08:45.724095 systemd[1]: Reached target sockets.target.
Dec 13 14:08:45.724102 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:08:45.724110 systemd[1]: Finished network-cleanup.service.
Dec 13 14:08:45.724118 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:08:45.724125 systemd[1]: Starting systemd-journald.service...
Dec 13 14:08:45.724132 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:08:45.724139 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:08:45.724146 systemd[1]: Starting systemd-vconsole-setup.service...
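Before starting any units, systemd logs the environment it detected (KVM guest, arm64, running from the initrd). The same detection can be repeated from a shell on the booted machine; systemd-detect-virt ships with systemd itself:

    systemd-detect-virt   # prints "kvm" here, matching "Detected virtualization kvm."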
Dec 13 14:08:45.724153 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:08:45.724160 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:08:45.724167 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:08:45.724175 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:08:45.724182 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:08:45.724192 systemd-journald[290]: Journal started
Dec 13 14:08:45.724236 systemd-journald[290]: Runtime Journal (/run/log/journal/3866698727d741a9b4e0783c32fecfd6) is 6.0M, max 48.7M, 42.6M free.
Dec 13 14:08:45.713476 systemd-modules-load[291]: Inserted module 'overlay'
Dec 13 14:08:45.725955 systemd[1]: Started systemd-journald.service.
Dec 13 14:08:45.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:45.730125 kernel: audit: type=1130 audit(1734098925.725:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:45.728239 systemd-resolved[292]: Positive Trust Anchors:
Dec 13 14:08:45.728245 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:08:45.728272 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:08:45.732931 systemd-resolved[292]: Defaulting to hostname 'linux'.
Dec 13 14:08:45.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:45.733669 systemd[1]: Started systemd-resolved.service.
Dec 13 14:08:45.742635 kernel: audit: type=1130 audit(1734098925.735:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:45.742651 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:08:45.742660 kernel: audit: type=1130 audit(1734098925.739:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:45.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:45.736441 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:08:45.739433 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:08:45.744462 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:08:45.747672 kernel: Bridge firewalling registered
Dec 13 14:08:45.747692 kernel: audit: type=1130 audit(1734098925.745:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:45.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:45.745462 systemd-modules-load[291]: Inserted module 'br_netfilter'
Dec 13 14:08:45.747690 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:08:45.755937 dracut-cmdline[310]: dracut-dracut-053
Dec 13 14:08:45.758077 kernel: SCSI subsystem initialized
Dec 13 14:08:45.758195 dracut-cmdline[310]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:08:45.765246 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:08:45.765291 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:08:45.765301 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:08:45.767435 systemd-modules-load[291]: Inserted module 'dm_multipath'
Dec 13 14:08:45.768152 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:08:45.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:45.771040 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:08:45.772563 kernel: audit: type=1130 audit(1734098925.768:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:45.777392 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:08:45.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:45.780775 kernel: audit: type=1130 audit(1734098925.777:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:45.815770 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:08:45.827777 kernel: iscsi: registered transport (tcp)
Dec 13 14:08:45.841772 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:08:45.841799 kernel: QLogic iSCSI HBA Driver
Dec 13 14:08:45.873913 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:08:45.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:45.875238 systemd[1]: Starting dracut-pre-udev.service...
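Note that dracut prepends rd.driver.pre=btrfs to the user-supplied parameters, which is why the "Btrfs loaded" line appears shortly below, before any Btrfs filesystem is touched. A trivial check that the forced module is resident on the running system:

    lsmod | grep -w btrfs   # should list the btrfs module pulled in by rd.driver.pre=btrfs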
Dec 13 14:08:45.877626 kernel: audit: type=1130 audit(1734098925.873:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:45.919765 kernel: raid6: neonx8 gen() 13680 MB/s
Dec 13 14:08:45.936768 kernel: raid6: neonx8 xor() 10781 MB/s
Dec 13 14:08:45.953761 kernel: raid6: neonx4 gen() 13538 MB/s
Dec 13 14:08:45.970768 kernel: raid6: neonx4 xor() 11229 MB/s
Dec 13 14:08:45.987771 kernel: raid6: neonx2 gen() 12974 MB/s
Dec 13 14:08:46.004763 kernel: raid6: neonx2 xor() 10546 MB/s
Dec 13 14:08:46.021769 kernel: raid6: neonx1 gen() 10535 MB/s
Dec 13 14:08:46.038771 kernel: raid6: neonx1 xor() 8740 MB/s
Dec 13 14:08:46.055762 kernel: raid6: int64x8 gen() 6257 MB/s
Dec 13 14:08:46.072761 kernel: raid6: int64x8 xor() 3543 MB/s
Dec 13 14:08:46.089767 kernel: raid6: int64x4 gen() 7221 MB/s
Dec 13 14:08:46.106774 kernel: raid6: int64x4 xor() 3852 MB/s
Dec 13 14:08:46.123768 kernel: raid6: int64x2 gen() 6130 MB/s
Dec 13 14:08:46.140771 kernel: raid6: int64x2 xor() 3319 MB/s
Dec 13 14:08:46.157761 kernel: raid6: int64x1 gen() 5040 MB/s
Dec 13 14:08:46.174936 kernel: raid6: int64x1 xor() 2646 MB/s
Dec 13 14:08:46.174948 kernel: raid6: using algorithm neonx8 gen() 13680 MB/s
Dec 13 14:08:46.174957 kernel: raid6: .... xor() 10781 MB/s, rmw enabled
Dec 13 14:08:46.174970 kernel: raid6: using neon recovery algorithm
Dec 13 14:08:46.186086 kernel: xor: measuring software checksum speed
Dec 13 14:08:46.186113 kernel: 8regs : 17227 MB/sec
Dec 13 14:08:46.186129 kernel: 32regs : 20744 MB/sec
Dec 13 14:08:46.186993 kernel: arm64_neon : 27626 MB/sec
Dec 13 14:08:46.187006 kernel: xor: using function: arm64_neon (27626 MB/sec)
Dec 13 14:08:46.242779 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Dec 13 14:08:46.252457 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:08:46.255839 kernel: audit: type=1130 audit(1734098926.252:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:46.255860 kernel: audit: type=1334 audit(1734098926.255:10): prog-id=7 op=LOAD
Dec 13 14:08:46.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:46.255000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:08:46.255000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:08:46.256180 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:08:46.269695 systemd-udevd[492]: Using default interface naming scheme 'v252'.
Dec 13 14:08:46.272969 systemd[1]: Started systemd-udevd.service.
Dec 13 14:08:46.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:46.280731 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:08:46.290916 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation
Dec 13 14:08:46.316213 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:08:46.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:46.317512 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:08:46.349161 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:08:46.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:46.383959 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 14:08:46.387652 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:08:46.387673 kernel: GPT:9289727 != 19775487
Dec 13 14:08:46.387682 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:08:46.387691 kernel: GPT:9289727 != 19775487
Dec 13 14:08:46.387698 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:08:46.387706 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 14:08:46.401771 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (548)
Dec 13 14:08:46.402939 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:08:46.403976 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:08:46.410042 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:08:46.416837 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:08:46.419900 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:08:46.422036 systemd[1]: Starting disk-uuid.service...
Dec 13 14:08:46.427623 disk-uuid[568]: Primary Header is updated.
Dec 13 14:08:46.427623 disk-uuid[568]: Secondary Entries is updated.
Dec 13 14:08:46.427623 disk-uuid[568]: Secondary Header is updated.
Dec 13 14:08:46.430774 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 14:08:47.442098 disk-uuid[569]: The operation has completed successfully.
Dec 13 14:08:47.443042 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 14:08:47.464262 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:08:47.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:47.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:47.464355 systemd[1]: Finished disk-uuid.service.
Dec 13 14:08:47.468177 systemd[1]: Starting verity-setup.service...
Dec 13 14:08:47.480787 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 14:08:47.499528 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 14:08:47.501536 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 14:08:47.503346 systemd[1]: Finished verity-setup.service.
Dec 13 14:08:47.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:47.549496 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 14:08:47.550486 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:08:47.550155 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 14:08:47.550823 systemd[1]: Starting ignition-setup.service...
Dec 13 14:08:47.552505 systemd[1]: Starting parse-ip-for-networkd.service...
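The repeated GPT warnings mean the disk image was enlarged after its partition table was written, so the backup GPT header sits mid-disk; disk-uuid.service then rewrites the headers ("Primary Header is updated ... The operation has completed successfully"). Done by hand, an equivalent repair looks like the following (sgdisk is from the gdisk package; the device name is the virtio disk from the log):

    sgdisk --move-second-header /dev/vda   # relocate the backup GPT header to the true end of the disk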
Dec 13 14:08:47.559146 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:08:47.559177 kernel: BTRFS info (device vda6): using free space tree
Dec 13 14:08:47.559188 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 14:08:47.566798 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 14:08:47.571915 systemd[1]: Finished ignition-setup.service.
Dec 13 14:08:47.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:47.573282 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 14:08:47.642459 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 14:08:47.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:47.643000 audit: BPF prog-id=9 op=LOAD
Dec 13 14:08:47.644330 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:08:47.656385 ignition[655]: Ignition 2.14.0
Dec 13 14:08:47.656397 ignition[655]: Stage: fetch-offline
Dec 13 14:08:47.656444 ignition[655]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:08:47.656453 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:08:47.656588 ignition[655]: parsed url from cmdline: ""
Dec 13 14:08:47.656591 ignition[655]: no config URL provided
Dec 13 14:08:47.656596 ignition[655]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:08:47.656603 ignition[655]: no config at "/usr/lib/ignition/user.ign"
Dec 13 14:08:47.656620 ignition[655]: op(1): [started] loading QEMU firmware config module
Dec 13 14:08:47.656625 ignition[655]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 14:08:47.664595 ignition[655]: op(1): [finished] loading QEMU firmware config module
Dec 13 14:08:47.668362 systemd-networkd[744]: lo: Link UP
Dec 13 14:08:47.668376 systemd-networkd[744]: lo: Gained carrier
Dec 13 14:08:47.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:47.668716 systemd-networkd[744]: Enumeration completed
Dec 13 14:08:47.668914 systemd-networkd[744]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:08:47.668988 systemd[1]: Started systemd-networkd.service.
Dec 13 14:08:47.669772 systemd-networkd[744]: eth0: Link UP
Dec 13 14:08:47.669775 systemd-networkd[744]: eth0: Gained carrier
Dec 13 14:08:47.670262 systemd[1]: Reached target network.target.
Dec 13 14:08:47.672031 systemd[1]: Starting iscsiuio.service...
Dec 13 14:08:47.680736 systemd[1]: Started iscsiuio.service.
Dec 13 14:08:47.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:47.682512 systemd[1]: Starting iscsid.service...
Dec 13 14:08:47.685660 iscsid[751]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:08:47.685660 iscsid[751]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 14:08:47.685660 iscsid[751]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 14:08:47.685660 iscsid[751]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:08:47.685660 iscsid[751]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:08:47.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:47.695504 iscsid[751]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 14:08:47.688448 systemd[1]: Started iscsid.service.
Dec 13 14:08:47.688475 systemd-networkd[744]: eth0: DHCPv4 address 10.0.0.75/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 14:08:47.692795 systemd[1]: Starting dracut-initqueue.service...
Dec 13 14:08:47.702721 systemd[1]: Finished dracut-initqueue.service.
Dec 13 14:08:47.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:47.703544 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 14:08:47.704724 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:08:47.706005 systemd[1]: Reached target remote-fs.target.
Dec 13 14:08:47.707879 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 14:08:47.715317 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 14:08:47.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:47.724034 ignition[655]: parsing config with SHA512: 45cfc0899cc2c394a8950514ec5d93ba91d15217ceaf9c1b9f8228d138329d9e0cb7601c3f6af185eff37603630234a492ef6969f5dd39735f786ed30ce1efa8
Dec 13 14:08:47.730674 unknown[655]: fetched base config from "system"
Dec 13 14:08:47.730687 unknown[655]: fetched user config from "qemu"
Dec 13 14:08:47.731236 ignition[655]: fetch-offline: fetch-offline passed
Dec 13 14:08:47.732128 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 14:08:47.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:47.731291 ignition[655]: Ignition finished successfully
Dec 13 14:08:47.733458 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 14:08:47.734164 systemd[1]: Starting ignition-kargs.service...
Dec 13 14:08:47.742950 ignition[765]: Ignition 2.14.0
Dec 13 14:08:47.742965 ignition[765]: Stage: kargs
Dec 13 14:08:47.743051 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:08:47.743060 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:08:47.743943 ignition[765]: kargs: kargs passed
Dec 13 14:08:47.745480 systemd[1]: Finished ignition-kargs.service.
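The iscsid warnings are benign here (the root filesystem is not on iSCSI), but the fix the message asks for is a one-liner; the IQN below is only an example value in the documented format:

    echo 'InitiatorName=iqn.2024-12.com.example:node1' > /etc/iscsi/initiatorname.iscsi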
Dec 13 14:08:47.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:47.743982 ignition[765]: Ignition finished successfully
Dec 13 14:08:47.747052 systemd[1]: Starting ignition-disks.service...
Dec 13 14:08:47.753142 ignition[771]: Ignition 2.14.0
Dec 13 14:08:47.753151 ignition[771]: Stage: disks
Dec 13 14:08:47.753236 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:08:47.753245 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:08:47.754198 ignition[771]: disks: disks passed
Dec 13 14:08:47.755530 systemd[1]: Finished ignition-disks.service.
Dec 13 14:08:47.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:47.754240 ignition[771]: Ignition finished successfully
Dec 13 14:08:47.756595 systemd[1]: Reached target initrd-root-device.target.
Dec 13 14:08:47.757516 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:08:47.758508 systemd[1]: Reached target local-fs.target.
Dec 13 14:08:47.759529 systemd[1]: Reached target sysinit.target.
Dec 13 14:08:47.760509 systemd[1]: Reached target basic.target.
Dec 13 14:08:47.762216 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 14:08:47.772791 systemd-fsck[779]: ROOT: clean, 621/553520 files, 56020/553472 blocks
Dec 13 14:08:47.775599 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 14:08:47.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:47.777062 systemd[1]: Mounting sysroot.mount...
Dec 13 14:08:47.781763 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 14:08:47.782056 systemd[1]: Mounted sysroot.mount.
Dec 13 14:08:47.782798 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 14:08:47.785245 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 14:08:47.786114 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 14:08:47.786152 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:08:47.786175 systemd[1]: Reached target ignition-diskful.target.
Dec 13 14:08:47.787980 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 14:08:47.789405 systemd[1]: Starting initrd-setup-root.service...
Dec 13 14:08:47.793520 initrd-setup-root[789]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:08:47.797940 initrd-setup-root[797]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:08:47.801460 initrd-setup-root[805]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:08:47.805287 initrd-setup-root[813]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:08:47.831719 systemd[1]: Finished initrd-setup-root.service.
Dec 13 14:08:47.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:47.833066 systemd[1]: Starting ignition-mount.service...
Dec 13 14:08:47.834171 systemd[1]: Starting sysroot-boot.service...
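systemd-fsck ran the ext4 checker against the root filesystem and found it clean (the kernel names the partition, vda9, just above when it mounts /sysroot). The same read-only check can be reproduced manually while the filesystem is unmounted:

    e2fsck -n /dev/vda9   # -n opens read-only and answers "no" to all prompts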
Dec 13 14:08:47.838791 bash[830]: umount: /sysroot/usr/share/oem: not mounted.
Dec 13 14:08:47.846104 ignition[832]: INFO : Ignition 2.14.0
Dec 13 14:08:47.846104 ignition[832]: INFO : Stage: mount
Dec 13 14:08:47.847357 ignition[832]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:08:47.847357 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:08:47.847357 ignition[832]: INFO : mount: mount passed
Dec 13 14:08:47.847357 ignition[832]: INFO : Ignition finished successfully
Dec 13 14:08:47.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:47.847877 systemd[1]: Finished ignition-mount.service.
Dec 13 14:08:47.852198 systemd[1]: Finished sysroot-boot.service.
Dec 13 14:08:47.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:48.509983 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:08:48.516416 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (840)
Dec 13 14:08:48.516460 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:08:48.516479 kernel: BTRFS info (device vda6): using free space tree
Dec 13 14:08:48.516861 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 14:08:48.519947 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:08:48.521205 systemd[1]: Starting ignition-files.service...
Dec 13 14:08:48.534316 ignition[860]: INFO : Ignition 2.14.0
Dec 13 14:08:48.534316 ignition[860]: INFO : Stage: files
Dec 13 14:08:48.535445 ignition[860]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:08:48.535445 ignition[860]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:08:48.535445 ignition[860]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 14:08:48.539485 ignition[860]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 14:08:48.539485 ignition[860]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 14:08:48.541783 ignition[860]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 14:08:48.541783 ignition[860]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 14:08:48.543642 ignition[860]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 14:08:48.543642 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 14:08:48.543642 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 14:08:48.541930 unknown[860]: wrote ssh authorized keys file for user: core
Dec 13 14:08:48.616456 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 14:08:48.838324 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 14:08:48.839815 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 14:08:48.841028 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Dec 13 14:08:49.183321 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 14:08:49.240145 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 14:08:49.240145 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 14:08:49.242711 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:08:49.242711 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:08:49.242711 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:08:49.242711 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:08:49.242711 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:08:49.242711 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:08:49.242711 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:08:49.242711 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:08:49.242711 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:08:49.242711 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 14:08:49.242711 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 14:08:49.242711 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 14:08:49.242711 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Dec 13 14:08:49.476786 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 14:08:49.692512 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 14:08:49.692512 ignition[860]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 14:08:49.695041 ignition[860]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:08:49.696764 ignition[860]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:08:49.696764 ignition[860]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 14:08:49.696764 ignition[860]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 13 14:08:49.696764 ignition[860]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 14:08:49.696764 ignition[860]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 14:08:49.696764 ignition[860]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 13 14:08:49.696764 ignition[860]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 14:08:49.696764 ignition[860]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 14:08:49.728027 systemd-networkd[744]: eth0: Gained IPv6LL
Dec 13 14:08:49.740448 ignition[860]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 14:08:49.741572 ignition[860]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 14:08:49.741572 ignition[860]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 14:08:49.741572 ignition[860]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 14:08:49.741572 ignition[860]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:08:49.741572 ignition[860]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:08:49.741572 ignition[860]: INFO : files: files passed
Dec 13 14:08:49.741572 ignition[860]: INFO : Ignition finished successfully
Dec 13 14:08:49.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:49.744943 systemd[1]: Finished ignition-files.service.
Dec 13 14:08:49.746915 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 14:08:49.752370 initrd-setup-root-after-ignition[886]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Dec 13 14:08:49.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:49.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:49.747796 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 14:08:49.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:08:49.755727 initrd-setup-root-after-ignition[888]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:08:49.748424 systemd[1]: Starting ignition-quench.service...
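Everything this files stage wrote (the helm and cilium downloads, the prepare-helm.service unit, the coreos-metadata preset) was driven by the user config Ignition fetched earlier from QEMU's fw_cfg device ("fetched user config from \"qemu\"", after "modprobe" "qemu_fw_cfg"). A sketch of how such a config is handed to the guest on the host side; opt/com.coreos/config is the fw_cfg key Ignition's QEMU provider reads, while the file name is illustrative:

    # other VM flags omitted; only the config-injection option is shown
    qemu-system-aarch64 -fw_cfg name=opt/com.coreos/config,file=./ignition.json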
Dec 13 14:08:49.751984 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:08:49.752062 systemd[1]: Finished ignition-quench.service. Dec 13 14:08:49.753689 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:08:49.754509 systemd[1]: Reached target ignition-complete.target. Dec 13 14:08:49.756816 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:08:49.768329 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:08:49.768412 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:08:49.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.769657 systemd[1]: Reached target initrd-fs.target. Dec 13 14:08:49.770542 systemd[1]: Reached target initrd.target. Dec 13 14:08:49.771478 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:08:49.772131 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:08:49.781923 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:08:49.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.783169 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:08:49.790389 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:08:49.791070 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:08:49.792280 systemd[1]: Stopped target timers.target. Dec 13 14:08:49.793263 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:08:49.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.793357 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:08:49.794322 systemd[1]: Stopped target initrd.target. Dec 13 14:08:49.795308 systemd[1]: Stopped target basic.target. Dec 13 14:08:49.796230 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:08:49.797249 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:08:49.798244 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:08:49.799346 systemd[1]: Stopped target remote-fs.target. Dec 13 14:08:49.800367 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:08:49.801494 systemd[1]: Stopped target sysinit.target. Dec 13 14:08:49.802511 systemd[1]: Stopped target local-fs.target. Dec 13 14:08:49.803494 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:08:49.804496 systemd[1]: Stopped target swap.target. Dec 13 14:08:49.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.805404 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:08:49.805513 systemd[1]: Stopped dracut-pre-mount.service. 
Dec 13 14:08:49.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.806518 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:08:49.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.807382 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:08:49.807481 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:08:49.808557 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:08:49.808653 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:08:49.809576 systemd[1]: Stopped target paths.target. Dec 13 14:08:49.810398 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:08:49.814803 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:08:49.815604 systemd[1]: Stopped target slices.target. Dec 13 14:08:49.816610 systemd[1]: Stopped target sockets.target. Dec 13 14:08:49.817507 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:08:49.817577 systemd[1]: Closed iscsid.socket. Dec 13 14:08:49.818478 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:08:49.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.818541 systemd[1]: Closed iscsiuio.socket. Dec 13 14:08:49.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.819422 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:08:49.819520 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:08:49.820440 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:08:49.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.820528 systemd[1]: Stopped ignition-files.service. Dec 13 14:08:49.822218 systemd[1]: Stopping ignition-mount.service... Dec 13 14:08:49.822993 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:08:49.823108 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:08:49.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.824858 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:08:49.825809 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:08:49.825940 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:08:49.826960 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Dec 13 14:08:49.831158 ignition[901]: INFO : Ignition 2.14.0 Dec 13 14:08:49.831158 ignition[901]: INFO : Stage: umount Dec 13 14:08:49.831158 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:08:49.831158 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:08:49.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.827061 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:08:49.840739 ignition[901]: INFO : umount: umount passed Dec 13 14:08:49.840739 ignition[901]: INFO : Ignition finished successfully Dec 13 14:08:49.830890 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:08:49.830978 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:08:49.832574 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:08:49.832652 systemd[1]: Stopped ignition-mount.service. Dec 13 14:08:49.834072 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:08:49.834302 systemd[1]: Stopped target network.target. Dec 13 14:08:49.834949 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:08:49.834992 systemd[1]: Stopped ignition-disks.service. Dec 13 14:08:49.835619 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:08:49.835651 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:08:49.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.836266 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:08:49.836300 systemd[1]: Stopped ignition-setup.service. Dec 13 14:08:49.837044 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:08:49.838017 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:08:49.847036 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:08:49.847138 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:08:49.847912 systemd-networkd[744]: eth0: DHCPv6 lease lost Dec 13 14:08:49.853665 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:08:49.853786 systemd[1]: Stopped systemd-networkd.service. 
Dec 13 14:08:49.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.855189 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:08:49.855000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:08:49.855217 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:08:49.857786 systemd[1]: Stopping network-cleanup.service... Dec 13 14:08:49.858000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:08:49.858301 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:08:49.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.858352 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:08:49.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.859548 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:08:49.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.859588 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:08:49.861195 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:08:49.861234 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:08:49.861975 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:08:49.867726 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:08:49.870200 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:08:49.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.870302 systemd[1]: Stopped network-cleanup.service. Dec 13 14:08:49.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.871314 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:08:49.871432 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:08:49.872544 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:08:49.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.872618 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:08:49.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.873387 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:08:49.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:08:49.873416 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:08:49.874453 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:08:49.874489 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:08:49.875437 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:08:49.875473 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:08:49.876615 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:08:49.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.876651 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:08:49.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:49.878329 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:08:49.879353 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:08:49.879401 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:08:49.881501 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:08:49.881587 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:08:49.882394 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:08:49.882431 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:08:49.883443 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:08:49.883514 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:08:49.884593 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:08:49.886185 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:08:49.892320 systemd[1]: Switching root. Dec 13 14:08:49.910801 iscsid[751]: iscsid shutting down. Dec 13 14:08:49.911342 systemd-journald[290]: Journal stopped Dec 13 14:08:51.888794 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Dec 13 14:08:51.888854 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:08:51.888871 kernel: SELinux: Class anon_inode not defined in policy. 
Dec 13 14:08:51.888891 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:08:51.888903 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:08:51.888913 kernel: SELinux: policy capability open_perms=1 Dec 13 14:08:51.888923 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:08:51.888933 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:08:51.888942 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:08:51.888952 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:08:51.888962 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:08:51.888971 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:08:51.888982 systemd[1]: Successfully loaded SELinux policy in 32.894ms. Dec 13 14:08:51.888998 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.298ms. Dec 13 14:08:51.889010 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:08:51.889021 systemd[1]: Detected virtualization kvm. Dec 13 14:08:51.889032 systemd[1]: Detected architecture arm64. Dec 13 14:08:51.889043 systemd[1]: Detected first boot. Dec 13 14:08:51.889053 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:08:51.889064 kernel: kauditd_printk_skb: 62 callbacks suppressed Dec 13 14:08:51.889075 kernel: audit: type=1400 audit(1734098930.073:73): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:08:51.889088 kernel: audit: type=1400 audit(1734098930.073:74): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:08:51.889098 kernel: audit: type=1334 audit(1734098930.073:75): prog-id=10 op=LOAD Dec 13 14:08:51.889107 kernel: audit: type=1334 audit(1734098930.073:76): prog-id=10 op=UNLOAD Dec 13 14:08:51.889117 kernel: audit: type=1334 audit(1734098930.075:77): prog-id=11 op=LOAD Dec 13 14:08:51.889126 kernel: audit: type=1334 audit(1734098930.075:78): prog-id=11 op=UNLOAD Dec 13 14:08:51.889136 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Dec 13 14:08:51.889146 kernel: audit: type=1400 audit(1734098930.110:79): avc: denied { associate } for pid=934 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:08:51.889159 kernel: audit: type=1300 audit(1734098930.110:79): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001bd8ac a1=400013ede0 a2=40001450c0 a3=32 items=0 ppid=917 pid=934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.889169 kernel: audit: type=1327 audit(1734098930.110:79): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:08:51.889194 kernel: audit: type=1400 audit(1734098930.111:80): avc: denied { associate } for pid=934 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:08:51.889206 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:08:51.889219 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:08:51.889229 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:08:51.889241 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:08:51.889254 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:08:51.889266 systemd[1]: Stopped iscsiuio.service. Dec 13 14:08:51.889277 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:08:51.889288 systemd[1]: Stopped iscsid.service. Dec 13 14:08:51.889299 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 14:08:51.889332 systemd[1]: Stopped initrd-switch-root.service. Dec 13 14:08:51.889347 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 14:08:51.889358 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:08:51.889370 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:08:51.889381 systemd[1]: Created slice system-getty.slice. Dec 13 14:08:51.889391 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:08:51.889406 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:08:51.889420 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:08:51.889432 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:08:51.889443 systemd[1]: Created slice user.slice. Dec 13 14:08:51.889453 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:08:51.889464 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:08:51.889474 systemd[1]: Set up automount boot.automount. Dec 13 14:08:51.889484 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:08:51.889494 systemd[1]: Stopped target initrd-switch-root.target. 
Dec 13 14:08:51.889504 systemd[1]: Stopped target initrd-fs.target. Dec 13 14:08:51.889514 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 14:08:51.889525 systemd[1]: Reached target integritysetup.target. Dec 13 14:08:51.889536 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:08:51.889547 systemd[1]: Reached target remote-fs.target. Dec 13 14:08:51.889557 systemd[1]: Reached target slices.target. Dec 13 14:08:51.889567 systemd[1]: Reached target swap.target. Dec 13 14:08:51.889577 systemd[1]: Reached target torcx.target. Dec 13 14:08:51.889588 systemd[1]: Reached target veritysetup.target. Dec 13 14:08:51.889598 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:08:51.889609 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:08:51.889619 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:08:51.889631 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:08:51.889642 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:08:51.889653 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:08:51.889663 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:08:51.889673 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:08:51.889687 systemd[1]: Mounting media.mount... Dec 13 14:08:51.889698 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:08:51.889709 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:08:51.889719 systemd[1]: Mounting tmp.mount... Dec 13 14:08:51.889730 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:08:51.889742 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:08:51.889767 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:08:51.889780 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:08:51.889794 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:08:51.889805 systemd[1]: Starting modprobe@drm.service... Dec 13 14:08:51.889816 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:08:51.889826 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:08:51.889836 systemd[1]: Starting modprobe@loop.service... Dec 13 14:08:51.889848 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:08:51.889859 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 14:08:51.889870 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 14:08:51.889887 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 14:08:51.889902 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 14:08:51.889912 systemd[1]: Stopped systemd-journald.service. Dec 13 14:08:51.889922 kernel: loop: module loaded Dec 13 14:08:51.889932 systemd[1]: Starting systemd-journald.service... Dec 13 14:08:51.889943 kernel: fuse: init (API version 7.34) Dec 13 14:08:51.889955 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:08:51.889968 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:08:51.889978 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:08:51.889988 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:08:51.889999 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 14:08:51.890009 systemd[1]: Stopped verity-setup.service. Dec 13 14:08:51.890021 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:08:51.890031 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:08:51.890070 systemd[1]: Mounted media.mount. 
Dec 13 14:08:51.890085 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:08:51.890098 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:08:51.890108 systemd[1]: Mounted tmp.mount. Dec 13 14:08:51.890119 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:08:51.890129 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:08:51.890140 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:08:51.890153 systemd-journald[999]: Journal started Dec 13 14:08:51.890198 systemd-journald[999]: Runtime Journal (/run/log/journal/3866698727d741a9b4e0783c32fecfd6) is 6.0M, max 48.7M, 42.6M free. Dec 13 14:08:49.971000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:08:50.073000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:08:50.073000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:08:50.073000 audit: BPF prog-id=10 op=LOAD Dec 13 14:08:50.073000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:08:50.075000 audit: BPF prog-id=11 op=LOAD Dec 13 14:08:50.075000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:08:50.110000 audit[934]: AVC avc: denied { associate } for pid=934 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:08:50.110000 audit[934]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001bd8ac a1=400013ede0 a2=40001450c0 a3=32 items=0 ppid=917 pid=934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:50.110000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:08:50.111000 audit[934]: AVC avc: denied { associate } for pid=934 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:08:50.111000 audit[934]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001bd985 a2=1ed a3=0 items=2 ppid=917 pid=934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:50.111000 audit: CWD cwd="/" Dec 13 14:08:50.111000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:08:50.111000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:08:50.111000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:08:51.775000 audit: BPF prog-id=12 op=LOAD Dec 13 14:08:51.776000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:08:51.776000 audit: BPF prog-id=13 op=LOAD Dec 13 14:08:51.776000 audit: BPF prog-id=14 op=LOAD Dec 13 14:08:51.776000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:08:51.776000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:08:51.776000 audit: BPF prog-id=15 op=LOAD Dec 13 14:08:51.776000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:08:51.776000 audit: BPF prog-id=16 op=LOAD Dec 13 14:08:51.776000 audit: BPF prog-id=17 op=LOAD Dec 13 14:08:51.776000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:08:51.776000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:08:51.777000 audit: BPF prog-id=18 op=LOAD Dec 13 14:08:51.777000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:08:51.777000 audit: BPF prog-id=19 op=LOAD Dec 13 14:08:51.777000 audit: BPF prog-id=20 op=LOAD Dec 13 14:08:51.777000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:08:51.777000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:08:51.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.891037 systemd[1]: Started systemd-journald.service. Dec 13 14:08:51.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.787000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:08:51.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:08:51.864000 audit: BPF prog-id=21 op=LOAD Dec 13 14:08:51.865000 audit: BPF prog-id=22 op=LOAD Dec 13 14:08:51.865000 audit: BPF prog-id=23 op=LOAD Dec 13 14:08:51.865000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:08:51.865000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:08:51.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.887000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:08:51.887000 audit[999]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffe810db50 a2=4000 a3=1 items=0 ppid=1 pid=999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.887000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:08:51.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:50.109243 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-12-13T14:08:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:08:51.774711 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:08:51.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:50.109863 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-12-13T14:08:50Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:08:51.774722 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 14:08:50.109894 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-12-13T14:08:50Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:08:51.778726 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 14:08:50.109927 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-12-13T14:08:50Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 14:08:51.891847 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Dec 13 14:08:50.109936 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-12-13T14:08:50Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 14:08:51.892005 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:08:50.109967 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-12-13T14:08:50Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 14:08:50.109979 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-12-13T14:08:50Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 14:08:50.110162 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-12-13T14:08:50Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 14:08:50.110194 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-12-13T14:08:50Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:08:50.110205 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-12-13T14:08:50Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:08:51.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:50.110980 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-12-13T14:08:50Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 14:08:50.111012 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-12-13T14:08:50Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 14:08:50.111029 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-12-13T14:08:50Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 14:08:50.111042 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-12-13T14:08:50Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 14:08:50.111060 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-12-13T14:08:50Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 14:08:51.893106 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Dec 13 14:08:50.111074 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-12-13T14:08:50Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 14:08:51.516080 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-12-13T14:08:51Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:08:51.893254 systemd[1]: Finished modprobe@drm.service. Dec 13 14:08:51.516345 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-12-13T14:08:51Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:08:51.516446 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-12-13T14:08:51Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:08:51.516611 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-12-13T14:08:51Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:08:51.516658 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-12-13T14:08:51Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 14:08:51.516713 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2024-12-13T14:08:51Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 14:08:51.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.894309 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:08:51.894458 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:08:51.895314 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:08:51.895450 systemd[1]: Finished modprobe@fuse.service. 
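The torcx-generator trace above walks the whole profile pipeline: vendor.json is the only profile selected, /etc/torcx/next-profile does not exist, the docker:com.coreos.cl archive is unpacked from /usr/share/torcx/store, and its binaries plus systemd/networkd units are propagated before the state is sealed into /run/metadata/torcx. A vendor profile selecting that image would look roughly like this (a sketch of the torcx profile-manifest format, not the literal shipped file):

    {
      "kind": "profile-manifest-v0",
      "value": {
        "images": [
          { "name": "docker", "reference": "com.coreos.cl" }
        ]
      }
    }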
Dec 13 14:08:51.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.896335 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:08:51.896488 systemd[1]: Finished modprobe@loop.service. Dec 13 14:08:51.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.897350 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:08:51.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.898253 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:08:51.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.899162 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:08:51.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.900156 systemd[1]: Reached target network-pre.target. Dec 13 14:08:51.901900 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:08:51.903658 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:08:51.904250 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:08:51.905679 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:08:51.907450 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:08:51.908206 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:08:51.909256 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:08:51.910044 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:08:51.911064 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:08:51.914564 systemd-journald[999]: Time spent on flushing to /var/log/journal/3866698727d741a9b4e0783c32fecfd6 is 21.077ms for 997 entries. Dec 13 14:08:51.914564 systemd-journald[999]: System Journal (/var/log/journal/3866698727d741a9b4e0783c32fecfd6) is 8.0M, max 195.6M, 187.6M free. Dec 13 14:08:51.948198 systemd-journald[999]: Received client request to flush runtime journal. 
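journald sized the runtime journal at 6.0M (max 48.7M) and the persistent journal at 8.0M (max 195.6M); by default both caps are derived from the backing filesystem size. If fixed limits were wanted instead, a drop-in would be the usual route; a sketch with illustrative values:

    # /etc/systemd/journald.conf.d/10-size.conf  (hypothetical drop-in)
    [Journal]
    RuntimeMaxUse=48M
    SystemMaxUse=195M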
Dec 13 14:08:51.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.914335 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:08:51.915949 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:08:51.916692 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:08:51.949415 udevadm[1036]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 14:08:51.918544 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:08:51.926422 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:08:51.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:51.927220 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:08:51.928191 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:08:51.929954 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:08:51.932588 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:08:51.939574 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:08:51.949068 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:08:52.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:52.281775 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:08:52.282000 audit: BPF prog-id=24 op=LOAD Dec 13 14:08:52.282000 audit: BPF prog-id=25 op=LOAD Dec 13 14:08:52.282000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:08:52.282000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:08:52.283731 systemd[1]: Starting systemd-udevd.service... Dec 13 14:08:52.303829 systemd-udevd[1038]: Using default interface naming scheme 'v252'. Dec 13 14:08:52.314809 systemd[1]: Started systemd-udevd.service. Dec 13 14:08:52.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:52.315000 audit: BPF prog-id=26 op=LOAD Dec 13 14:08:52.317695 systemd[1]: Starting systemd-networkd.service... 
Dec 13 14:08:52.327000 audit: BPF prog-id=27 op=LOAD Dec 13 14:08:52.327000 audit: BPF prog-id=28 op=LOAD Dec 13 14:08:52.327000 audit: BPF prog-id=29 op=LOAD Dec 13 14:08:52.328593 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:08:52.340906 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Dec 13 14:08:52.356134 systemd[1]: Started systemd-userdbd.service. Dec 13 14:08:52.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:52.365785 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:08:52.415082 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:08:52.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:52.416821 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:08:52.425468 lvm[1071]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:08:52.425534 systemd-networkd[1047]: lo: Link UP Dec 13 14:08:52.425538 systemd-networkd[1047]: lo: Gained carrier Dec 13 14:08:52.425882 systemd-networkd[1047]: Enumeration completed Dec 13 14:08:52.425974 systemd[1]: Started systemd-networkd.service. Dec 13 14:08:52.425987 systemd-networkd[1047]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:08:52.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:52.427135 systemd-networkd[1047]: eth0: Link UP Dec 13 14:08:52.427145 systemd-networkd[1047]: eth0: Gained carrier Dec 13 14:08:52.453923 systemd-networkd[1047]: eth0: DHCPv4 address 10.0.0.75/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 14:08:52.454543 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:08:52.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:52.455315 systemd[1]: Reached target cryptsetup.target. Dec 13 14:08:52.456946 systemd[1]: Starting lvm2-activation.service... Dec 13 14:08:52.460368 lvm[1072]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:08:52.485462 systemd[1]: Finished lvm2-activation.service. Dec 13 14:08:52.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:52.486196 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:08:52.486803 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:08:52.486833 systemd[1]: Reached target local-fs.target. Dec 13 14:08:52.487396 systemd[1]: Reached target machines.target. Dec 13 14:08:52.489024 systemd[1]: Starting ldconfig.service... Dec 13 14:08:52.489901 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
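systemd-networkd matched eth0 against the stock /usr/lib/systemd/network/zz-default.network and acquired 10.0.0.75/16 with gateway 10.0.0.1 over DHCPv4. A minimal stand-in for what that catch-all policy amounts to (the shipped file is more elaborate; this is a sketch):

    [Match]
    Name=eth0

    [Network]
    DHCP=yes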
Dec 13 14:08:52.489955 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:08:52.490930 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:08:52.492496 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:08:52.494288 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:08:52.496710 systemd[1]: Starting systemd-sysext.service... Dec 13 14:08:52.498159 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1074 (bootctl) Dec 13 14:08:52.499312 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:08:52.503777 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:08:52.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:52.513369 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:08:52.519207 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:08:52.519379 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:08:52.563777 kernel: loop0: detected capacity change from 0 to 194096 Dec 13 14:08:52.565643 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:08:52.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:52.575956 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:08:52.580477 systemd-fsck[1083]: fsck.fat 4.2 (2021-01-31) Dec 13 14:08:52.580477 systemd-fsck[1083]: /dev/vda1: 236 files, 117175/258078 clusters Dec 13 14:08:52.585590 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:08:52.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:52.595814 kernel: loop1: detected capacity change from 0 to 194096 Dec 13 14:08:52.600742 (sd-sysext)[1088]: Using extensions 'kubernetes'. Dec 13 14:08:52.601125 (sd-sysext)[1088]: Merged extensions into '/usr'. Dec 13 14:08:52.617021 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:08:52.618364 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:08:52.620479 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:08:52.622339 systemd[1]: Starting modprobe@loop.service... Dec 13 14:08:52.623060 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:08:52.623184 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:08:52.623934 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:08:52.624056 systemd[1]: Finished modprobe@dm_mod.service. 
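The (sd-sysext) merge of extension 'kubernetes' into /usr closes the loop opened by Ignition earlier: the kubernetes-v1.30.1-arm64.raw image written to /opt/extensions and linked at /etc/extensions/kubernetes.raw is now overlaid onto the read-only base /usr tree. Assuming the standard systemd-sysext tool on this image, the merge state can be inspected or redone from a shell:

    # show which extensions are merged into which hierarchies
    systemd-sysext status

    # re-scan /etc/extensions and re-apply after adding or removing a .raw image
    systemd-sysext refresh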
Dec 13 14:08:52.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:52.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:52.625140 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:08:52.625246 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:08:52.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:52.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:52.626414 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:08:52.626515 systemd[1]: Finished modprobe@loop.service. Dec 13 14:08:52.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:52.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:52.627623 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:08:52.627719 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:08:52.658974 ldconfig[1073]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:08:52.662616 systemd[1]: Finished ldconfig.service. Dec 13 14:08:52.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:52.879834 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:08:52.881549 systemd[1]: Mounting boot.mount... Dec 13 14:08:52.883288 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:08:52.888962 systemd[1]: Mounted boot.mount. Dec 13 14:08:52.889708 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:08:52.891402 systemd[1]: Finished systemd-sysext.service. Dec 13 14:08:52.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:52.893333 systemd[1]: Starting ensure-sysext.service... Dec 13 14:08:52.895069 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:08:52.896096 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:08:52.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:08:52.899817 systemd[1]: Reloading. Dec 13 14:08:52.906099 systemd-tmpfiles[1096]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:08:52.907726 systemd-tmpfiles[1096]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:08:52.910387 systemd-tmpfiles[1096]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:08:52.943928 /usr/lib/systemd/system-generators/torcx-generator[1116]: time="2024-12-13T14:08:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:08:52.944062 /usr/lib/systemd/system-generators/torcx-generator[1116]: time="2024-12-13T14:08:52Z" level=info msg="torcx already run" Dec 13 14:08:53.001820 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:08:53.001841 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:08:53.017476 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:08:53.057000 audit: BPF prog-id=30 op=LOAD Dec 13 14:08:53.057000 audit: BPF prog-id=31 op=LOAD Dec 13 14:08:53.057000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:08:53.057000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:08:53.057000 audit: BPF prog-id=32 op=LOAD Dec 13 14:08:53.057000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:08:53.057000 audit: BPF prog-id=33 op=LOAD Dec 13 14:08:53.057000 audit: BPF prog-id=34 op=LOAD Dec 13 14:08:53.057000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:08:53.057000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:08:53.059000 audit: BPF prog-id=35 op=LOAD Dec 13 14:08:53.059000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:08:53.061000 audit: BPF prog-id=36 op=LOAD Dec 13 14:08:53.061000 audit: BPF prog-id=27 op=UNLOAD Dec 13 14:08:53.061000 audit: BPF prog-id=37 op=LOAD Dec 13 14:08:53.061000 audit: BPF prog-id=38 op=LOAD Dec 13 14:08:53.061000 audit: BPF prog-id=28 op=UNLOAD Dec 13 14:08:53.061000 audit: BPF prog-id=29 op=UNLOAD Dec 13 14:08:53.063492 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:08:53.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:53.067427 systemd[1]: Starting audit-rules.service... Dec 13 14:08:53.069121 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:08:53.071105 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:08:53.074000 audit: BPF prog-id=39 op=LOAD Dec 13 14:08:53.076114 systemd[1]: Starting systemd-resolved.service... Dec 13 14:08:53.078000 audit: BPF prog-id=40 op=LOAD Dec 13 14:08:53.081697 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:08:53.083683 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:08:53.091082 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
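[Note] Across the "Reloading." the manager detaches and re-attaches its cgroup BPF programs, which is why each old prog-id gets an UNLOAD paired with a freshly LOADed replacement. A small sketch that replays such audit records to see which program IDs remain loaded; the regex mirrors the lines above and is my assumption:

    import re

    AUDIT_BPF = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

    def live_progs(lines):
        """Replay LOAD/UNLOAD audit records; return prog-ids still loaded."""
        live = set()
        for line in lines:
            m = AUDIT_BPF.search(line)
            if not m:
                continue
            prog_id, op = int(m.group(1)), m.group(2)
            if op == "LOAD":
                live.add(prog_id)
            else:
                live.discard(prog_id)
        return sorted(live)

    sample = [
        "audit: BPF prog-id=30 op=LOAD",
        "audit: BPF prog-id=24 op=UNLOAD",
        "audit: BPF prog-id=30 op=UNLOAD",
    ]
    print(live_progs(sample))  # []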
Dec 13 14:08:53.092216 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:08:53.094837 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:08:53.094000 audit[1166]: SYSTEM_BOOT pid=1166 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:08:53.096504 systemd[1]: Starting modprobe@loop.service... Dec 13 14:08:53.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:53.097180 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:08:53.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:53.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:53.097302 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:08:53.098101 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:08:53.099295 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:08:53.099404 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:08:53.100487 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:08:53.100593 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:08:53.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:53.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:53.101831 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:08:53.101943 systemd[1]: Finished modprobe@loop.service. Dec 13 14:08:53.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:53.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:53.104541 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:08:53.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:53.105818 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
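[Note] Many of the skips above come from unit condition checks such as ConditionPathExists= and ConditionDirectoryNotEmpty=. The semantics are simple: the test must pass for the unit to run, and a leading '!' negates it. A sketch of the path-exists variant:

    import os.path

    def condition_path_exists(arg: str) -> bool:
        """Mirror systemd's ConditionPathExists=, including '!' negation."""
        negate = arg.startswith("!")
        path = arg.lstrip("!")
        return os.path.exists(path) != negate

    # systemd-boot-system-token.service is skipped above because this efivars
    # path is absent on this machine, so the condition evaluates to False:
    print(condition_path_exists(
        "/sys/firmware/efi/efivars/"
        "LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f"))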
Dec 13 14:08:53.105969 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:08:53.107282 systemd[1]: Starting systemd-update-done.service... Dec 13 14:08:53.108063 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:08:53.109970 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:08:53.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:53.111803 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:08:53.113015 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:08:53.114644 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:08:53.116572 systemd[1]: Starting modprobe@loop.service... Dec 13 14:08:53.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:53.117310 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:08:53.117489 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:08:53.117632 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:08:53.118530 systemd[1]: Finished systemd-update-done.service. Dec 13 14:08:53.119599 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:08:53.119707 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:08:53.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:53.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:53.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:53.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:53.120726 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:08:53.120850 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:08:53.121852 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:08:53.121968 systemd[1]: Finished modprobe@loop.service. Dec 13 14:08:53.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:08:53.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:53.124795 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:08:53.126024 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:08:53.127771 systemd[1]: Starting modprobe@drm.service... Dec 13 14:08:53.128000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:08:53.128000 audit[1181]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd70b54f0 a2=420 a3=0 items=0 ppid=1155 pid=1181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.128000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:08:53.129613 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:08:53.131049 augenrules[1181]: No rules Dec 13 14:08:53.131381 systemd[1]: Starting modprobe@loop.service... Dec 13 14:08:53.132047 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:08:53.132179 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:08:53.133322 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:08:53.134168 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:08:53.135195 systemd[1]: Finished audit-rules.service. Dec 13 14:08:53.136235 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:08:53.136358 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:08:53.137312 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:08:53.137416 systemd[1]: Finished modprobe@drm.service. Dec 13 14:08:53.138376 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:08:53.138485 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:08:53.139467 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:08:53.139574 systemd[1]: Finished modprobe@loop.service. Dec 13 14:08:53.140914 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:08:53.141007 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:08:53.142072 systemd[1]: Finished ensure-sysext.service. Dec 13 14:08:53.143350 systemd-resolved[1159]: Positive Trust Anchors: Dec 13 14:08:53.143361 systemd-resolved[1159]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:08:53.143389 systemd-resolved[1159]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:08:53.147207 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:08:53.610551 systemd-timesyncd[1163]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 14:08:53.610604 systemd-timesyncd[1163]: Initial clock synchronization to Fri 2024-12-13 14:08:53.610471 UTC. Dec 13 14:08:53.610867 systemd[1]: Reached target time-set.target. Dec 13 14:08:53.618145 systemd-resolved[1159]: Defaulting to hostname 'linux'. Dec 13 14:08:53.619537 systemd[1]: Started systemd-resolved.service. Dec 13 14:08:53.620169 systemd[1]: Reached target network.target. Dec 13 14:08:53.620746 systemd[1]: Reached target nss-lookup.target. Dec 13 14:08:53.621303 systemd[1]: Reached target sysinit.target. Dec 13 14:08:53.621994 systemd[1]: Started motdgen.path. Dec 13 14:08:53.622541 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:08:53.623482 systemd[1]: Started logrotate.timer. Dec 13 14:08:53.624092 systemd[1]: Started mdadm.timer. Dec 13 14:08:53.624600 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:08:53.625208 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:08:53.625239 systemd[1]: Reached target paths.target. Dec 13 14:08:53.625823 systemd[1]: Reached target timers.target. Dec 13 14:08:53.626661 systemd[1]: Listening on dbus.socket. Dec 13 14:08:53.628121 systemd[1]: Starting docker.socket... Dec 13 14:08:53.630981 systemd[1]: Listening on sshd.socket. Dec 13 14:08:53.631676 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:08:53.632089 systemd[1]: Listening on docker.socket. Dec 13 14:08:53.632738 systemd[1]: Reached target sockets.target. Dec 13 14:08:53.633283 systemd[1]: Reached target basic.target. Dec 13 14:08:53.633885 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:08:53.633916 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:08:53.634852 systemd[1]: Starting containerd.service... Dec 13 14:08:53.636328 systemd[1]: Starting dbus.service... Dec 13 14:08:53.637811 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:08:53.639510 systemd[1]: Starting extend-filesystems.service... Dec 13 14:08:53.640233 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:08:53.643584 systemd[1]: Starting motdgen.service... Dec 13 14:08:53.644692 jq[1197]: false Dec 13 14:08:53.645967 systemd[1]: Starting prepare-helm.service... Dec 13 14:08:53.647606 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:08:53.649329 systemd[1]: Starting sshd-keygen.service... 
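[Note] Two things worth noting in this stretch: the roughly 0.46 s jump in the timestamps (14:08:53.147 to 14:08:53.610) is systemd-timesyncd stepping the clock at initial synchronization, not a stall; and the positive trust anchor resolved logs is the root-zone DNSSEC DS record. Decoding its fields, with field meanings per RFC 4034 (the annotations in parentheses are mine):

    # ". IN DS <key tag> <algorithm> <digest type> <digest>"
    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _cls, _rtype, key_tag, alg, digest_type, digest = ds.split(maxsplit=6)
    print(f"owner={owner} key_tag={key_tag}")
    print(f"algorithm={alg} (8 = RSA/SHA-256), digest_type={digest_type} (2 = SHA-256)")
    print(f"digest={digest}")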
Dec 13 14:08:53.652051 systemd[1]: Starting systemd-logind.service... Dec 13 14:08:53.652693 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:08:53.652775 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:08:53.653147 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 14:08:53.653796 systemd[1]: Starting update-engine.service... Dec 13 14:08:53.657471 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:08:53.659597 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:08:53.659765 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:08:53.660045 jq[1215]: true Dec 13 14:08:53.660796 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:08:53.660945 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:08:53.668179 jq[1218]: true Dec 13 14:08:53.674809 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:08:53.674986 systemd[1]: Finished motdgen.service. Dec 13 14:08:53.677768 dbus-daemon[1196]: [system] SELinux support is enabled Dec 13 14:08:53.677909 systemd[1]: Started dbus.service. Dec 13 14:08:53.680707 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:08:53.680738 systemd[1]: Reached target system-config.target. Dec 13 14:08:53.681409 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:08:53.681439 systemd[1]: Reached target user-config.target. Dec 13 14:08:53.685395 tar[1217]: linux-arm64/helm Dec 13 14:08:53.685590 extend-filesystems[1198]: Found loop1 Dec 13 14:08:53.685590 extend-filesystems[1198]: Found vda Dec 13 14:08:53.685590 extend-filesystems[1198]: Found vda1 Dec 13 14:08:53.685590 extend-filesystems[1198]: Found vda2 Dec 13 14:08:53.685590 extend-filesystems[1198]: Found vda3 Dec 13 14:08:53.685590 extend-filesystems[1198]: Found usr Dec 13 14:08:53.685590 extend-filesystems[1198]: Found vda4 Dec 13 14:08:53.685590 extend-filesystems[1198]: Found vda6 Dec 13 14:08:53.685590 extend-filesystems[1198]: Found vda7 Dec 13 14:08:53.685590 extend-filesystems[1198]: Found vda9 Dec 13 14:08:53.685590 extend-filesystems[1198]: Checking size of /dev/vda9 Dec 13 14:08:53.707460 extend-filesystems[1198]: Resized partition /dev/vda9 Dec 13 14:08:53.717327 extend-filesystems[1241]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:08:53.722454 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 14:08:53.737475 update_engine[1210]: I1213 14:08:53.736953 1210 main.cc:92] Flatcar Update Engine starting Dec 13 14:08:53.740301 systemd[1]: Started update-engine.service. Dec 13 14:08:53.743220 systemd[1]: Started locksmithd.service. Dec 13 14:08:53.744632 update_engine[1210]: I1213 14:08:53.740367 1210 update_check_scheduler.cc:74] Next update check in 7m53s Dec 13 14:08:53.745248 systemd-logind[1208]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 14:08:53.748105 systemd-logind[1208]: New seat seat0. 
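[Note] extend-filesystems has enumerated the partitions and resize2fs is growing /dev/vda9 online; with the 4k block size shown in its output, the block counts announced by the kernel convert as follows (plain unit arithmetic, nothing assumed beyond the logged numbers):

    # EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks (4k)
    BLOCK = 4096
    for label, blocks in (("old", 553472), ("new", 1864699)):
        print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
    # old: 2.11 GiB
    # new: 7.11 GiB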
Dec 13 14:08:53.753435 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 14:08:53.755261 systemd[1]: Started systemd-logind.service. Dec 13 14:08:53.769714 bash[1246]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:08:53.772871 extend-filesystems[1241]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 14:08:53.772871 extend-filesystems[1241]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 14:08:53.772871 extend-filesystems[1241]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 14:08:53.776596 extend-filesystems[1198]: Resized filesystem in /dev/vda9 Dec 13 14:08:53.774364 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:08:53.776004 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:08:53.776135 systemd[1]: Finished extend-filesystems.service. Dec 13 14:08:53.781663 env[1219]: time="2024-12-13T14:08:53.778863544Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:08:53.803052 env[1219]: time="2024-12-13T14:08:53.803011344Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:08:53.803272 env[1219]: time="2024-12-13T14:08:53.803251744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:08:53.804877 env[1219]: time="2024-12-13T14:08:53.804835744Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:08:53.804954 env[1219]: time="2024-12-13T14:08:53.804878064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:08:53.805194 env[1219]: time="2024-12-13T14:08:53.805171624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:08:53.805229 env[1219]: time="2024-12-13T14:08:53.805193344Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:08:53.805229 env[1219]: time="2024-12-13T14:08:53.805207024Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:08:53.805229 env[1219]: time="2024-12-13T14:08:53.805216464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:08:53.805313 env[1219]: time="2024-12-13T14:08:53.805296064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:08:53.805457 locksmithd[1248]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:08:53.805664 env[1219]: time="2024-12-13T14:08:53.805584344Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:08:53.805722 env[1219]: time="2024-12-13T14:08:53.805702064Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:08:53.805722 env[1219]: time="2024-12-13T14:08:53.805720504Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:08:53.805786 env[1219]: time="2024-12-13T14:08:53.805770864Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:08:53.805813 env[1219]: time="2024-12-13T14:08:53.805787624Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:08:53.811233 env[1219]: time="2024-12-13T14:08:53.811206584Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:08:53.811353 env[1219]: time="2024-12-13T14:08:53.811337984Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:08:53.811457 env[1219]: time="2024-12-13T14:08:53.811440184Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:08:53.811557 env[1219]: time="2024-12-13T14:08:53.811540104Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:08:53.811631 env[1219]: time="2024-12-13T14:08:53.811617104Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:08:53.811702 env[1219]: time="2024-12-13T14:08:53.811688544Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:08:53.811761 env[1219]: time="2024-12-13T14:08:53.811748144Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:08:53.812167 env[1219]: time="2024-12-13T14:08:53.812138984Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:08:53.812251 env[1219]: time="2024-12-13T14:08:53.812234984Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:08:53.812311 env[1219]: time="2024-12-13T14:08:53.812296584Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:08:53.812372 env[1219]: time="2024-12-13T14:08:53.812357824Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:08:53.812451 env[1219]: time="2024-12-13T14:08:53.812436504Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:08:53.812631 env[1219]: time="2024-12-13T14:08:53.812610864Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:08:53.812792 env[1219]: time="2024-12-13T14:08:53.812773744Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:08:53.813095 env[1219]: time="2024-12-13T14:08:53.813072344Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:08:53.813187 env[1219]: time="2024-12-13T14:08:53.813171624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Dec 13 14:08:53.813247 env[1219]: time="2024-12-13T14:08:53.813232664Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:08:53.813429 env[1219]: time="2024-12-13T14:08:53.813412384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:08:53.813495 env[1219]: time="2024-12-13T14:08:53.813481424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:08:53.813565 env[1219]: time="2024-12-13T14:08:53.813551624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:08:53.813638 env[1219]: time="2024-12-13T14:08:53.813624504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:08:53.813695 env[1219]: time="2024-12-13T14:08:53.813681704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:08:53.813754 env[1219]: time="2024-12-13T14:08:53.813739584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:08:53.813811 env[1219]: time="2024-12-13T14:08:53.813797424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:08:53.813869 env[1219]: time="2024-12-13T14:08:53.813854904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:08:53.813938 env[1219]: time="2024-12-13T14:08:53.813924984Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:08:53.814114 env[1219]: time="2024-12-13T14:08:53.814095704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:08:53.814185 env[1219]: time="2024-12-13T14:08:53.814169624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:08:53.814250 env[1219]: time="2024-12-13T14:08:53.814234344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:08:53.814308 env[1219]: time="2024-12-13T14:08:53.814293744Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:08:53.814379 env[1219]: time="2024-12-13T14:08:53.814362544Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:08:53.814467 env[1219]: time="2024-12-13T14:08:53.814452264Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:08:53.814552 env[1219]: time="2024-12-13T14:08:53.814528304Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:08:53.814646 env[1219]: time="2024-12-13T14:08:53.814630224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 14:08:53.814929 env[1219]: time="2024-12-13T14:08:53.814878624Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:08:53.815553 env[1219]: time="2024-12-13T14:08:53.815246704Z" level=info msg="Connect containerd service" Dec 13 14:08:53.815553 env[1219]: time="2024-12-13T14:08:53.815339984Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:08:53.817405 env[1219]: time="2024-12-13T14:08:53.817341384Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:08:53.817659 env[1219]: time="2024-12-13T14:08:53.817629264Z" level=info msg="Start subscribing containerd event" Dec 13 14:08:53.817689 env[1219]: time="2024-12-13T14:08:53.817672264Z" level=info msg="Start recovering state" Dec 13 14:08:53.817737 env[1219]: time="2024-12-13T14:08:53.817725704Z" level=info msg="Start event monitor" Dec 13 14:08:53.817761 env[1219]: time="2024-12-13T14:08:53.817747744Z" level=info msg="Start snapshots syncer" Dec 13 14:08:53.817761 env[1219]: time="2024-12-13T14:08:53.817756864Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:08:53.817817 env[1219]: time="2024-12-13T14:08:53.817764984Z" level=info msg="Start streaming server" Dec 13 14:08:53.817950 env[1219]: time="2024-12-13T14:08:53.817932104Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 13 14:08:53.817984 env[1219]: time="2024-12-13T14:08:53.817974184Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:08:53.818079 systemd[1]: Started containerd.service. Dec 13 14:08:53.818907 env[1219]: time="2024-12-13T14:08:53.818878664Z" level=info msg="containerd successfully booted in 0.053543s" Dec 13 14:08:54.068190 tar[1217]: linux-arm64/LICENSE Dec 13 14:08:54.068190 tar[1217]: linux-arm64/README.md Dec 13 14:08:54.073016 systemd[1]: Finished prepare-helm.service. Dec 13 14:08:54.286522 systemd-networkd[1047]: eth0: Gained IPv6LL Dec 13 14:08:54.288118 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:08:54.289129 systemd[1]: Reached target network-online.target. Dec 13 14:08:54.291358 systemd[1]: Starting kubelet.service... Dec 13 14:08:54.795713 systemd[1]: Started kubelet.service. Dec 13 14:08:55.260845 kubelet[1265]: E1213 14:08:55.260748 1265 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:08:55.262782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:08:55.262905 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:08:56.873037 sshd_keygen[1220]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:08:56.891060 systemd[1]: Finished sshd-keygen.service. Dec 13 14:08:56.893129 systemd[1]: Starting issuegen.service... Dec 13 14:08:56.897509 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:08:56.897650 systemd[1]: Finished issuegen.service. Dec 13 14:08:56.899568 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:08:56.905307 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:08:56.907202 systemd[1]: Started getty@tty1.service. Dec 13 14:08:56.908964 systemd[1]: Started serial-getty@ttyAMA0.service. Dec 13 14:08:56.909809 systemd[1]: Reached target getty.target. Dec 13 14:08:56.910467 systemd[1]: Reached target multi-user.target. Dec 13 14:08:56.912201 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:08:56.918296 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:08:56.918457 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:08:56.919243 systemd[1]: Startup finished in 543ms (kernel) + 4.368s (initrd) + 6.519s (userspace) = 11.431s. Dec 13 14:08:58.871504 systemd[1]: Created slice system-sshd.slice. Dec 13 14:08:58.873327 systemd[1]: Started sshd@0-10.0.0.75:22-10.0.0.1:44582.service. Dec 13 14:08:58.948194 sshd[1288]: Accepted publickey for core from 10.0.0.1 port 44582 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:08:58.950301 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:08:58.961442 systemd[1]: Created slice user-500.slice. Dec 13 14:08:58.962742 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:08:58.964447 systemd-logind[1208]: New session 1 of user core. Dec 13 14:08:58.970382 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:08:58.971761 systemd[1]: Starting user@500.service... 
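[Note] The "Startup finished" line above bears a quick sanity check: systemd rounds each boot phase for display, so the printed parts can differ from the printed total by a millisecond, as they do here:

    # Phases as displayed: 543ms (kernel) + 4.368s (initrd) + 6.519s (userspace)
    kernel, initrd, userspace = 0.543, 4.368, 6.519
    print(f"sum of displayed parts: {kernel + initrd + userspace:.3f}s")  # 11.430s
    # journal total: 11.431s; the 1 ms gap is display rounding, not an error.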
Dec 13 14:08:58.974304 (systemd)[1291]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:08:59.031898 systemd[1291]: Queued start job for default target default.target. Dec 13 14:08:59.032334 systemd[1291]: Reached target paths.target. Dec 13 14:08:59.032354 systemd[1291]: Reached target sockets.target. Dec 13 14:08:59.032364 systemd[1291]: Reached target timers.target. Dec 13 14:08:59.032375 systemd[1291]: Reached target basic.target. Dec 13 14:08:59.032450 systemd[1291]: Reached target default.target. Dec 13 14:08:59.032473 systemd[1291]: Startup finished in 52ms. Dec 13 14:08:59.032685 systemd[1]: Started user@500.service. Dec 13 14:08:59.033594 systemd[1]: Started session-1.scope. Dec 13 14:08:59.083265 systemd[1]: Started sshd@1-10.0.0.75:22-10.0.0.1:44590.service. Dec 13 14:08:59.115642 sshd[1300]: Accepted publickey for core from 10.0.0.1 port 44590 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:08:59.117166 sshd[1300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:08:59.120801 systemd-logind[1208]: New session 2 of user core. Dec 13 14:08:59.121975 systemd[1]: Started session-2.scope. Dec 13 14:08:59.175435 sshd[1300]: pam_unix(sshd:session): session closed for user core Dec 13 14:08:59.179169 systemd[1]: Started sshd@2-10.0.0.75:22-10.0.0.1:44600.service. Dec 13 14:08:59.179663 systemd[1]: sshd@1-10.0.0.75:22-10.0.0.1:44590.service: Deactivated successfully. Dec 13 14:08:59.180289 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:08:59.180773 systemd-logind[1208]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:08:59.181771 systemd-logind[1208]: Removed session 2. Dec 13 14:08:59.212895 sshd[1305]: Accepted publickey for core from 10.0.0.1 port 44600 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:08:59.214082 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:08:59.217593 systemd-logind[1208]: New session 3 of user core. Dec 13 14:08:59.218344 systemd[1]: Started session-3.scope. Dec 13 14:08:59.267694 sshd[1305]: pam_unix(sshd:session): session closed for user core Dec 13 14:08:59.270280 systemd[1]: sshd@2-10.0.0.75:22-10.0.0.1:44600.service: Deactivated successfully. Dec 13 14:08:59.270868 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:08:59.271431 systemd-logind[1208]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:08:59.272446 systemd[1]: Started sshd@3-10.0.0.75:22-10.0.0.1:44602.service. Dec 13 14:08:59.273122 systemd-logind[1208]: Removed session 3. Dec 13 14:08:59.304572 sshd[1312]: Accepted publickey for core from 10.0.0.1 port 44602 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:08:59.305839 sshd[1312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:08:59.309208 systemd-logind[1208]: New session 4 of user core. Dec 13 14:08:59.310003 systemd[1]: Started session-4.scope. Dec 13 14:08:59.363264 sshd[1312]: pam_unix(sshd:session): session closed for user core Dec 13 14:08:59.367213 systemd[1]: sshd@3-10.0.0.75:22-10.0.0.1:44602.service: Deactivated successfully. Dec 13 14:08:59.367818 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:08:59.368353 systemd-logind[1208]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:08:59.369462 systemd[1]: Started sshd@4-10.0.0.75:22-10.0.0.1:44604.service. Dec 13 14:08:59.370185 systemd-logind[1208]: Removed session 4. 
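[Note] Sessions 2 through 4 above each live for well under a tenth of a second, the signature of scripted SSH activity rather than interactive logins. Computing a lifetime from the journal stamps (the year is an assumption, since these stamps omit it):

    from datetime import datetime

    def ts(stamp: str) -> datetime:
        # Journal timestamps above carry no year; assume 2024, the boot year.
        return datetime.strptime(f"2024 {stamp}", "%Y %b %d %H:%M:%S.%f")

    opened = ts("Dec 13 14:08:59.117166")  # session 2 opened
    closed = ts("Dec 13 14:08:59.175435")  # session 2 closed
    print(f"session 2 lived {(closed - opened).total_seconds():.3f}s")  # 0.058s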
Dec 13 14:08:59.402038 sshd[1318]: Accepted publickey for core from 10.0.0.1 port 44604 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:08:59.403489 sshd[1318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:08:59.406530 systemd-logind[1208]: New session 5 of user core. Dec 13 14:08:59.407313 systemd[1]: Started session-5.scope. Dec 13 14:08:59.468886 sudo[1321]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:08:59.469114 sudo[1321]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:08:59.527740 systemd[1]: Starting docker.service... Dec 13 14:08:59.621269 env[1334]: time="2024-12-13T14:08:59.621214464Z" level=info msg="Starting up" Dec 13 14:08:59.622712 env[1334]: time="2024-12-13T14:08:59.622686584Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:08:59.622712 env[1334]: time="2024-12-13T14:08:59.622711184Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:08:59.622780 env[1334]: time="2024-12-13T14:08:59.622732424Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:08:59.622780 env[1334]: time="2024-12-13T14:08:59.622742624Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:08:59.624762 env[1334]: time="2024-12-13T14:08:59.624735504Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:08:59.624762 env[1334]: time="2024-12-13T14:08:59.624758224Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:08:59.624869 env[1334]: time="2024-12-13T14:08:59.624773024Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:08:59.624869 env[1334]: time="2024-12-13T14:08:59.624782424Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:08:59.740682 env[1334]: time="2024-12-13T14:08:59.740591824Z" level=info msg="Loading containers: start." Dec 13 14:08:59.864412 kernel: Initializing XFRM netlink socket Dec 13 14:08:59.886906 env[1334]: time="2024-12-13T14:08:59.886863464Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 14:08:59.939113 systemd-networkd[1047]: docker0: Link UP Dec 13 14:08:59.955532 env[1334]: time="2024-12-13T14:08:59.955485904Z" level=info msg="Loading containers: done." Dec 13 14:08:59.970427 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2511874003-merged.mount: Deactivated successfully. Dec 13 14:08:59.973839 env[1334]: time="2024-12-13T14:08:59.973792904Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:08:59.973987 env[1334]: time="2024-12-13T14:08:59.973965144Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:08:59.974074 env[1334]: time="2024-12-13T14:08:59.974060904Z" level=info msg="Daemon has completed initialization" Dec 13 14:08:59.988621 systemd[1]: Started docker.service. 
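[Note] dockerd warns above that docker0 takes the default 172.17.0.0/16 pool unless --bip overrides it. A quick membership check with the standard library, using the node's own DHCP address from earlier in this log as the counterexample:

    import ipaddress

    bridge = ipaddress.ip_network("172.17.0.0/16")  # default docker0 pool
    for addr in ("172.17.0.2", "10.0.0.75"):
        print(addr, "in docker0 pool:", ipaddress.ip_address(addr) in bridge)
    # 172.17.0.2 in docker0 pool: True
    # 10.0.0.75 in docker0 pool: False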
Dec 13 14:08:59.995672 env[1334]: time="2024-12-13T14:08:59.995339424Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:09:01.222792 env[1219]: time="2024-12-13T14:09:01.222558544Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 14:09:01.924099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount761729386.mount: Deactivated successfully. Dec 13 14:09:03.641633 env[1219]: time="2024-12-13T14:09:03.641588624Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:03.643142 env[1219]: time="2024-12-13T14:09:03.643106224Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:03.644783 env[1219]: time="2024-12-13T14:09:03.644741264Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:03.646379 env[1219]: time="2024-12-13T14:09:03.646352144Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:03.647339 env[1219]: time="2024-12-13T14:09:03.647311104Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\"" Dec 13 14:09:03.655992 env[1219]: time="2024-12-13T14:09:03.655966424Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 14:09:05.433669 env[1219]: time="2024-12-13T14:09:05.433614824Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:05.435637 env[1219]: time="2024-12-13T14:09:05.435605144Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:05.437909 env[1219]: time="2024-12-13T14:09:05.437882264Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:05.440429 env[1219]: time="2024-12-13T14:09:05.440403064Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:05.441310 env[1219]: time="2024-12-13T14:09:05.441284784Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\"" Dec 13 14:09:05.451680 env[1219]: time="2024-12-13T14:09:05.451653544Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 14:09:05.513691 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:09:05.513858 systemd[1]: Stopped kubelet.service. 
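[Note] The pull above resolves a tag to a content-addressed image ID. A naive splitter for references of the two shapes seen in these logs ('repo:tag' and 'repo@sha256:...'); it deliberately ignores registries with port numbers, which would need the full reference grammar:

    def split_ref(ref: str):
        """Split the two reference shapes seen in the containerd lines above."""
        name, _, digest = ref.partition("@")
        repo, _, tag = name.partition(":")
        return {"repo": repo, "tag": tag or None, "digest": digest or None}

    print(split_ref("registry.k8s.io/kube-apiserver:v1.30.8"))
    # {'repo': 'registry.k8s.io/kube-apiserver', 'tag': 'v1.30.8', 'digest': None}
    print(split_ref("registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6a"
                    "bd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb"))
    # {'repo': 'registry.k8s.io/kube-apiserver', 'tag': None, 'digest': 'sha256:...'}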
Dec 13 14:09:05.515247 systemd[1]: Starting kubelet.service... Dec 13 14:09:05.597231 systemd[1]: Started kubelet.service. Dec 13 14:09:05.687719 kubelet[1489]: E1213 14:09:05.687178 1489 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:09:05.690378 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:09:05.690528 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:09:06.740694 env[1219]: time="2024-12-13T14:09:06.740650264Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:06.742067 env[1219]: time="2024-12-13T14:09:06.742036104Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:06.743790 env[1219]: time="2024-12-13T14:09:06.743750424Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:06.745660 env[1219]: time="2024-12-13T14:09:06.745631864Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:06.746493 env[1219]: time="2024-12-13T14:09:06.746456424Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\"" Dec 13 14:09:06.757459 env[1219]: time="2024-12-13T14:09:06.757425664Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 14:09:07.830800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2790708261.mount: Deactivated successfully. 
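[Note] The kubelet exit is fully explained by its error text: /var/lib/kubelet/config.yaml does not exist yet. On a node that has not been joined to a cluster this is the expected state (kubeadm writes that file during init/join), and the unit will keep crash-looping until something creates it. A trivial check mirroring what the kubelet does at startup:

    from pathlib import Path

    cfg = Path("/var/lib/kubelet/config.yaml")
    if not cfg.is_file():
        print(f"{cfg}: missing; kubelet will keep failing until kubeadm "
              "init/join (or other provisioning) writes it")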
Dec 13 14:09:08.384235 env[1219]: time="2024-12-13T14:09:08.384185584Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:08.385730 env[1219]: time="2024-12-13T14:09:08.385697544Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:08.386994 env[1219]: time="2024-12-13T14:09:08.386959344Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:08.388021 env[1219]: time="2024-12-13T14:09:08.387985424Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:08.388371 env[1219]: time="2024-12-13T14:09:08.388331384Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Dec 13 14:09:08.397191 env[1219]: time="2024-12-13T14:09:08.397164544Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:09:08.911118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2810163141.mount: Deactivated successfully. Dec 13 14:09:09.692035 env[1219]: time="2024-12-13T14:09:09.691983744Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:09.693546 env[1219]: time="2024-12-13T14:09:09.693510424Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:09.695751 env[1219]: time="2024-12-13T14:09:09.695723624Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:09.697223 env[1219]: time="2024-12-13T14:09:09.697198304Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:09.698851 env[1219]: time="2024-12-13T14:09:09.698805264Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 14:09:09.712709 env[1219]: time="2024-12-13T14:09:09.712672024Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 14:09:10.137856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount361894628.mount: Deactivated successfully. 
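[Note] Mount-unit names like var-lib-containerd-tmpmounts-containerd\x2dmount2810163141.mount use systemd's path escaping: '/' becomes '-' and a literal '-' inside a path component becomes \x2d. A minimal decoder for just the \xXX escapes; the full rules have more cases, and systemd-escape -u is the real tool:

    import re

    def unescape_xx(name: str) -> str:
        """Decode systemd \\xXX escapes, e.g. \\x2d back to '-'."""
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), name)

    print(unescape_xx(r"containerd\x2dmount2810163141"))
    # containerd-mount2810163141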
Dec 13 14:09:10.141598 env[1219]: time="2024-12-13T14:09:10.141563024Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:10.142826 env[1219]: time="2024-12-13T14:09:10.142785264Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:10.144941 env[1219]: time="2024-12-13T14:09:10.144904384Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:10.146310 env[1219]: time="2024-12-13T14:09:10.146272824Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:10.146884 env[1219]: time="2024-12-13T14:09:10.146856864Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 14:09:10.155537 env[1219]: time="2024-12-13T14:09:10.155507744Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 14:09:10.708938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount524085193.mount: Deactivated successfully. Dec 13 14:09:13.060248 env[1219]: time="2024-12-13T14:09:13.060200824Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:13.061734 env[1219]: time="2024-12-13T14:09:13.061706104Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:13.064329 env[1219]: time="2024-12-13T14:09:13.064301024Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:13.066097 env[1219]: time="2024-12-13T14:09:13.066065024Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:13.067082 env[1219]: time="2024-12-13T14:09:13.067036064Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Dec 13 14:09:15.941328 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:09:15.941537 systemd[1]: Stopped kubelet.service. Dec 13 14:09:15.942869 systemd[1]: Starting kubelet.service... Dec 13 14:09:16.021632 systemd[1]: Started kubelet.service. 
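
The restart above lands almost exactly ten seconds after the previous kubelet exit (14:09:05 → 14:09:15, counter now 2), consistent with a fixed RestartSec-style restart policy on the unit. A small sketch recovering that delay from the two journal timestamps — the timestamps are copied from the log; the year and the parse layout are assumptions, since the journal's short form omits the year:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout for "Dec 13 14:09:05.690528 2024" (year appended by hand).
	t, err := time.Parse("Jan 2 15:04:05.000000 2006", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Previous kubelet failure and the scheduled restart, from the log above.
	t1 := mustParse("Dec 13 14:09:05.690528 2024")
	t2 := mustParse("Dec 13 14:09:15.941328 2024")
	fmt.Println("restart delay:", t2.Sub(t1).Round(time.Second)) // ~10s
}
```
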
Dec 13 14:09:16.058354 kubelet[1606]: E1213 14:09:16.058304 1606 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:09:16.060444 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:09:16.060572 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:09:17.866316 systemd[1]: Stopped kubelet.service. Dec 13 14:09:17.868256 systemd[1]: Starting kubelet.service... Dec 13 14:09:17.883624 systemd[1]: Reloading. Dec 13 14:09:17.941367 /usr/lib/systemd/system-generators/torcx-generator[1640]: time="2024-12-13T14:09:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:09:17.943516 /usr/lib/systemd/system-generators/torcx-generator[1640]: time="2024-12-13T14:09:17Z" level=info msg="torcx already run" Dec 13 14:09:18.074601 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:09:18.074619 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:09:18.090017 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:09:18.153420 systemd[1]: Started kubelet.service. Dec 13 14:09:18.155250 systemd[1]: Stopping kubelet.service... Dec 13 14:09:18.155733 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:09:18.155903 systemd[1]: Stopped kubelet.service. Dec 13 14:09:18.157671 systemd[1]: Starting kubelet.service... Dec 13 14:09:18.240913 systemd[1]: Started kubelet.service. Dec 13 14:09:18.287988 kubelet[1684]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:09:18.288274 kubelet[1684]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:09:18.288319 kubelet[1684]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
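
During the reload, systemd flags the legacy cgroup-v1 directives in locksmithd.service: CPUShares= should become CPUWeight= and MemoryLimit= should become MemoryMax=. A sketch of the shares-to-weight conversion systemd applies for compatibility — the ranges (shares 2..262144 default 1024, weight 1..10000 default 100) are systemd's documented ones; treat the exact formula as an approximation of its internal mapping:

```go
package main

import "fmt"

// cpuSharesToWeight maps a legacy CPUShares= value onto CPUWeight= so that
// the defaults line up (1024 -> 100), clamped to the valid weight range.
// Approximates systemd's compat mapping.
func cpuSharesToWeight(shares uint64) uint64 {
	w := shares * 100 / 1024
	if w < 1 {
		w = 1
	}
	if w > 10000 {
		w = 10000
	}
	return w
}

func main() {
	for _, s := range []uint64{2, 512, 1024, 4096} {
		fmt.Printf("CPUShares=%d -> CPUWeight=%d\n", s, cpuSharesToWeight(s))
	}
}
```
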
Dec 13 14:09:18.288535 kubelet[1684]: I1213 14:09:18.288506 1684 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:09:19.550451 kubelet[1684]: I1213 14:09:19.550406 1684 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 14:09:19.550451 kubelet[1684]: I1213 14:09:19.550438 1684 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:09:19.551618 kubelet[1684]: I1213 14:09:19.551596 1684 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 14:09:19.576728 kubelet[1684]: E1213 14:09:19.576692 1684 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.75:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.75:6443: connect: connection refused Dec 13 14:09:19.576728 kubelet[1684]: I1213 14:09:19.576703 1684 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:09:19.588461 kubelet[1684]: I1213 14:09:19.588426 1684 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:09:19.589635 kubelet[1684]: I1213 14:09:19.589593 1684 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:09:19.589786 kubelet[1684]: I1213 14:09:19.589632 1684 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:09:19.589861 kubelet[1684]: I1213 14:09:19.589854 1684 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:09:19.589893 kubelet[1684]: I1213 14:09:19.589863 1684 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:09:19.590103 kubelet[1684]: I1213 14:09:19.590091 1684 state_mem.go:36] "Initialized new in-memory state store" Dec 13 
14:09:19.592703 kubelet[1684]: I1213 14:09:19.592687 1684 kubelet.go:400] "Attempting to sync node with API server" Dec 13 14:09:19.592757 kubelet[1684]: I1213 14:09:19.592709 1684 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:09:19.592805 kubelet[1684]: I1213 14:09:19.592792 1684 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:09:19.592959 kubelet[1684]: I1213 14:09:19.592867 1684 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:09:19.593739 kubelet[1684]: W1213 14:09:19.593569 1684 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Dec 13 14:09:19.593739 kubelet[1684]: E1213 14:09:19.593623 1684 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Dec 13 14:09:19.593968 kubelet[1684]: W1213 14:09:19.593934 1684 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.75:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Dec 13 14:09:19.594056 kubelet[1684]: E1213 14:09:19.594045 1684 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.75:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Dec 13 14:09:19.594115 kubelet[1684]: I1213 14:09:19.593991 1684 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:09:19.594462 kubelet[1684]: I1213 14:09:19.594448 1684 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:09:19.594561 kubelet[1684]: W1213 14:09:19.594550 1684 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
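
Every reflector list/watch against https://10.0.0.75:6443 above fails with "connection refused" for the same reason: the kube-apiserver this kubelet is about to start as a static pod is not listening yet, and client-go simply keeps retrying until it is. A sketch that probes the same endpoint until it accepts TCP connections — the address is from the log; the timeout and retry interval are arbitrary choices:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const addr = "10.0.0.75:6443" // apiserver address from the log
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver is accepting connections")
			return
		}
		fmt.Println("still refused, retrying:", err)
		// client-go uses jittered backoff; a fixed sleep keeps the sketch short.
		time.Sleep(time.Second)
	}
}
```
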
Dec 13 14:09:19.595287 kubelet[1684]: I1213 14:09:19.595272 1684 server.go:1264] "Started kubelet" Dec 13 14:09:19.595530 kubelet[1684]: I1213 14:09:19.595501 1684 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:09:19.595923 kubelet[1684]: I1213 14:09:19.595880 1684 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:09:19.596183 kubelet[1684]: I1213 14:09:19.596164 1684 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:09:19.612494 kubelet[1684]: I1213 14:09:19.612468 1684 server.go:455] "Adding debug handlers to kubelet server" Dec 13 14:09:19.613355 kubelet[1684]: E1213 14:09:19.613137 1684 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.75:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.75:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810c1d0ec0c9920 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 14:09:19.595247904 +0000 UTC m=+1.349755121,LastTimestamp:2024-12-13 14:09:19.595247904 +0000 UTC m=+1.349755121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 14:09:19.617613 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 14:09:19.617681 kubelet[1684]: I1213 14:09:19.615688 1684 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:09:19.619662 kubelet[1684]: E1213 14:09:19.619641 1684 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 14:09:19.621878 kubelet[1684]: I1213 14:09:19.620532 1684 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:09:19.624268 kubelet[1684]: E1213 14:09:19.621670 1684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="200ms" Dec 13 14:09:19.624268 kubelet[1684]: I1213 14:09:19.620556 1684 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 14:09:19.624268 kubelet[1684]: E1213 14:09:19.623734 1684 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:09:19.624409 kubelet[1684]: I1213 14:09:19.623925 1684 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:09:19.624409 kubelet[1684]: I1213 14:09:19.624288 1684 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:09:19.624409 kubelet[1684]: W1213 14:09:19.623993 1684 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Dec 13 14:09:19.624409 kubelet[1684]: I1213 14:09:19.624354 1684 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:09:19.624409 kubelet[1684]: E1213 14:09:19.624365 1684 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Dec 13 14:09:19.624758 kubelet[1684]: I1213 14:09:19.624740 1684 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:09:19.632146 kubelet[1684]: I1213 14:09:19.632121 1684 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:09:19.633097 kubelet[1684]: I1213 14:09:19.633081 1684 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 14:09:19.633342 kubelet[1684]: I1213 14:09:19.633331 1684 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:09:19.633432 kubelet[1684]: I1213 14:09:19.633421 1684 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 14:09:19.633542 kubelet[1684]: E1213 14:09:19.633519 1684 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:09:19.636870 kubelet[1684]: W1213 14:09:19.636827 1684 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Dec 13 14:09:19.636870 kubelet[1684]: E1213 14:09:19.636864 1684 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Dec 13 14:09:19.637249 kubelet[1684]: I1213 14:09:19.637224 1684 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:09:19.637249 kubelet[1684]: I1213 14:09:19.637238 1684 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:09:19.637320 kubelet[1684]: I1213 14:09:19.637253 1684 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:09:19.718558 kubelet[1684]: I1213 14:09:19.718528 1684 policy_none.go:49] "None policy: Start" Dec 13 14:09:19.719335 kubelet[1684]: I1213 14:09:19.719310 1684 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:09:19.719491 kubelet[1684]: I1213 14:09:19.719478 1684 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:09:19.726313 kubelet[1684]: I1213 
14:09:19.724251 1684 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:09:19.728375 systemd[1]: Created slice kubepods.slice. Dec 13 14:09:19.728896 kubelet[1684]: E1213 14:09:19.728863 1684 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost" Dec 13 14:09:19.734636 kubelet[1684]: E1213 14:09:19.734609 1684 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:09:19.736805 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:09:19.739050 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 14:09:19.749363 kubelet[1684]: I1213 14:09:19.749344 1684 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:09:19.749684 kubelet[1684]: I1213 14:09:19.749649 1684 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:09:19.749952 kubelet[1684]: I1213 14:09:19.749939 1684 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:09:19.750823 kubelet[1684]: E1213 14:09:19.750805 1684 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 14:09:19.826432 kubelet[1684]: E1213 14:09:19.825627 1684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="400ms" Dec 13 14:09:19.930102 kubelet[1684]: I1213 14:09:19.930081 1684 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:09:19.930491 kubelet[1684]: E1213 14:09:19.930456 1684 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost" Dec 13 14:09:19.935552 kubelet[1684]: I1213 14:09:19.935523 1684 topology_manager.go:215] "Topology Admit Handler" podUID="0aa1603b315819214c160d2971efd237" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 14:09:19.936434 kubelet[1684]: I1213 14:09:19.936387 1684 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 14:09:19.937613 kubelet[1684]: I1213 14:09:19.937583 1684 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 14:09:19.942257 systemd[1]: Created slice kubepods-burstable-pod0aa1603b315819214c160d2971efd237.slice. Dec 13 14:09:19.964646 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice. Dec 13 14:09:19.981134 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice. 
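
With the systemd cgroup driver, each admitted pod gets its own slice nested under kubepods.slice by QoS class, which is exactly what the "Created slice kubepods-burstable-pod<uid>.slice" entries above show for the three burstable static pods. A sketch of that naming scheme — the nesting matches the slices in the log; the function itself is illustrative:

```go
package main

import "fmt"

// podSlice reproduces the slice names systemd creates above:
// kubepods.slice -> kubepods-burstable.slice -> kubepods-burstable-pod<uid>.slice.
// Guaranteed pods sit directly under kubepods.slice (no QoS sub-slice).
func podSlice(qos, uid string) string {
	if qos == "guaranteed" {
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, uid)
}

func main() {
	// UID of kube-apiserver-localhost, taken from the log above.
	fmt.Println(podSlice("burstable", "0aa1603b315819214c160d2971efd237"))
}
```
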
Dec 13 14:09:20.127304 kubelet[1684]: I1213 14:09:20.126909 1684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0aa1603b315819214c160d2971efd237-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0aa1603b315819214c160d2971efd237\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:09:20.127603 kubelet[1684]: I1213 14:09:20.127578 1684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:09:20.127710 kubelet[1684]: I1213 14:09:20.127694 1684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:09:20.127818 kubelet[1684]: I1213 14:09:20.127804 1684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:09:20.127900 kubelet[1684]: I1213 14:09:20.127885 1684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:09:20.127966 kubelet[1684]: I1213 14:09:20.127953 1684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0aa1603b315819214c160d2971efd237-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0aa1603b315819214c160d2971efd237\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:09:20.128050 kubelet[1684]: I1213 14:09:20.128036 1684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0aa1603b315819214c160d2971efd237-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0aa1603b315819214c160d2971efd237\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:09:20.128132 kubelet[1684]: I1213 14:09:20.128118 1684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:09:20.128208 kubelet[1684]: I1213 14:09:20.128194 1684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " 
pod="kube-system/kube-scheduler-localhost" Dec 13 14:09:20.227120 kubelet[1684]: E1213 14:09:20.227077 1684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="800ms" Dec 13 14:09:20.261929 kubelet[1684]: E1213 14:09:20.261878 1684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:20.262569 env[1219]: time="2024-12-13T14:09:20.262522664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0aa1603b315819214c160d2971efd237,Namespace:kube-system,Attempt:0,}" Dec 13 14:09:20.278892 kubelet[1684]: E1213 14:09:20.278869 1684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:20.279264 env[1219]: time="2024-12-13T14:09:20.279214624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}" Dec 13 14:09:20.283622 kubelet[1684]: E1213 14:09:20.283601 1684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:20.284040 env[1219]: time="2024-12-13T14:09:20.283984824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}" Dec 13 14:09:20.332353 kubelet[1684]: I1213 14:09:20.332328 1684 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:09:20.332724 kubelet[1684]: E1213 14:09:20.332702 1684 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost" Dec 13 14:09:20.631366 kubelet[1684]: W1213 14:09:20.631323 1684 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Dec 13 14:09:20.631366 kubelet[1684]: E1213 14:09:20.631366 1684 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Dec 13 14:09:20.631680 kubelet[1684]: W1213 14:09:20.631346 1684 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.75:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Dec 13 14:09:20.631680 kubelet[1684]: E1213 14:09:20.631413 1684 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.75:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Dec 13 14:09:20.716119 kubelet[1684]: W1213 14:09:20.716055 1684 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Dec 13 14:09:20.716119 kubelet[1684]: E1213 14:09:20.716118 1684 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Dec 13 14:09:20.766987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2683011633.mount: Deactivated successfully. Dec 13 14:09:20.771669 env[1219]: time="2024-12-13T14:09:20.771628304Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:20.772918 env[1219]: time="2024-12-13T14:09:20.772872984Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:20.774455 env[1219]: time="2024-12-13T14:09:20.774384824Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:20.775155 env[1219]: time="2024-12-13T14:09:20.775123824Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:20.776848 env[1219]: time="2024-12-13T14:09:20.776806824Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:20.778374 env[1219]: time="2024-12-13T14:09:20.778346744Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:20.780826 env[1219]: time="2024-12-13T14:09:20.780797104Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:20.783767 env[1219]: time="2024-12-13T14:09:20.783740024Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:20.785146 env[1219]: time="2024-12-13T14:09:20.785117704Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:20.785870 env[1219]: time="2024-12-13T14:09:20.785846024Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:20.786662 env[1219]: time="2024-12-13T14:09:20.786636624Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:20.787794 env[1219]: time="2024-12-13T14:09:20.787756664Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:20.815250 env[1219]: time="2024-12-13T14:09:20.814722024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:09:20.815250 env[1219]: time="2024-12-13T14:09:20.814759704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:09:20.815250 env[1219]: time="2024-12-13T14:09:20.814770464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:09:20.815587 env[1219]: time="2024-12-13T14:09:20.815496424Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3d973e14c7c95f2067c18ca2ad348cdab81e0239601cba249cdbabb8832d449 pid=1735 runtime=io.containerd.runc.v2 Dec 13 14:09:20.816018 env[1219]: time="2024-12-13T14:09:20.815969944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:09:20.816076 env[1219]: time="2024-12-13T14:09:20.816037184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:09:20.816076 env[1219]: time="2024-12-13T14:09:20.816063224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:09:20.816276 env[1219]: time="2024-12-13T14:09:20.816214504Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7be81686596472df9d33ad6db9de52b31fb79e39fed0d0b22dc4d37632831fe pid=1740 runtime=io.containerd.runc.v2 Dec 13 14:09:20.817926 env[1219]: time="2024-12-13T14:09:20.817778624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:09:20.817926 env[1219]: time="2024-12-13T14:09:20.817809944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:09:20.817926 env[1219]: time="2024-12-13T14:09:20.817819744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:09:20.818059 env[1219]: time="2024-12-13T14:09:20.817977344Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/634a15694b3b15ae525a08400a9ed7d5041be8ce4a9c06ba99cb3797454c3ca2 pid=1739 runtime=io.containerd.runc.v2 Dec 13 14:09:20.829199 systemd[1]: Started cri-containerd-a3d973e14c7c95f2067c18ca2ad348cdab81e0239601cba249cdbabb8832d449.scope. Dec 13 14:09:20.832827 systemd[1]: Started cri-containerd-634a15694b3b15ae525a08400a9ed7d5041be8ce4a9c06ba99cb3797454c3ca2.scope. Dec 13 14:09:20.839950 systemd[1]: Started cri-containerd-f7be81686596472df9d33ad6db9de52b31fb79e39fed0d0b22dc4d37632831fe.scope. 
Dec 13 14:09:20.898627 env[1219]: time="2024-12-13T14:09:20.897406824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3d973e14c7c95f2067c18ca2ad348cdab81e0239601cba249cdbabb8832d449\"" Dec 13 14:09:20.898783 kubelet[1684]: E1213 14:09:20.898739 1684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:20.899489 env[1219]: time="2024-12-13T14:09:20.899449144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0aa1603b315819214c160d2971efd237,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7be81686596472df9d33ad6db9de52b31fb79e39fed0d0b22dc4d37632831fe\"" Dec 13 14:09:20.900098 kubelet[1684]: E1213 14:09:20.900081 1684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:20.902561 env[1219]: time="2024-12-13T14:09:20.902528424Z" level=info msg="CreateContainer within sandbox \"a3d973e14c7c95f2067c18ca2ad348cdab81e0239601cba249cdbabb8832d449\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:09:20.902636 env[1219]: time="2024-12-13T14:09:20.902586584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"634a15694b3b15ae525a08400a9ed7d5041be8ce4a9c06ba99cb3797454c3ca2\"" Dec 13 14:09:20.903030 kubelet[1684]: E1213 14:09:20.903011 1684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:20.903160 env[1219]: time="2024-12-13T14:09:20.903131224Z" level=info msg="CreateContainer within sandbox \"f7be81686596472df9d33ad6db9de52b31fb79e39fed0d0b22dc4d37632831fe\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:09:20.906081 env[1219]: time="2024-12-13T14:09:20.906034304Z" level=info msg="CreateContainer within sandbox \"634a15694b3b15ae525a08400a9ed7d5041be8ce4a9c06ba99cb3797454c3ca2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:09:20.924195 env[1219]: time="2024-12-13T14:09:20.924147104Z" level=info msg="CreateContainer within sandbox \"634a15694b3b15ae525a08400a9ed7d5041be8ce4a9c06ba99cb3797454c3ca2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d68694073d2da425a88da09508cbd77b3b66f28b6057068dbdadbbe45326c8ef\"" Dec 13 14:09:20.924873 env[1219]: time="2024-12-13T14:09:20.924830344Z" level=info msg="StartContainer for \"d68694073d2da425a88da09508cbd77b3b66f28b6057068dbdadbbe45326c8ef\"" Dec 13 14:09:20.924873 env[1219]: time="2024-12-13T14:09:20.924802864Z" level=info msg="CreateContainer within sandbox \"f7be81686596472df9d33ad6db9de52b31fb79e39fed0d0b22dc4d37632831fe\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ca58eef20f73be57641b7b24dbf13ac0e9b2e2e9049dad1676ba02894d2c77fc\"" Dec 13 14:09:20.925240 env[1219]: time="2024-12-13T14:09:20.925215344Z" level=info msg="StartContainer for \"ca58eef20f73be57641b7b24dbf13ac0e9b2e2e9049dad1676ba02894d2c77fc\"" Dec 13 14:09:20.925963 env[1219]: time="2024-12-13T14:09:20.925931664Z" level=info 
msg="CreateContainer within sandbox \"a3d973e14c7c95f2067c18ca2ad348cdab81e0239601cba249cdbabb8832d449\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"066995844218a30b01d019c7a9c40ba8da9180433b49c0525500039d092193cc\"" Dec 13 14:09:20.926300 env[1219]: time="2024-12-13T14:09:20.926261824Z" level=info msg="StartContainer for \"066995844218a30b01d019c7a9c40ba8da9180433b49c0525500039d092193cc\"" Dec 13 14:09:20.941281 systemd[1]: Started cri-containerd-ca58eef20f73be57641b7b24dbf13ac0e9b2e2e9049dad1676ba02894d2c77fc.scope. Dec 13 14:09:20.946911 systemd[1]: Started cri-containerd-d68694073d2da425a88da09508cbd77b3b66f28b6057068dbdadbbe45326c8ef.scope. Dec 13 14:09:20.959749 systemd[1]: Started cri-containerd-066995844218a30b01d019c7a9c40ba8da9180433b49c0525500039d092193cc.scope. Dec 13 14:09:21.028959 env[1219]: time="2024-12-13T14:09:21.028904624Z" level=info msg="StartContainer for \"ca58eef20f73be57641b7b24dbf13ac0e9b2e2e9049dad1676ba02894d2c77fc\" returns successfully" Dec 13 14:09:21.031368 kubelet[1684]: E1213 14:09:21.031132 1684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="1.6s" Dec 13 14:09:21.076026 env[1219]: time="2024-12-13T14:09:21.075825144Z" level=info msg="StartContainer for \"066995844218a30b01d019c7a9c40ba8da9180433b49c0525500039d092193cc\" returns successfully" Dec 13 14:09:21.095612 env[1219]: time="2024-12-13T14:09:21.095502624Z" level=info msg="StartContainer for \"d68694073d2da425a88da09508cbd77b3b66f28b6057068dbdadbbe45326c8ef\" returns successfully" Dec 13 14:09:21.134536 kubelet[1684]: I1213 14:09:21.134232 1684 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:09:21.134670 kubelet[1684]: E1213 14:09:21.134583 1684 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost" Dec 13 14:09:21.183728 kubelet[1684]: W1213 14:09:21.183548 1684 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Dec 13 14:09:21.183728 kubelet[1684]: E1213 14:09:21.183614 1684 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Dec 13 14:09:21.642918 kubelet[1684]: E1213 14:09:21.642815 1684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:21.643737 kubelet[1684]: E1213 14:09:21.643715 1684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:21.645616 kubelet[1684]: E1213 14:09:21.645593 1684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:22.634816 kubelet[1684]: E1213 14:09:22.634777 1684 
nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 14:09:22.647193 kubelet[1684]: E1213 14:09:22.647157 1684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:22.705010 kubelet[1684]: E1213 14:09:22.704972 1684 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Dec 13 14:09:22.736521 kubelet[1684]: I1213 14:09:22.736489 1684 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:09:22.744890 kubelet[1684]: I1213 14:09:22.744857 1684 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 14:09:22.754050 kubelet[1684]: E1213 14:09:22.753998 1684 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 14:09:22.854802 kubelet[1684]: E1213 14:09:22.854749 1684 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 14:09:22.955794 kubelet[1684]: E1213 14:09:22.955679 1684 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 14:09:23.056479 kubelet[1684]: E1213 14:09:23.056427 1684 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 14:09:23.595465 kubelet[1684]: I1213 14:09:23.595418 1684 apiserver.go:52] "Watching apiserver" Dec 13 14:09:23.625355 kubelet[1684]: I1213 14:09:23.625308 1684 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 14:09:23.699630 kubelet[1684]: E1213 14:09:23.699600 1684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:24.194638 systemd[1]: Reloading. Dec 13 14:09:24.263995 /usr/lib/systemd/system-generators/torcx-generator[1987]: time="2024-12-13T14:09:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:09:24.264346 /usr/lib/systemd/system-generators/torcx-generator[1987]: time="2024-12-13T14:09:24Z" level=info msg="torcx already run" Dec 13 14:09:24.318045 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:09:24.318067 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:09:24.333908 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:09:24.412971 kubelet[1684]: I1213 14:09:24.412910 1684 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:09:24.413084 systemd[1]: Stopping kubelet.service... Dec 13 14:09:24.429807 systemd[1]: kubelet.service: Deactivated successfully. 
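
Unlike the first runs, the kubelet restarted after this reload (pid 2030 in the next entries) finds a bootstrapped credential and logs that it loads /var/lib/kubelet/pki/kubelet-client-current.pem. That file conventionally holds the client certificate and its key concatenated, so a single path serves as both halves of the pair. A sketch of loading such a combined PEM — the path is from the upcoming log line; the combined layout is the usual kubelet convention, assumed here:

```go
package main

import (
	"crypto/tls"
	"log"
)

func main() {
	// kubelet-client-current.pem carries both the client cert and its key,
	// so the same path is passed for both arguments.
	const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	cert, err := tls.LoadX509KeyPair(pem, pem)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("loaded client certificate chain with %d block(s)", len(cert.Certificate))
}
```
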
Dec 13 14:09:24.429989 systemd[1]: Stopped kubelet.service. Dec 13 14:09:24.430036 systemd[1]: kubelet.service: Consumed 1.675s CPU time. Dec 13 14:09:24.431523 systemd[1]: Starting kubelet.service... Dec 13 14:09:24.513142 systemd[1]: Started kubelet.service. Dec 13 14:09:24.551741 kubelet[2030]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:09:24.551741 kubelet[2030]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:09:24.551741 kubelet[2030]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:09:24.552087 kubelet[2030]: I1213 14:09:24.551798 2030 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:09:24.556588 kubelet[2030]: I1213 14:09:24.556555 2030 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 14:09:24.556588 kubelet[2030]: I1213 14:09:24.556580 2030 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:09:24.556746 kubelet[2030]: I1213 14:09:24.556730 2030 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 14:09:24.558003 kubelet[2030]: I1213 14:09:24.557982 2030 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 14:09:24.559054 kubelet[2030]: I1213 14:09:24.559029 2030 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:09:24.564608 kubelet[2030]: I1213 14:09:24.564574 2030 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:09:24.564772 kubelet[2030]: I1213 14:09:24.564736 2030 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:09:24.564931 kubelet[2030]: I1213 14:09:24.564765 2030 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:09:24.564931 kubelet[2030]: I1213 14:09:24.564929 2030 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:09:24.565022 kubelet[2030]: I1213 14:09:24.564937 2030 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:09:24.565022 kubelet[2030]: I1213 14:09:24.564969 2030 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:09:24.565071 kubelet[2030]: I1213 14:09:24.565049 2030 kubelet.go:400] "Attempting to sync node with API server" Dec 13 14:09:24.565071 kubelet[2030]: I1213 14:09:24.565061 2030 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:09:24.565116 kubelet[2030]: I1213 14:09:24.565083 2030 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:09:24.565116 kubelet[2030]: I1213 14:09:24.565095 2030 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:09:24.565900 kubelet[2030]: I1213 14:09:24.565884 2030 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:09:24.566170 kubelet[2030]: I1213 14:09:24.566155 2030 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:09:24.570809 kubelet[2030]: I1213 14:09:24.567913 2030 server.go:1264] "Started kubelet" Dec 13 14:09:24.570809 kubelet[2030]: I1213 14:09:24.570591 2030 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:09:24.572633 kubelet[2030]: I1213 14:09:24.572615 2030 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 
14:09:24.572792 kubelet[2030]: I1213 14:09:24.572761 2030 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:09:24.574019 kubelet[2030]: I1213 14:09:24.574002 2030 server.go:455] "Adding debug handlers to kubelet server" Dec 13 14:09:24.575274 kubelet[2030]: I1213 14:09:24.575232 2030 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:09:24.577296 kubelet[2030]: I1213 14:09:24.577267 2030 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:09:24.577622 kubelet[2030]: I1213 14:09:24.577600 2030 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 14:09:24.578943 kubelet[2030]: I1213 14:09:24.578923 2030 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:09:24.584281 kubelet[2030]: I1213 14:09:24.584263 2030 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:09:24.590832 kubelet[2030]: E1213 14:09:24.588850 2030 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:09:24.594734 kubelet[2030]: I1213 14:09:24.593663 2030 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:09:24.601173 kubelet[2030]: I1213 14:09:24.601149 2030 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:09:24.608353 kubelet[2030]: I1213 14:09:24.608318 2030 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:09:24.609335 kubelet[2030]: I1213 14:09:24.609307 2030 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 14:09:24.609437 kubelet[2030]: I1213 14:09:24.609343 2030 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:09:24.609437 kubelet[2030]: I1213 14:09:24.609364 2030 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 14:09:24.609437 kubelet[2030]: E1213 14:09:24.609419 2030 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:09:24.641319 kubelet[2030]: I1213 14:09:24.641294 2030 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:09:24.641496 kubelet[2030]: I1213 14:09:24.641479 2030 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:09:24.641561 kubelet[2030]: I1213 14:09:24.641552 2030 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:09:24.641977 kubelet[2030]: I1213 14:09:24.641762 2030 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 14:09:24.642099 kubelet[2030]: I1213 14:09:24.642068 2030 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 14:09:24.642235 kubelet[2030]: I1213 14:09:24.642201 2030 policy_none.go:49] "None policy: Start" Dec 13 14:09:24.643198 kubelet[2030]: I1213 14:09:24.643177 2030 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:09:24.643357 kubelet[2030]: I1213 14:09:24.643346 2030 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:09:24.643666 kubelet[2030]: I1213 14:09:24.643648 2030 state_mem.go:75] "Updated machine memory state" Dec 13 14:09:24.650120 kubelet[2030]: I1213 14:09:24.650100 2030 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:09:24.650548 
kubelet[2030]: I1213 14:09:24.650463 2030 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:09:24.650840 kubelet[2030]: I1213 14:09:24.650825 2030 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:09:24.680381 kubelet[2030]: I1213 14:09:24.680344 2030 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:09:24.686890 kubelet[2030]: I1213 14:09:24.686863 2030 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 14:09:24.686973 kubelet[2030]: I1213 14:09:24.686932 2030 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 14:09:24.710754 kubelet[2030]: I1213 14:09:24.710366 2030 topology_manager.go:215] "Topology Admit Handler" podUID="0aa1603b315819214c160d2971efd237" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 14:09:24.710754 kubelet[2030]: I1213 14:09:24.710520 2030 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 14:09:24.710754 kubelet[2030]: I1213 14:09:24.710559 2030 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 14:09:24.715922 kubelet[2030]: E1213 14:09:24.715891 2030 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 14:09:24.780385 kubelet[2030]: I1213 14:09:24.780255 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0aa1603b315819214c160d2971efd237-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0aa1603b315819214c160d2971efd237\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:09:24.780385 kubelet[2030]: I1213 14:09:24.780313 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0aa1603b315819214c160d2971efd237-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0aa1603b315819214c160d2971efd237\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:09:24.780385 kubelet[2030]: I1213 14:09:24.780351 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:09:24.780556 kubelet[2030]: I1213 14:09:24.780415 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:09:24.780556 kubelet[2030]: I1213 14:09:24.780440 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0aa1603b315819214c160d2971efd237-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0aa1603b315819214c160d2971efd237\") " 
pod="kube-system/kube-apiserver-localhost" Dec 13 14:09:24.780556 kubelet[2030]: I1213 14:09:24.780458 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:09:24.780556 kubelet[2030]: I1213 14:09:24.780472 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:09:24.780556 kubelet[2030]: I1213 14:09:24.780488 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:09:24.780675 kubelet[2030]: I1213 14:09:24.780507 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 14:09:25.016238 kubelet[2030]: E1213 14:09:25.016202 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:25.016672 kubelet[2030]: E1213 14:09:25.016638 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:25.016766 kubelet[2030]: E1213 14:09:25.016741 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:25.201765 sudo[2064]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 14:09:25.201993 sudo[2064]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 14:09:25.566246 kubelet[2030]: I1213 14:09:25.566140 2030 apiserver.go:52] "Watching apiserver" Dec 13 14:09:25.578787 kubelet[2030]: I1213 14:09:25.578748 2030 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 14:09:25.626179 kubelet[2030]: E1213 14:09:25.626151 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:25.626351 kubelet[2030]: E1213 14:09:25.626321 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:25.631844 kubelet[2030]: E1213 14:09:25.631814 2030 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 14:09:25.632848 
kubelet[2030]: E1213 14:09:25.632829 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:25.650566 kubelet[2030]: I1213 14:09:25.650494 2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.6504754240000001 podStartE2EDuration="1.650475424s" podCreationTimestamp="2024-12-13 14:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:09:25.643152624 +0000 UTC m=+1.126187001" watchObservedRunningTime="2024-12-13 14:09:25.650475424 +0000 UTC m=+1.133509801" Dec 13 14:09:25.657487 sudo[2064]: pam_unix(sudo:session): session closed for user root Dec 13 14:09:25.659838 kubelet[2030]: I1213 14:09:25.659793 2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.659777504 podStartE2EDuration="2.659777504s" podCreationTimestamp="2024-12-13 14:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:09:25.652075904 +0000 UTC m=+1.135110241" watchObservedRunningTime="2024-12-13 14:09:25.659777504 +0000 UTC m=+1.142811881" Dec 13 14:09:25.659970 kubelet[2030]: I1213 14:09:25.659899 2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.659894304 podStartE2EDuration="1.659894304s" podCreationTimestamp="2024-12-13 14:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:09:25.658930824 +0000 UTC m=+1.141965161" watchObservedRunningTime="2024-12-13 14:09:25.659894304 +0000 UTC m=+1.142928681" Dec 13 14:09:26.627346 kubelet[2030]: E1213 14:09:26.627295 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:27.257144 sudo[1321]: pam_unix(sudo:session): session closed for user root Dec 13 14:09:27.258512 sshd[1318]: pam_unix(sshd:session): session closed for user core Dec 13 14:09:27.261178 systemd[1]: sshd@4-10.0.0.75:22-10.0.0.1:44604.service: Deactivated successfully. Dec 13 14:09:27.261959 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:09:27.262110 systemd[1]: session-5.scope: Consumed 6.863s CPU time. Dec 13 14:09:27.262523 systemd-logind[1208]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:09:27.263187 systemd-logind[1208]: Removed session 5. 
Dec 13 14:09:29.437618 kubelet[2030]: E1213 14:09:29.437554 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:32.180008 kubelet[2030]: E1213 14:09:32.179899 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:32.635733 kubelet[2030]: E1213 14:09:32.635637 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:33.053519 kubelet[2030]: E1213 14:09:33.053414 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:33.636637 kubelet[2030]: E1213 14:09:33.636607 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:37.960543 kubelet[2030]: I1213 14:09:37.960511 2030 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 14:09:37.960902 env[1219]: time="2024-12-13T14:09:37.960843046Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:09:37.961280 kubelet[2030]: I1213 14:09:37.961259 2030 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 14:09:38.164011 kubelet[2030]: I1213 14:09:38.163968 2030 topology_manager.go:215] "Topology Admit Handler" podUID="ac09343b-df0b-464d-a24d-530597423c67" podNamespace="kube-system" podName="kube-proxy-gs2wt" Dec 13 14:09:38.169188 kubelet[2030]: I1213 14:09:38.169154 2030 topology_manager.go:215] "Topology Admit Handler" podUID="ae133714-0435-4788-b05d-6f6c02453ab1" podNamespace="kube-system" podName="cilium-gt865" Dec 13 14:09:38.169793 systemd[1]: Created slice kubepods-besteffort-podac09343b_df0b_464d_a24d_530597423c67.slice. Dec 13 14:09:38.177502 systemd[1]: Created slice kubepods-burstable-podae133714_0435_4788_b05d_6f6c02453ab1.slice. 
Dec 13 14:09:38.275560 kubelet[2030]: I1213 14:09:38.275468 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-hostproc\") pod \"cilium-gt865\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") " pod="kube-system/cilium-gt865" Dec 13 14:09:38.275718 kubelet[2030]: I1213 14:09:38.275698 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmfdg\" (UniqueName: \"kubernetes.io/projected/ae133714-0435-4788-b05d-6f6c02453ab1-kube-api-access-dmfdg\") pod \"cilium-gt865\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") " pod="kube-system/cilium-gt865" Dec 13 14:09:38.275795 kubelet[2030]: I1213 14:09:38.275780 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae133714-0435-4788-b05d-6f6c02453ab1-hubble-tls\") pod \"cilium-gt865\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") " pod="kube-system/cilium-gt865" Dec 13 14:09:38.275866 kubelet[2030]: I1213 14:09:38.275854 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-cni-path\") pod \"cilium-gt865\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") " pod="kube-system/cilium-gt865" Dec 13 14:09:38.275943 kubelet[2030]: I1213 14:09:38.275931 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-host-proc-sys-net\") pod \"cilium-gt865\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") " pod="kube-system/cilium-gt865" Dec 13 14:09:38.276017 kubelet[2030]: I1213 14:09:38.276004 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-host-proc-sys-kernel\") pod \"cilium-gt865\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") " pod="kube-system/cilium-gt865" Dec 13 14:09:38.276089 kubelet[2030]: I1213 14:09:38.276076 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac09343b-df0b-464d-a24d-530597423c67-xtables-lock\") pod \"kube-proxy-gs2wt\" (UID: \"ac09343b-df0b-464d-a24d-530597423c67\") " pod="kube-system/kube-proxy-gs2wt" Dec 13 14:09:38.276170 kubelet[2030]: I1213 14:09:38.276155 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-cilium-cgroup\") pod \"cilium-gt865\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") " pod="kube-system/cilium-gt865" Dec 13 14:09:38.276239 kubelet[2030]: I1213 14:09:38.276226 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae133714-0435-4788-b05d-6f6c02453ab1-clustermesh-secrets\") pod \"cilium-gt865\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") " pod="kube-system/cilium-gt865" Dec 13 14:09:38.276311 kubelet[2030]: I1213 14:09:38.276298 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-xtables-lock\") pod \"cilium-gt865\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") " pod="kube-system/cilium-gt865" Dec 13 14:09:38.276405 kubelet[2030]: I1213 14:09:38.276378 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae133714-0435-4788-b05d-6f6c02453ab1-cilium-config-path\") pod \"cilium-gt865\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") " pod="kube-system/cilium-gt865" Dec 13 14:09:38.276505 kubelet[2030]: I1213 14:09:38.276491 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac09343b-df0b-464d-a24d-530597423c67-lib-modules\") pod \"kube-proxy-gs2wt\" (UID: \"ac09343b-df0b-464d-a24d-530597423c67\") " pod="kube-system/kube-proxy-gs2wt" Dec 13 14:09:38.276627 kubelet[2030]: I1213 14:09:38.276581 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-etc-cni-netd\") pod \"cilium-gt865\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") " pod="kube-system/cilium-gt865" Dec 13 14:09:38.276627 kubelet[2030]: I1213 14:09:38.276622 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-cilium-run\") pod \"cilium-gt865\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") " pod="kube-system/cilium-gt865" Dec 13 14:09:38.276701 kubelet[2030]: I1213 14:09:38.276644 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-bpf-maps\") pod \"cilium-gt865\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") " pod="kube-system/cilium-gt865" Dec 13 14:09:38.276701 kubelet[2030]: I1213 14:09:38.276670 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-lib-modules\") pod \"cilium-gt865\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") " pod="kube-system/cilium-gt865" Dec 13 14:09:38.276749 kubelet[2030]: I1213 14:09:38.276704 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ac09343b-df0b-464d-a24d-530597423c67-kube-proxy\") pod \"kube-proxy-gs2wt\" (UID: \"ac09343b-df0b-464d-a24d-530597423c67\") " pod="kube-system/kube-proxy-gs2wt" Dec 13 14:09:38.276749 kubelet[2030]: I1213 14:09:38.276735 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqr9t\" (UniqueName: \"kubernetes.io/projected/ac09343b-df0b-464d-a24d-530597423c67-kube-api-access-bqr9t\") pod \"kube-proxy-gs2wt\" (UID: \"ac09343b-df0b-464d-a24d-530597423c67\") " pod="kube-system/kube-proxy-gs2wt" Dec 13 14:09:38.476366 kubelet[2030]: E1213 14:09:38.476331 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:38.476816 env[1219]: time="2024-12-13T14:09:38.476777920Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-proxy-gs2wt,Uid:ac09343b-df0b-464d-a24d-530597423c67,Namespace:kube-system,Attempt:0,}" Dec 13 14:09:38.480199 kubelet[2030]: E1213 14:09:38.479594 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:38.481110 env[1219]: time="2024-12-13T14:09:38.479928124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gt865,Uid:ae133714-0435-4788-b05d-6f6c02453ab1,Namespace:kube-system,Attempt:0,}" Dec 13 14:09:38.499624 env[1219]: time="2024-12-13T14:09:38.499441188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:09:38.499624 env[1219]: time="2024-12-13T14:09:38.499478748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:09:38.499624 env[1219]: time="2024-12-13T14:09:38.499488588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:09:38.499624 env[1219]: time="2024-12-13T14:09:38.499596908Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a2ad08e8f81b7aa139e2077f3f9ad4fd7b463e7944d99a5b3939b87a816b255 pid=2126 runtime=io.containerd.runc.v2 Dec 13 14:09:38.501999 env[1219]: time="2024-12-13T14:09:38.501927991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:09:38.501999 env[1219]: time="2024-12-13T14:09:38.501963631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:09:38.501999 env[1219]: time="2024-12-13T14:09:38.501973711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:09:38.502578 env[1219]: time="2024-12-13T14:09:38.502482952Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3e0be226a27a92a23113ca746a0d5776ce1856f5f203c56710e79b539d1623c5 pid=2135 runtime=io.containerd.runc.v2 Dec 13 14:09:38.509543 systemd[1]: Started cri-containerd-5a2ad08e8f81b7aa139e2077f3f9ad4fd7b463e7944d99a5b3939b87a816b255.scope. Dec 13 14:09:38.522990 systemd[1]: Started cri-containerd-3e0be226a27a92a23113ca746a0d5776ce1856f5f203c56710e79b539d1623c5.scope. 
Dec 13 14:09:38.559518 env[1219]: time="2024-12-13T14:09:38.558975461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gs2wt,Uid:ac09343b-df0b-464d-a24d-530597423c67,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a2ad08e8f81b7aa139e2077f3f9ad4fd7b463e7944d99a5b3939b87a816b255\"" Dec 13 14:09:38.559837 kubelet[2030]: E1213 14:09:38.559813 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:38.565824 env[1219]: time="2024-12-13T14:09:38.565776989Z" level=info msg="CreateContainer within sandbox \"5a2ad08e8f81b7aa139e2077f3f9ad4fd7b463e7944d99a5b3939b87a816b255\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:09:38.568599 env[1219]: time="2024-12-13T14:09:38.568564913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gt865,Uid:ae133714-0435-4788-b05d-6f6c02453ab1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e0be226a27a92a23113ca746a0d5776ce1856f5f203c56710e79b539d1623c5\"" Dec 13 14:09:38.569200 kubelet[2030]: E1213 14:09:38.569169 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:38.570181 env[1219]: time="2024-12-13T14:09:38.570135395Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:09:38.581143 env[1219]: time="2024-12-13T14:09:38.581110328Z" level=info msg="CreateContainer within sandbox \"5a2ad08e8f81b7aa139e2077f3f9ad4fd7b463e7944d99a5b3939b87a816b255\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9f0189d4253c7d67f2d81b4415d9593cf440c7f9a5d7e6fbdc8e4099058134d0\"" Dec 13 14:09:38.581781 env[1219]: time="2024-12-13T14:09:38.581752289Z" level=info msg="StartContainer for \"9f0189d4253c7d67f2d81b4415d9593cf440c7f9a5d7e6fbdc8e4099058134d0\"" Dec 13 14:09:38.596283 systemd[1]: Started cri-containerd-9f0189d4253c7d67f2d81b4415d9593cf440c7f9a5d7e6fbdc8e4099058134d0.scope. Dec 13 14:09:38.640087 env[1219]: time="2024-12-13T14:09:38.640038840Z" level=info msg="StartContainer for \"9f0189d4253c7d67f2d81b4415d9593cf440c7f9a5d7e6fbdc8e4099058134d0\" returns successfully" Dec 13 14:09:38.656691 kubelet[2030]: E1213 14:09:38.655145 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:38.966831 kubelet[2030]: I1213 14:09:38.966760 2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gs2wt" podStartSLOduration=0.966734159 podStartE2EDuration="966.734159ms" podCreationTimestamp="2024-12-13 14:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:09:38.669656476 +0000 UTC m=+14.152690853" watchObservedRunningTime="2024-12-13 14:09:38.966734159 +0000 UTC m=+14.449768536" Dec 13 14:09:38.967185 kubelet[2030]: I1213 14:09:38.967081 2030 topology_manager.go:215] "Topology Admit Handler" podUID="dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b" podNamespace="kube-system" podName="cilium-operator-599987898-zcxp9" Dec 13 14:09:38.973500 systemd[1]: Created slice kubepods-besteffort-poddbbb2e72_4a59_4b3a_b5cb_1dce4c5d701b.slice. 
Dec 13 14:09:38.981371 kubelet[2030]: I1213 14:09:38.981283 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h86tp\" (UniqueName: \"kubernetes.io/projected/dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b-kube-api-access-h86tp\") pod \"cilium-operator-599987898-zcxp9\" (UID: \"dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b\") " pod="kube-system/cilium-operator-599987898-zcxp9" Dec 13 14:09:38.981371 kubelet[2030]: I1213 14:09:38.981325 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b-cilium-config-path\") pod \"cilium-operator-599987898-zcxp9\" (UID: \"dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b\") " pod="kube-system/cilium-operator-599987898-zcxp9" Dec 13 14:09:39.276514 kubelet[2030]: E1213 14:09:39.276063 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:39.276982 env[1219]: time="2024-12-13T14:09:39.276946918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-zcxp9,Uid:dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b,Namespace:kube-system,Attempt:0,}" Dec 13 14:09:39.292939 env[1219]: time="2024-12-13T14:09:39.292720856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:09:39.292939 env[1219]: time="2024-12-13T14:09:39.292911336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:09:39.293124 env[1219]: time="2024-12-13T14:09:39.292922456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:09:39.293331 env[1219]: time="2024-12-13T14:09:39.293296736Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/35a0b9112460dcc12689fe5df558a5147fb1c7aae59597579d5c2d0449eac4b2 pid=2363 runtime=io.containerd.runc.v2 Dec 13 14:09:39.302761 systemd[1]: Started cri-containerd-35a0b9112460dcc12689fe5df558a5147fb1c7aae59597579d5c2d0449eac4b2.scope. Dec 13 14:09:39.335108 env[1219]: time="2024-12-13T14:09:39.335067784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-zcxp9,Uid:dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b,Namespace:kube-system,Attempt:0,} returns sandbox id \"35a0b9112460dcc12689fe5df558a5147fb1c7aae59597579d5c2d0449eac4b2\"" Dec 13 14:09:39.336181 kubelet[2030]: E1213 14:09:39.335967 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:39.444840 kubelet[2030]: E1213 14:09:39.444804 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:39.487749 update_engine[1210]: I1213 14:09:39.487686 1210 update_attempter.cc:509] Updating boot flags... Dec 13 14:09:43.234219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1544869302.mount: Deactivated successfully. 
Dec 13 14:09:45.397606 env[1219]: time="2024-12-13T14:09:45.397532118Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:45.399670 env[1219]: time="2024-12-13T14:09:45.399637000Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:45.401232 env[1219]: time="2024-12-13T14:09:45.401196721Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:45.401820 env[1219]: time="2024-12-13T14:09:45.401793201Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 13 14:09:45.404706 env[1219]: time="2024-12-13T14:09:45.404670484Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:09:45.408632 env[1219]: time="2024-12-13T14:09:45.408602607Z" level=info msg="CreateContainer within sandbox \"3e0be226a27a92a23113ca746a0d5776ce1856f5f203c56710e79b539d1623c5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:09:45.418217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1851676330.mount: Deactivated successfully. Dec 13 14:09:45.421638 env[1219]: time="2024-12-13T14:09:45.421605137Z" level=info msg="CreateContainer within sandbox \"3e0be226a27a92a23113ca746a0d5776ce1856f5f203c56710e79b539d1623c5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d4b8d6fc6f27913e785b7cce767d454e73b132271f22b98e390836db3f301232\"" Dec 13 14:09:45.423105 env[1219]: time="2024-12-13T14:09:45.422184377Z" level=info msg="StartContainer for \"d4b8d6fc6f27913e785b7cce767d454e73b132271f22b98e390836db3f301232\"" Dec 13 14:09:45.443349 systemd[1]: Started cri-containerd-d4b8d6fc6f27913e785b7cce767d454e73b132271f22b98e390836db3f301232.scope. Dec 13 14:09:45.477681 env[1219]: time="2024-12-13T14:09:45.477633260Z" level=info msg="StartContainer for \"d4b8d6fc6f27913e785b7cce767d454e73b132271f22b98e390836db3f301232\" returns successfully" Dec 13 14:09:45.520475 systemd[1]: cri-containerd-d4b8d6fc6f27913e785b7cce767d454e73b132271f22b98e390836db3f301232.scope: Deactivated successfully. 
Dec 13 14:09:45.672910 env[1219]: time="2024-12-13T14:09:45.672859092Z" level=info msg="shim disconnected" id=d4b8d6fc6f27913e785b7cce767d454e73b132271f22b98e390836db3f301232 Dec 13 14:09:45.672910 env[1219]: time="2024-12-13T14:09:45.672904852Z" level=warning msg="cleaning up after shim disconnected" id=d4b8d6fc6f27913e785b7cce767d454e73b132271f22b98e390836db3f301232 namespace=k8s.io Dec 13 14:09:45.672910 env[1219]: time="2024-12-13T14:09:45.672914932Z" level=info msg="cleaning up dead shim" Dec 13 14:09:45.678726 kubelet[2030]: E1213 14:09:45.678695 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:45.687498 env[1219]: time="2024-12-13T14:09:45.687452744Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:09:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2458 runtime=io.containerd.runc.v2\n" Dec 13 14:09:46.415738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4b8d6fc6f27913e785b7cce767d454e73b132271f22b98e390836db3f301232-rootfs.mount: Deactivated successfully. Dec 13 14:09:46.683907 kubelet[2030]: E1213 14:09:46.683816 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:46.687943 env[1219]: time="2024-12-13T14:09:46.687903129Z" level=info msg="CreateContainer within sandbox \"3e0be226a27a92a23113ca746a0d5776ce1856f5f203c56710e79b539d1623c5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:09:46.701802 env[1219]: time="2024-12-13T14:09:46.701750339Z" level=info msg="CreateContainer within sandbox \"3e0be226a27a92a23113ca746a0d5776ce1856f5f203c56710e79b539d1623c5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f98924e745f2cdd5e78fffae40e4ac0622c3c9e250461874ffd75d075ca9cca6\"" Dec 13 14:09:46.703307 env[1219]: time="2024-12-13T14:09:46.703279780Z" level=info msg="StartContainer for \"f98924e745f2cdd5e78fffae40e4ac0622c3c9e250461874ffd75d075ca9cca6\"" Dec 13 14:09:46.718979 systemd[1]: Started cri-containerd-f98924e745f2cdd5e78fffae40e4ac0622c3c9e250461874ffd75d075ca9cca6.scope. Dec 13 14:09:46.752992 env[1219]: time="2024-12-13T14:09:46.752945496Z" level=info msg="StartContainer for \"f98924e745f2cdd5e78fffae40e4ac0622c3c9e250461874ffd75d075ca9cca6\" returns successfully" Dec 13 14:09:46.760900 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:09:46.761127 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:09:46.761493 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:09:46.762897 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:09:46.763123 systemd[1]: cri-containerd-f98924e745f2cdd5e78fffae40e4ac0622c3c9e250461874ffd75d075ca9cca6.scope: Deactivated successfully. Dec 13 14:09:46.771859 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 14:09:46.783835 env[1219]: time="2024-12-13T14:09:46.783789599Z" level=info msg="shim disconnected" id=f98924e745f2cdd5e78fffae40e4ac0622c3c9e250461874ffd75d075ca9cca6 Dec 13 14:09:46.783835 env[1219]: time="2024-12-13T14:09:46.783833359Z" level=warning msg="cleaning up after shim disconnected" id=f98924e745f2cdd5e78fffae40e4ac0622c3c9e250461874ffd75d075ca9cca6 namespace=k8s.io Dec 13 14:09:46.784002 env[1219]: time="2024-12-13T14:09:46.783842799Z" level=info msg="cleaning up dead shim" Dec 13 14:09:46.789617 env[1219]: time="2024-12-13T14:09:46.789580763Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:09:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2523 runtime=io.containerd.runc.v2\n" Dec 13 14:09:47.415538 systemd[1]: run-containerd-runc-k8s.io-f98924e745f2cdd5e78fffae40e4ac0622c3c9e250461874ffd75d075ca9cca6-runc.m4wVie.mount: Deactivated successfully. Dec 13 14:09:47.415647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f98924e745f2cdd5e78fffae40e4ac0622c3c9e250461874ffd75d075ca9cca6-rootfs.mount: Deactivated successfully. Dec 13 14:09:47.684412 kubelet[2030]: E1213 14:09:47.684293 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:47.695276 env[1219]: time="2024-12-13T14:09:47.695223512Z" level=info msg="CreateContainer within sandbox \"3e0be226a27a92a23113ca746a0d5776ce1856f5f203c56710e79b539d1623c5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:09:47.700903 env[1219]: time="2024-12-13T14:09:47.700861436Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:47.703365 env[1219]: time="2024-12-13T14:09:47.703326238Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:47.708533 env[1219]: time="2024-12-13T14:09:47.708501241Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:47.708894 env[1219]: time="2024-12-13T14:09:47.708862722Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 14:09:47.712950 env[1219]: time="2024-12-13T14:09:47.712852804Z" level=info msg="CreateContainer within sandbox \"3e0be226a27a92a23113ca746a0d5776ce1856f5f203c56710e79b539d1623c5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"acabe7b314c6b90925125ae09f400f92dd67684fb282383acafacddc5f84d234\"" Dec 13 14:09:47.713472 env[1219]: time="2024-12-13T14:09:47.713038645Z" level=info msg="CreateContainer within sandbox \"35a0b9112460dcc12689fe5df558a5147fb1c7aae59597579d5c2d0449eac4b2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:09:47.716339 env[1219]: time="2024-12-13T14:09:47.716311807Z" level=info msg="StartContainer for 
\"acabe7b314c6b90925125ae09f400f92dd67684fb282383acafacddc5f84d234\"" Dec 13 14:09:47.738960 systemd[1]: Started cri-containerd-acabe7b314c6b90925125ae09f400f92dd67684fb282383acafacddc5f84d234.scope. Dec 13 14:09:47.809420 env[1219]: time="2024-12-13T14:09:47.809340950Z" level=info msg="CreateContainer within sandbox \"35a0b9112460dcc12689fe5df558a5147fb1c7aae59597579d5c2d0449eac4b2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea\"" Dec 13 14:09:47.810111 env[1219]: time="2024-12-13T14:09:47.810082431Z" level=info msg="StartContainer for \"23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea\"" Dec 13 14:09:47.825903 env[1219]: time="2024-12-13T14:09:47.820299598Z" level=info msg="StartContainer for \"acabe7b314c6b90925125ae09f400f92dd67684fb282383acafacddc5f84d234\" returns successfully" Dec 13 14:09:47.830617 systemd[1]: Started cri-containerd-23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea.scope. Dec 13 14:09:47.837137 systemd[1]: cri-containerd-acabe7b314c6b90925125ae09f400f92dd67684fb282383acafacddc5f84d234.scope: Deactivated successfully. Dec 13 14:09:47.864718 env[1219]: time="2024-12-13T14:09:47.864674308Z" level=info msg="shim disconnected" id=acabe7b314c6b90925125ae09f400f92dd67684fb282383acafacddc5f84d234 Dec 13 14:09:47.864718 env[1219]: time="2024-12-13T14:09:47.864718908Z" level=warning msg="cleaning up after shim disconnected" id=acabe7b314c6b90925125ae09f400f92dd67684fb282383acafacddc5f84d234 namespace=k8s.io Dec 13 14:09:47.864995 env[1219]: time="2024-12-13T14:09:47.864728268Z" level=info msg="cleaning up dead shim" Dec 13 14:09:47.871230 env[1219]: time="2024-12-13T14:09:47.871134313Z" level=info msg="StartContainer for \"23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea\" returns successfully" Dec 13 14:09:47.872311 env[1219]: time="2024-12-13T14:09:47.872268433Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:09:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2613 runtime=io.containerd.runc.v2\n" Dec 13 14:09:48.416214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount983331588.mount: Deactivated successfully. Dec 13 14:09:48.416303 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acabe7b314c6b90925125ae09f400f92dd67684fb282383acafacddc5f84d234-rootfs.mount: Deactivated successfully. Dec 13 14:09:48.688255 kubelet[2030]: E1213 14:09:48.687822 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:48.689905 kubelet[2030]: E1213 14:09:48.689746 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:48.691581 env[1219]: time="2024-12-13T14:09:48.691526764Z" level=info msg="CreateContainer within sandbox \"3e0be226a27a92a23113ca746a0d5776ce1856f5f203c56710e79b539d1623c5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:09:48.704299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2568817519.mount: Deactivated successfully. 
Dec 13 14:09:48.711120 env[1219]: time="2024-12-13T14:09:48.711067537Z" level=info msg="CreateContainer within sandbox \"3e0be226a27a92a23113ca746a0d5776ce1856f5f203c56710e79b539d1623c5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b88d33cb268dcc791fb7c3a4e477bd779d7366234cef081bc330f81d0263ead9\"" Dec 13 14:09:48.711575 env[1219]: time="2024-12-13T14:09:48.711547897Z" level=info msg="StartContainer for \"b88d33cb268dcc791fb7c3a4e477bd779d7366234cef081bc330f81d0263ead9\"" Dec 13 14:09:48.721942 kubelet[2030]: I1213 14:09:48.721865 2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-zcxp9" podStartSLOduration=2.348765728 podStartE2EDuration="10.721846264s" podCreationTimestamp="2024-12-13 14:09:38 +0000 UTC" firstStartedPulling="2024-12-13 14:09:39.337089187 +0000 UTC m=+14.820123564" lastFinishedPulling="2024-12-13 14:09:47.710169723 +0000 UTC m=+23.193204100" observedRunningTime="2024-12-13 14:09:48.698381009 +0000 UTC m=+24.181415346" watchObservedRunningTime="2024-12-13 14:09:48.721846264 +0000 UTC m=+24.204880641" Dec 13 14:09:48.730584 systemd[1]: Started cri-containerd-b88d33cb268dcc791fb7c3a4e477bd779d7366234cef081bc330f81d0263ead9.scope. Dec 13 14:09:48.771904 systemd[1]: cri-containerd-b88d33cb268dcc791fb7c3a4e477bd779d7366234cef081bc330f81d0263ead9.scope: Deactivated successfully. Dec 13 14:09:48.773696 env[1219]: time="2024-12-13T14:09:48.773634097Z" level=info msg="StartContainer for \"b88d33cb268dcc791fb7c3a4e477bd779d7366234cef081bc330f81d0263ead9\" returns successfully" Dec 13 14:09:48.793073 env[1219]: time="2024-12-13T14:09:48.793029469Z" level=info msg="shim disconnected" id=b88d33cb268dcc791fb7c3a4e477bd779d7366234cef081bc330f81d0263ead9 Dec 13 14:09:48.793073 env[1219]: time="2024-12-13T14:09:48.793071469Z" level=warning msg="cleaning up after shim disconnected" id=b88d33cb268dcc791fb7c3a4e477bd779d7366234cef081bc330f81d0263ead9 namespace=k8s.io Dec 13 14:09:48.793268 env[1219]: time="2024-12-13T14:09:48.793082589Z" level=info msg="cleaning up dead shim" Dec 13 14:09:48.800527 env[1219]: time="2024-12-13T14:09:48.800492754Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:09:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2675 runtime=io.containerd.runc.v2\n" Dec 13 14:09:49.693592 kubelet[2030]: E1213 14:09:49.693549 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:49.693944 kubelet[2030]: E1213 14:09:49.693840 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:49.696223 env[1219]: time="2024-12-13T14:09:49.696171461Z" level=info msg="CreateContainer within sandbox \"3e0be226a27a92a23113ca746a0d5776ce1856f5f203c56710e79b539d1623c5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:09:49.709149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2444348355.mount: Deactivated successfully. 
Dec 13 14:09:49.718038 env[1219]: time="2024-12-13T14:09:49.717983874Z" level=info msg="CreateContainer within sandbox \"3e0be226a27a92a23113ca746a0d5776ce1856f5f203c56710e79b539d1623c5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4\"" Dec 13 14:09:49.719119 env[1219]: time="2024-12-13T14:09:49.719091594Z" level=info msg="StartContainer for \"f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4\"" Dec 13 14:09:49.736199 systemd[1]: Started cri-containerd-f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4.scope. Dec 13 14:09:49.785270 env[1219]: time="2024-12-13T14:09:49.785226274Z" level=info msg="StartContainer for \"f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4\" returns successfully" Dec 13 14:09:49.926439 kubelet[2030]: I1213 14:09:49.926369 2030 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:09:49.943456 kubelet[2030]: I1213 14:09:49.943417 2030 topology_manager.go:215] "Topology Admit Handler" podUID="419829f1-ed47-4324-b124-640d478d1fa4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qcdls" Dec 13 14:09:49.946178 kubelet[2030]: I1213 14:09:49.946087 2030 topology_manager.go:215] "Topology Admit Handler" podUID="6bd2d005-bfa8-497c-9e1b-5b1a8a49e8f2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jhj4k" Dec 13 14:09:49.951360 systemd[1]: Created slice kubepods-burstable-pod419829f1_ed47_4324_b124_640d478d1fa4.slice. Dec 13 14:09:49.955307 systemd[1]: Created slice kubepods-burstable-pod6bd2d005_bfa8_497c_9e1b_5b1a8a49e8f2.slice. Dec 13 14:09:50.042426 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Dec 13 14:09:50.061889 kubelet[2030]: I1213 14:09:50.061852 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9fqn\" (UniqueName: \"kubernetes.io/projected/6bd2d005-bfa8-497c-9e1b-5b1a8a49e8f2-kube-api-access-g9fqn\") pod \"coredns-7db6d8ff4d-jhj4k\" (UID: \"6bd2d005-bfa8-497c-9e1b-5b1a8a49e8f2\") " pod="kube-system/coredns-7db6d8ff4d-jhj4k" Dec 13 14:09:50.061889 kubelet[2030]: I1213 14:09:50.061894 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/419829f1-ed47-4324-b124-640d478d1fa4-config-volume\") pod \"coredns-7db6d8ff4d-qcdls\" (UID: \"419829f1-ed47-4324-b124-640d478d1fa4\") " pod="kube-system/coredns-7db6d8ff4d-qcdls" Dec 13 14:09:50.062034 kubelet[2030]: I1213 14:09:50.061928 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zddtk\" (UniqueName: \"kubernetes.io/projected/419829f1-ed47-4324-b124-640d478d1fa4-kube-api-access-zddtk\") pod \"coredns-7db6d8ff4d-qcdls\" (UID: \"419829f1-ed47-4324-b124-640d478d1fa4\") " pod="kube-system/coredns-7db6d8ff4d-qcdls" Dec 13 14:09:50.062034 kubelet[2030]: I1213 14:09:50.061947 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bd2d005-bfa8-497c-9e1b-5b1a8a49e8f2-config-volume\") pod \"coredns-7db6d8ff4d-jhj4k\" (UID: \"6bd2d005-bfa8-497c-9e1b-5b1a8a49e8f2\") " pod="kube-system/coredns-7db6d8ff4d-jhj4k" Dec 13 14:09:50.254359 kubelet[2030]: E1213 14:09:50.254134 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:50.254835 env[1219]: time="2024-12-13T14:09:50.254782867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qcdls,Uid:419829f1-ed47-4324-b124-640d478d1fa4,Namespace:kube-system,Attempt:0,}" Dec 13 14:09:50.258016 kubelet[2030]: E1213 14:09:50.257987 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:50.258480 env[1219]: time="2024-12-13T14:09:50.258442589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jhj4k,Uid:6bd2d005-bfa8-497c-9e1b-5b1a8a49e8f2,Namespace:kube-system,Attempt:0,}" Dec 13 14:09:50.312525 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Dec 13 14:09:50.697992 kubelet[2030]: E1213 14:09:50.697951 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:50.718207 kubelet[2030]: I1213 14:09:50.718136 2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gt865" podStartSLOduration=5.883768799 podStartE2EDuration="12.718110968s" podCreationTimestamp="2024-12-13 14:09:38 +0000 UTC" firstStartedPulling="2024-12-13 14:09:38.569704834 +0000 UTC m=+14.052739171" lastFinishedPulling="2024-12-13 14:09:45.404046963 +0000 UTC m=+20.887081340" observedRunningTime="2024-12-13 14:09:50.717526808 +0000 UTC m=+26.200561185" watchObservedRunningTime="2024-12-13 14:09:50.718110968 +0000 UTC m=+26.201145305" Dec 13 14:09:51.699150 kubelet[2030]: E1213 14:09:51.699118 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:51.879867 systemd[1]: Started sshd@5-10.0.0.75:22-10.0.0.1:50380.service. Dec 13 14:09:51.922902 sshd[2853]: Accepted publickey for core from 10.0.0.1 port 50380 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:09:51.923658 sshd[2853]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:09:51.927124 systemd-logind[1208]: New session 6 of user core. Dec 13 14:09:51.927979 systemd[1]: Started session-6.scope. Dec 13 14:09:51.934584 systemd-networkd[1047]: cilium_host: Link UP Dec 13 14:09:51.936409 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:09:51.936464 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:09:51.936450 systemd-networkd[1047]: cilium_net: Link UP Dec 13 14:09:51.936624 systemd-networkd[1047]: cilium_net: Gained carrier Dec 13 14:09:51.936742 systemd-networkd[1047]: cilium_host: Gained carrier Dec 13 14:09:52.024039 systemd-networkd[1047]: cilium_vxlan: Link UP Dec 13 14:09:52.024045 systemd-networkd[1047]: cilium_vxlan: Gained carrier Dec 13 14:09:52.067168 sshd[2853]: pam_unix(sshd:session): session closed for user core Dec 13 14:09:52.069659 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:09:52.070227 systemd-logind[1208]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:09:52.070315 systemd[1]: sshd@5-10.0.0.75:22-10.0.0.1:50380.service: Deactivated successfully. Dec 13 14:09:52.071315 systemd-logind[1208]: Removed session 6. 
Dec 13 14:09:52.238598 systemd-networkd[1047]: cilium_net: Gained IPv6LL Dec 13 14:09:52.333425 kernel: NET: Registered PF_ALG protocol family Dec 13 14:09:52.701085 kubelet[2030]: E1213 14:09:52.701035 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:52.847569 systemd-networkd[1047]: cilium_host: Gained IPv6LL Dec 13 14:09:52.922775 systemd-networkd[1047]: lxc_health: Link UP Dec 13 14:09:52.936985 systemd-networkd[1047]: lxc_health: Gained carrier Dec 13 14:09:52.937550 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:09:53.341428 systemd-networkd[1047]: lxcf3baed4ba65b: Link UP Dec 13 14:09:53.341554 systemd-networkd[1047]: lxc9c4f86c1b318: Link UP Dec 13 14:09:53.353853 kernel: eth0: renamed from tmp50bb8 Dec 13 14:09:53.359815 systemd-networkd[1047]: lxc9c4f86c1b318: Gained carrier Dec 13 14:09:53.360717 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9c4f86c1b318: link becomes ready Dec 13 14:09:53.372515 kernel: eth0: renamed from tmp22648 Dec 13 14:09:53.376704 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf3baed4ba65b: link becomes ready Dec 13 14:09:53.375646 systemd-networkd[1047]: lxcf3baed4ba65b: Gained carrier Dec 13 14:09:53.678568 systemd-networkd[1047]: cilium_vxlan: Gained IPv6LL Dec 13 14:09:54.446581 systemd-networkd[1047]: lxc_health: Gained IPv6LL Dec 13 14:09:54.481639 kubelet[2030]: E1213 14:09:54.481598 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:54.510578 systemd-networkd[1047]: lxcf3baed4ba65b: Gained IPv6LL Dec 13 14:09:54.766564 systemd-networkd[1047]: lxc9c4f86c1b318: Gained IPv6LL Dec 13 14:09:56.027565 kubelet[2030]: I1213 14:09:56.027527 2030 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:09:56.028236 kubelet[2030]: E1213 14:09:56.028218 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:56.713749 kubelet[2030]: E1213 14:09:56.713712 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:09:56.888193 env[1219]: time="2024-12-13T14:09:56.888123118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:09:56.888618 env[1219]: time="2024-12-13T14:09:56.888167398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:09:56.888618 env[1219]: time="2024-12-13T14:09:56.888178038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:09:56.888712 env[1219]: time="2024-12-13T14:09:56.888621959Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/50bb827847bac36bc2c4f4c3cb7d08fa8a921b440d0845b73866bb9ac05525b9 pid=3261 runtime=io.containerd.runc.v2 Dec 13 14:09:56.893433 env[1219]: time="2024-12-13T14:09:56.893333720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:09:56.893433 env[1219]: time="2024-12-13T14:09:56.893385960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:09:56.893433 env[1219]: time="2024-12-13T14:09:56.893420080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:09:56.893730 env[1219]: time="2024-12-13T14:09:56.893656601Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/22648c9aca6097aada61ee6f23dbe9edef94ab57a0d22c3458df0e47cf6bec2a pid=3270 runtime=io.containerd.runc.v2
Dec 13 14:09:56.903818 systemd[1]: Started cri-containerd-50bb827847bac36bc2c4f4c3cb7d08fa8a921b440d0845b73866bb9ac05525b9.scope.
Dec 13 14:09:56.913467 systemd[1]: Started cri-containerd-22648c9aca6097aada61ee6f23dbe9edef94ab57a0d22c3458df0e47cf6bec2a.scope.
Dec 13 14:09:56.957052 systemd-resolved[1159]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 14:09:56.960574 systemd-resolved[1159]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 14:09:56.974995 env[1219]: time="2024-12-13T14:09:56.973888471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qcdls,Uid:419829f1-ed47-4324-b124-640d478d1fa4,Namespace:kube-system,Attempt:0,} returns sandbox id \"22648c9aca6097aada61ee6f23dbe9edef94ab57a0d22c3458df0e47cf6bec2a\""
Dec 13 14:09:56.975114 kubelet[2030]: E1213 14:09:56.974685 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:09:56.978167 env[1219]: time="2024-12-13T14:09:56.978116353Z" level=info msg="CreateContainer within sandbox \"22648c9aca6097aada61ee6f23dbe9edef94ab57a0d22c3458df0e47cf6bec2a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 14:09:56.983284 env[1219]: time="2024-12-13T14:09:56.982992715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jhj4k,Uid:6bd2d005-bfa8-497c-9e1b-5b1a8a49e8f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"50bb827847bac36bc2c4f4c3cb7d08fa8a921b440d0845b73866bb9ac05525b9\""
Dec 13 14:09:56.983609 kubelet[2030]: E1213 14:09:56.983569 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:09:56.985752 env[1219]: time="2024-12-13T14:09:56.985710756Z" level=info msg="CreateContainer within sandbox \"50bb827847bac36bc2c4f4c3cb7d08fa8a921b440d0845b73866bb9ac05525b9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 14:09:57.071857 systemd[1]: Started sshd@6-10.0.0.75:22-10.0.0.1:48934.service.
Dec 13 14:09:57.108686 sshd[3332]: Accepted publickey for core from 10.0.0.1 port 48934 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:09:57.110130 sshd[3332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:09:57.113605 systemd-logind[1208]: New session 7 of user core.
Dec 13 14:09:57.114459 systemd[1]: Started session-7.scope.
Dec 13 14:09:57.197839 env[1219]: time="2024-12-13T14:09:57.197780352Z" level=info msg="CreateContainer within sandbox \"22648c9aca6097aada61ee6f23dbe9edef94ab57a0d22c3458df0e47cf6bec2a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3688085e2a7abbc211ec5b847623ec75f300bf01eecd69ebb1c008656fdf4b11\""
Dec 13 14:09:57.198474 env[1219]: time="2024-12-13T14:09:57.198418512Z" level=info msg="StartContainer for \"3688085e2a7abbc211ec5b847623ec75f300bf01eecd69ebb1c008656fdf4b11\""
Dec 13 14:09:57.201319 env[1219]: time="2024-12-13T14:09:57.201284634Z" level=info msg="CreateContainer within sandbox \"50bb827847bac36bc2c4f4c3cb7d08fa8a921b440d0845b73866bb9ac05525b9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5d7859306e46aff02971800c235d610ca1ee3cda186cc8ac0c9621e35f9c8ae1\""
Dec 13 14:09:57.201871 env[1219]: time="2024-12-13T14:09:57.201803914Z" level=info msg="StartContainer for \"5d7859306e46aff02971800c235d610ca1ee3cda186cc8ac0c9621e35f9c8ae1\""
Dec 13 14:09:57.221142 systemd[1]: Started cri-containerd-3688085e2a7abbc211ec5b847623ec75f300bf01eecd69ebb1c008656fdf4b11.scope.
Dec 13 14:09:57.241077 systemd[1]: Started cri-containerd-5d7859306e46aff02971800c235d610ca1ee3cda186cc8ac0c9621e35f9c8ae1.scope.
Dec 13 14:09:57.243593 sshd[3332]: pam_unix(sshd:session): session closed for user core
Dec 13 14:09:57.246980 systemd[1]: sshd@6-10.0.0.75:22-10.0.0.1:48934.service: Deactivated successfully.
Dec 13 14:09:57.247660 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 14:09:57.248328 systemd-logind[1208]: Session 7 logged out. Waiting for processes to exit.
Dec 13 14:09:57.249155 systemd-logind[1208]: Removed session 7.
Dec 13 14:09:57.270802 env[1219]: time="2024-12-13T14:09:57.270757738Z" level=info msg="StartContainer for \"3688085e2a7abbc211ec5b847623ec75f300bf01eecd69ebb1c008656fdf4b11\" returns successfully"
Dec 13 14:09:57.277495 env[1219]: time="2024-12-13T14:09:57.277452021Z" level=info msg="StartContainer for \"5d7859306e46aff02971800c235d610ca1ee3cda186cc8ac0c9621e35f9c8ae1\" returns successfully"
Dec 13 14:09:57.717207 kubelet[2030]: E1213 14:09:57.716904 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:09:57.718148 kubelet[2030]: E1213 14:09:57.718111 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:09:57.728330 kubelet[2030]: I1213 14:09:57.728269 2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qcdls" podStartSLOduration=19.728254423 podStartE2EDuration="19.728254423s" podCreationTimestamp="2024-12-13 14:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:09:57.728169343 +0000 UTC m=+33.211203720" watchObservedRunningTime="2024-12-13 14:09:57.728254423 +0000 UTC m=+33.211288800"
Dec 13 14:09:57.892357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3533491892.mount: Deactivated successfully.
Dec 13 14:09:58.719936 kubelet[2030]: E1213 14:09:58.719897 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:09:58.720663 kubelet[2030]: E1213 14:09:58.720644 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:09:59.721703 kubelet[2030]: E1213 14:09:59.721673 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:09:59.722056 kubelet[2030]: E1213 14:09:59.721791 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:10:02.248706 systemd[1]: Started sshd@7-10.0.0.75:22-10.0.0.1:48936.service.
Dec 13 14:10:02.281383 sshd[3432]: Accepted publickey for core from 10.0.0.1 port 48936 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:10:02.283271 sshd[3432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:10:02.286841 systemd-logind[1208]: New session 8 of user core.
Dec 13 14:10:02.287537 systemd[1]: Started session-8.scope.
Dec 13 14:10:02.398687 sshd[3432]: pam_unix(sshd:session): session closed for user core
Dec 13 14:10:02.401198 systemd[1]: sshd@7-10.0.0.75:22-10.0.0.1:48936.service: Deactivated successfully.
Dec 13 14:10:02.401936 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 14:10:02.402516 systemd-logind[1208]: Session 8 logged out. Waiting for processes to exit.
Dec 13 14:10:02.403250 systemd-logind[1208]: Removed session 8.
Dec 13 14:10:07.403225 systemd[1]: Started sshd@8-10.0.0.75:22-10.0.0.1:36102.service.
Dec 13 14:10:07.444256 sshd[3446]: Accepted publickey for core from 10.0.0.1 port 36102 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:10:07.445427 sshd[3446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:10:07.449938 systemd[1]: Started session-9.scope.
Dec 13 14:10:07.450401 systemd-logind[1208]: New session 9 of user core.
Dec 13 14:10:07.585076 sshd[3446]: pam_unix(sshd:session): session closed for user core
Dec 13 14:10:07.588688 systemd[1]: Started sshd@9-10.0.0.75:22-10.0.0.1:36106.service.
Dec 13 14:10:07.589142 systemd[1]: sshd@8-10.0.0.75:22-10.0.0.1:36102.service: Deactivated successfully.
Dec 13 14:10:07.589996 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 14:10:07.590568 systemd-logind[1208]: Session 9 logged out. Waiting for processes to exit.
Dec 13 14:10:07.591547 systemd-logind[1208]: Removed session 9.
Dec 13 14:10:07.623359 sshd[3459]: Accepted publickey for core from 10.0.0.1 port 36106 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:10:07.624673 sshd[3459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:10:07.627915 systemd-logind[1208]: New session 10 of user core.
Dec 13 14:10:07.628767 systemd[1]: Started session-10.scope.
Dec 13 14:10:07.775576 sshd[3459]: pam_unix(sshd:session): session closed for user core
Dec 13 14:10:07.779916 systemd[1]: Started sshd@10-10.0.0.75:22-10.0.0.1:36120.service.
Dec 13 14:10:07.781908 systemd[1]: sshd@9-10.0.0.75:22-10.0.0.1:36106.service: Deactivated successfully.
Dec 13 14:10:07.782557 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 14:10:07.785222 systemd-logind[1208]: Session 10 logged out. Waiting for processes to exit.
Dec 13 14:10:07.786316 systemd-logind[1208]: Removed session 10.
Dec 13 14:10:07.815733 sshd[3471]: Accepted publickey for core from 10.0.0.1 port 36120 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:10:07.817258 sshd[3471]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:10:07.821750 systemd-logind[1208]: New session 11 of user core.
Dec 13 14:10:07.822187 systemd[1]: Started session-11.scope.
Dec 13 14:10:07.932938 sshd[3471]: pam_unix(sshd:session): session closed for user core
Dec 13 14:10:07.935534 systemd[1]: sshd@10-10.0.0.75:22-10.0.0.1:36120.service: Deactivated successfully.
Dec 13 14:10:07.936201 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 14:10:07.936824 systemd-logind[1208]: Session 11 logged out. Waiting for processes to exit.
Dec 13 14:10:07.937484 systemd-logind[1208]: Removed session 11.
Dec 13 14:10:12.939141 systemd[1]: Started sshd@11-10.0.0.75:22-10.0.0.1:44734.service.
Dec 13 14:10:12.973322 sshd[3488]: Accepted publickey for core from 10.0.0.1 port 44734 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:10:12.974673 sshd[3488]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:10:12.977993 systemd-logind[1208]: New session 12 of user core.
Dec 13 14:10:12.978852 systemd[1]: Started session-12.scope.
Dec 13 14:10:13.090680 sshd[3488]: pam_unix(sshd:session): session closed for user core
Dec 13 14:10:13.093119 systemd[1]: sshd@11-10.0.0.75:22-10.0.0.1:44734.service: Deactivated successfully.
Dec 13 14:10:13.093839 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 14:10:13.094336 systemd-logind[1208]: Session 12 logged out. Waiting for processes to exit.
Dec 13 14:10:13.094988 systemd-logind[1208]: Removed session 12.
Dec 13 14:10:18.095160 systemd[1]: Started sshd@12-10.0.0.75:22-10.0.0.1:44740.service.
Dec 13 14:10:18.128518 sshd[3501]: Accepted publickey for core from 10.0.0.1 port 44740 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:10:18.129772 sshd[3501]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:10:18.132974 systemd-logind[1208]: New session 13 of user core.
Dec 13 14:10:18.133845 systemd[1]: Started session-13.scope.
Dec 13 14:10:18.244831 sshd[3501]: pam_unix(sshd:session): session closed for user core
Dec 13 14:10:18.248524 systemd[1]: Started sshd@13-10.0.0.75:22-10.0.0.1:44744.service.
Dec 13 14:10:18.249039 systemd[1]: sshd@12-10.0.0.75:22-10.0.0.1:44740.service: Deactivated successfully.
Dec 13 14:10:18.249697 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 14:10:18.250244 systemd-logind[1208]: Session 13 logged out. Waiting for processes to exit.
Dec 13 14:10:18.251276 systemd-logind[1208]: Removed session 13.
Dec 13 14:10:18.284598 sshd[3513]: Accepted publickey for core from 10.0.0.1 port 44744 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:10:18.285869 sshd[3513]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:10:18.289105 systemd-logind[1208]: New session 14 of user core.
Dec 13 14:10:18.289952 systemd[1]: Started session-14.scope.
Dec 13 14:10:18.469864 sshd[3513]: pam_unix(sshd:session): session closed for user core
Dec 13 14:10:18.473211 systemd[1]: Started sshd@14-10.0.0.75:22-10.0.0.1:44758.service.
Dec 13 14:10:18.473707 systemd[1]: sshd@13-10.0.0.75:22-10.0.0.1:44744.service: Deactivated successfully.
Dec 13 14:10:18.474518 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 14:10:18.475125 systemd-logind[1208]: Session 14 logged out. Waiting for processes to exit.
Dec 13 14:10:18.476002 systemd-logind[1208]: Removed session 14.
Dec 13 14:10:18.505169 sshd[3524]: Accepted publickey for core from 10.0.0.1 port 44758 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:10:18.506363 sshd[3524]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:10:18.510385 systemd[1]: Started session-15.scope.
Dec 13 14:10:18.510465 systemd-logind[1208]: New session 15 of user core.
Dec 13 14:10:19.802982 sshd[3524]: pam_unix(sshd:session): session closed for user core
Dec 13 14:10:19.805689 systemd[1]: sshd@14-10.0.0.75:22-10.0.0.1:44758.service: Deactivated successfully.
Dec 13 14:10:19.806274 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 14:10:19.806871 systemd-logind[1208]: Session 15 logged out. Waiting for processes to exit.
Dec 13 14:10:19.807895 systemd[1]: Started sshd@15-10.0.0.75:22-10.0.0.1:44760.service.
Dec 13 14:10:19.808608 systemd-logind[1208]: Removed session 15.
Dec 13 14:10:19.849761 sshd[3542]: Accepted publickey for core from 10.0.0.1 port 44760 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:10:19.850558 sshd[3542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:10:19.854027 systemd-logind[1208]: New session 16 of user core.
Dec 13 14:10:19.854865 systemd[1]: Started session-16.scope.
Dec 13 14:10:20.069107 sshd[3542]: pam_unix(sshd:session): session closed for user core
Dec 13 14:10:20.072710 systemd[1]: sshd@15-10.0.0.75:22-10.0.0.1:44760.service: Deactivated successfully.
Dec 13 14:10:20.073297 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 14:10:20.074862 systemd-logind[1208]: Session 16 logged out. Waiting for processes to exit.
Dec 13 14:10:20.076052 systemd[1]: Started sshd@16-10.0.0.75:22-10.0.0.1:44772.service.
Dec 13 14:10:20.077885 systemd-logind[1208]: Removed session 16.
Dec 13 14:10:20.109552 sshd[3556]: Accepted publickey for core from 10.0.0.1 port 44772 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:10:20.111035 sshd[3556]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:10:20.115558 systemd-logind[1208]: New session 17 of user core.
Dec 13 14:10:20.115987 systemd[1]: Started session-17.scope.
Dec 13 14:10:20.230031 sshd[3556]: pam_unix(sshd:session): session closed for user core
Dec 13 14:10:20.233852 systemd[1]: sshd@16-10.0.0.75:22-10.0.0.1:44772.service: Deactivated successfully.
Dec 13 14:10:20.234606 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 14:10:20.235052 systemd-logind[1208]: Session 17 logged out. Waiting for processes to exit.
Dec 13 14:10:20.235719 systemd-logind[1208]: Removed session 17.
Dec 13 14:10:25.234454 systemd[1]: Started sshd@17-10.0.0.75:22-10.0.0.1:35630.service.
Dec 13 14:10:25.269969 sshd[3575]: Accepted publickey for core from 10.0.0.1 port 35630 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:10:25.271600 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:10:25.275479 systemd-logind[1208]: New session 18 of user core.
Dec 13 14:10:25.275770 systemd[1]: Started session-18.scope.
Dec 13 14:10:25.395082 sshd[3575]: pam_unix(sshd:session): session closed for user core
Dec 13 14:10:25.398475 systemd[1]: sshd@17-10.0.0.75:22-10.0.0.1:35630.service: Deactivated successfully.
Dec 13 14:10:25.399128 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 14:10:25.399805 systemd-logind[1208]: Session 18 logged out. Waiting for processes to exit.
Dec 13 14:10:25.400893 systemd-logind[1208]: Removed session 18.
Dec 13 14:10:30.399241 systemd[1]: Started sshd@18-10.0.0.75:22-10.0.0.1:35632.service.
Dec 13 14:10:30.432701 sshd[3589]: Accepted publickey for core from 10.0.0.1 port 35632 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:10:30.433848 sshd[3589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:10:30.437419 systemd-logind[1208]: New session 19 of user core.
Dec 13 14:10:30.438130 systemd[1]: Started session-19.scope.
Dec 13 14:10:30.553684 sshd[3589]: pam_unix(sshd:session): session closed for user core
Dec 13 14:10:30.556298 systemd[1]: sshd@18-10.0.0.75:22-10.0.0.1:35632.service: Deactivated successfully.
Dec 13 14:10:30.557031 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 14:10:30.557582 systemd-logind[1208]: Session 19 logged out. Waiting for processes to exit.
Dec 13 14:10:30.558230 systemd-logind[1208]: Removed session 19.
Dec 13 14:10:35.558300 systemd[1]: Started sshd@19-10.0.0.75:22-10.0.0.1:47732.service.
Dec 13 14:10:35.591377 sshd[3602]: Accepted publickey for core from 10.0.0.1 port 47732 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:10:35.592536 sshd[3602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:10:35.595700 systemd-logind[1208]: New session 20 of user core.
Dec 13 14:10:35.596479 systemd[1]: Started session-20.scope.
Dec 13 14:10:35.708047 sshd[3602]: pam_unix(sshd:session): session closed for user core
Dec 13 14:10:35.711663 systemd[1]: Started sshd@20-10.0.0.75:22-10.0.0.1:47744.service.
Dec 13 14:10:35.712228 systemd[1]: sshd@19-10.0.0.75:22-10.0.0.1:47732.service: Deactivated successfully.
Dec 13 14:10:35.712868 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 14:10:35.713332 systemd-logind[1208]: Session 20 logged out. Waiting for processes to exit.
Dec 13 14:10:35.714093 systemd-logind[1208]: Removed session 20.
Dec 13 14:10:35.745559 sshd[3614]: Accepted publickey for core from 10.0.0.1 port 47744 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:10:35.746683 sshd[3614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:10:35.749614 systemd-logind[1208]: New session 21 of user core.
Dec 13 14:10:35.750340 systemd[1]: Started session-21.scope.
Dec 13 14:10:38.534893 kubelet[2030]: I1213 14:10:38.534833 2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jhj4k" podStartSLOduration=60.53481913 podStartE2EDuration="1m0.53481913s" podCreationTimestamp="2024-12-13 14:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:09:57.787012324 +0000 UTC m=+33.270046701" watchObservedRunningTime="2024-12-13 14:10:38.53481913 +0000 UTC m=+74.017853507"
Dec 13 14:10:38.538070 env[1219]: time="2024-12-13T14:10:38.538030988Z" level=info msg="StopContainer for \"23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea\" with timeout 30 (s)"
Dec 13 14:10:38.538350 env[1219]: time="2024-12-13T14:10:38.538328470Z" level=info msg="Stop container \"23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea\" with signal terminated"
Dec 13 14:10:38.554709 systemd[1]: cri-containerd-23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea.scope: Deactivated successfully.
Dec 13 14:10:38.556732 systemd[1]: run-containerd-runc-k8s.io-f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4-runc.9KOGsm.mount: Deactivated successfully.
Dec 13 14:10:38.579116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea-rootfs.mount: Deactivated successfully.
Dec 13 14:10:38.593687 env[1219]: time="2024-12-13T14:10:38.593622184Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:10:38.595444 env[1219]: time="2024-12-13T14:10:38.595327714Z" level=info msg="shim disconnected" id=23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea
Dec 13 14:10:38.595444 env[1219]: time="2024-12-13T14:10:38.595358194Z" level=warning msg="cleaning up after shim disconnected" id=23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea namespace=k8s.io
Dec 13 14:10:38.595444 env[1219]: time="2024-12-13T14:10:38.595369034Z" level=info msg="cleaning up dead shim"
Dec 13 14:10:38.599824 env[1219]: time="2024-12-13T14:10:38.599795779Z" level=info msg="StopContainer for \"f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4\" with timeout 2 (s)"
Dec 13 14:10:38.600070 env[1219]: time="2024-12-13T14:10:38.600049741Z" level=info msg="Stop container \"f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4\" with signal terminated"
Dec 13 14:10:38.604757 systemd-networkd[1047]: lxc_health: Link DOWN
Dec 13 14:10:38.604763 systemd-networkd[1047]: lxc_health: Lost carrier
Dec 13 14:10:38.606343 env[1219]: time="2024-12-13T14:10:38.606294296Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:10:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3663 runtime=io.containerd.runc.v2\n"
Dec 13 14:10:38.608154 env[1219]: time="2024-12-13T14:10:38.608115227Z" level=info msg="StopContainer for \"23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea\" returns successfully"
Dec 13 14:10:38.608698 env[1219]: time="2024-12-13T14:10:38.608658830Z" level=info msg="StopPodSandbox for \"35a0b9112460dcc12689fe5df558a5147fb1c7aae59597579d5c2d0449eac4b2\""
Dec 13 14:10:38.608755 env[1219]: time="2024-12-13T14:10:38.608721430Z" level=info msg="Container to stop \"23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:10:38.610321 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-35a0b9112460dcc12689fe5df558a5147fb1c7aae59597579d5c2d0449eac4b2-shm.mount: Deactivated successfully.
Dec 13 14:10:38.616977 systemd[1]: cri-containerd-35a0b9112460dcc12689fe5df558a5147fb1c7aae59597579d5c2d0449eac4b2.scope: Deactivated successfully.
Dec 13 14:10:38.640958 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35a0b9112460dcc12689fe5df558a5147fb1c7aae59597579d5c2d0449eac4b2-rootfs.mount: Deactivated successfully.
Dec 13 14:10:38.641606 systemd[1]: cri-containerd-f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4.scope: Deactivated successfully.
Dec 13 14:10:38.641899 systemd[1]: cri-containerd-f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4.scope: Consumed 6.384s CPU time.
Dec 13 14:10:38.656014 env[1219]: time="2024-12-13T14:10:38.655958619Z" level=info msg="shim disconnected" id=35a0b9112460dcc12689fe5df558a5147fb1c7aae59597579d5c2d0449eac4b2
Dec 13 14:10:38.656014 env[1219]: time="2024-12-13T14:10:38.656014299Z" level=warning msg="cleaning up after shim disconnected" id=35a0b9112460dcc12689fe5df558a5147fb1c7aae59597579d5c2d0449eac4b2 namespace=k8s.io
Dec 13 14:10:38.656201 env[1219]: time="2024-12-13T14:10:38.656023419Z" level=info msg="cleaning up dead shim"
Dec 13 14:10:38.660340 env[1219]: time="2024-12-13T14:10:38.660293363Z" level=info msg="shim disconnected" id=f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4
Dec 13 14:10:38.660340 env[1219]: time="2024-12-13T14:10:38.660341004Z" level=warning msg="cleaning up after shim disconnected" id=f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4 namespace=k8s.io
Dec 13 14:10:38.660539 env[1219]: time="2024-12-13T14:10:38.660351964Z" level=info msg="cleaning up dead shim"
Dec 13 14:10:38.663985 env[1219]: time="2024-12-13T14:10:38.663950304Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:10:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3715 runtime=io.containerd.runc.v2\n"
Dec 13 14:10:38.664261 env[1219]: time="2024-12-13T14:10:38.664235266Z" level=info msg="TearDown network for sandbox \"35a0b9112460dcc12689fe5df558a5147fb1c7aae59597579d5c2d0449eac4b2\" successfully"
Dec 13 14:10:38.664298 env[1219]: time="2024-12-13T14:10:38.664260506Z" level=info msg="StopPodSandbox for \"35a0b9112460dcc12689fe5df558a5147fb1c7aae59597579d5c2d0449eac4b2\" returns successfully"
Dec 13 14:10:38.670011 env[1219]: time="2024-12-13T14:10:38.669960018Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:10:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3723 runtime=io.containerd.runc.v2\n"
Dec 13 14:10:38.672045 env[1219]: time="2024-12-13T14:10:38.672016030Z" level=info msg="StopContainer for \"f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4\" returns successfully"
Dec 13 14:10:38.672413 env[1219]: time="2024-12-13T14:10:38.672373512Z" level=info msg="StopPodSandbox for \"3e0be226a27a92a23113ca746a0d5776ce1856f5f203c56710e79b539d1623c5\""
Dec 13 14:10:38.672491 env[1219]: time="2024-12-13T14:10:38.672469753Z" level=info msg="Container to stop \"d4b8d6fc6f27913e785b7cce767d454e73b132271f22b98e390836db3f301232\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:10:38.672528 env[1219]: time="2024-12-13T14:10:38.672490273Z" level=info msg="Container to stop \"f98924e745f2cdd5e78fffae40e4ac0622c3c9e250461874ffd75d075ca9cca6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:10:38.672528 env[1219]: time="2024-12-13T14:10:38.672503473Z" level=info msg="Container to stop \"acabe7b314c6b90925125ae09f400f92dd67684fb282383acafacddc5f84d234\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:10:38.672578 env[1219]: time="2024-12-13T14:10:38.672527113Z" level=info msg="Container to stop \"b88d33cb268dcc791fb7c3a4e477bd779d7366234cef081bc330f81d0263ead9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:10:38.672578 env[1219]: time="2024-12-13T14:10:38.672538593Z" level=info msg="Container to stop \"f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:10:38.677156 systemd[1]: cri-containerd-3e0be226a27a92a23113ca746a0d5776ce1856f5f203c56710e79b539d1623c5.scope: Deactivated successfully.
Dec 13 14:10:38.709022 env[1219]: time="2024-12-13T14:10:38.708957760Z" level=info msg="shim disconnected" id=3e0be226a27a92a23113ca746a0d5776ce1856f5f203c56710e79b539d1623c5
Dec 13 14:10:38.709022 env[1219]: time="2024-12-13T14:10:38.709008720Z" level=warning msg="cleaning up after shim disconnected" id=3e0be226a27a92a23113ca746a0d5776ce1856f5f203c56710e79b539d1623c5 namespace=k8s.io
Dec 13 14:10:38.709022 env[1219]: time="2024-12-13T14:10:38.709018560Z" level=info msg="cleaning up dead shim"
Dec 13 14:10:38.715416 env[1219]: time="2024-12-13T14:10:38.715263676Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:10:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3757 runtime=io.containerd.runc.v2\n"
Dec 13 14:10:38.715575 env[1219]: time="2024-12-13T14:10:38.715547957Z" level=info msg="TearDown network for sandbox \"3e0be226a27a92a23113ca746a0d5776ce1856f5f203c56710e79b539d1623c5\" successfully"
Dec 13 14:10:38.715608 env[1219]: time="2024-12-13T14:10:38.715593398Z" level=info msg="StopPodSandbox for \"3e0be226a27a92a23113ca746a0d5776ce1856f5f203c56710e79b539d1623c5\" returns successfully"
Dec 13 14:10:38.739925 kubelet[2030]: I1213 14:10:38.739883 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h86tp\" (UniqueName: \"kubernetes.io/projected/dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b-kube-api-access-h86tp\") pod \"dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b\" (UID: \"dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b\") "
Dec 13 14:10:38.740156 kubelet[2030]: I1213 14:10:38.739935 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b-cilium-config-path\") pod \"dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b\" (UID: \"dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b\") "
Dec 13 14:10:38.744266 kubelet[2030]: I1213 14:10:38.744229 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b" (UID: "dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:10:38.745785 kubelet[2030]: I1213 14:10:38.745751 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b-kube-api-access-h86tp" (OuterVolumeSpecName: "kube-api-access-h86tp") pod "dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b" (UID: "dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b"). InnerVolumeSpecName "kube-api-access-h86tp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:10:38.787427 kubelet[2030]: I1213 14:10:38.787332 2030 scope.go:117] "RemoveContainer" containerID="f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4"
Dec 13 14:10:38.790521 env[1219]: time="2024-12-13T14:10:38.789161336Z" level=info msg="RemoveContainer for \"f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4\""
Dec 13 14:10:38.792983 systemd[1]: Removed slice kubepods-besteffort-poddbbb2e72_4a59_4b3a_b5cb_1dce4c5d701b.slice.
Dec 13 14:10:38.794221 env[1219]: time="2024-12-13T14:10:38.794187844Z" level=info msg="RemoveContainer for \"f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4\" returns successfully"
Dec 13 14:10:38.794451 kubelet[2030]: I1213 14:10:38.794430 2030 scope.go:117] "RemoveContainer" containerID="b88d33cb268dcc791fb7c3a4e477bd779d7366234cef081bc330f81d0263ead9"
Dec 13 14:10:38.795983 env[1219]: time="2024-12-13T14:10:38.795952374Z" level=info msg="RemoveContainer for \"b88d33cb268dcc791fb7c3a4e477bd779d7366234cef081bc330f81d0263ead9\""
Dec 13 14:10:38.799398 env[1219]: time="2024-12-13T14:10:38.799355474Z" level=info msg="RemoveContainer for \"b88d33cb268dcc791fb7c3a4e477bd779d7366234cef081bc330f81d0263ead9\" returns successfully"
Dec 13 14:10:38.799586 kubelet[2030]: I1213 14:10:38.799570 2030 scope.go:117] "RemoveContainer" containerID="acabe7b314c6b90925125ae09f400f92dd67684fb282383acafacddc5f84d234"
Dec 13 14:10:38.801466 env[1219]: time="2024-12-13T14:10:38.801434286Z" level=info msg="RemoveContainer for \"acabe7b314c6b90925125ae09f400f92dd67684fb282383acafacddc5f84d234\""
Dec 13 14:10:38.803863 env[1219]: time="2024-12-13T14:10:38.803835059Z" level=info msg="RemoveContainer for \"acabe7b314c6b90925125ae09f400f92dd67684fb282383acafacddc5f84d234\" returns successfully"
Dec 13 14:10:38.804011 kubelet[2030]: I1213 14:10:38.803993 2030 scope.go:117] "RemoveContainer" containerID="f98924e745f2cdd5e78fffae40e4ac0622c3c9e250461874ffd75d075ca9cca6"
Dec 13 14:10:38.805169 env[1219]: time="2024-12-13T14:10:38.805133987Z" level=info msg="RemoveContainer for \"f98924e745f2cdd5e78fffae40e4ac0622c3c9e250461874ffd75d075ca9cca6\""
Dec 13 14:10:38.807012 env[1219]: time="2024-12-13T14:10:38.806981557Z" level=info msg="RemoveContainer for \"f98924e745f2cdd5e78fffae40e4ac0622c3c9e250461874ffd75d075ca9cca6\" returns successfully"
Dec 13 14:10:38.807208 kubelet[2030]: I1213 14:10:38.807178 2030 scope.go:117] "RemoveContainer" containerID="d4b8d6fc6f27913e785b7cce767d454e73b132271f22b98e390836db3f301232"
Dec 13 14:10:38.808154 env[1219]: time="2024-12-13T14:10:38.808132524Z" level=info msg="RemoveContainer for \"d4b8d6fc6f27913e785b7cce767d454e73b132271f22b98e390836db3f301232\""
Dec 13 14:10:38.810231 env[1219]: time="2024-12-13T14:10:38.810207615Z" level=info msg="RemoveContainer for \"d4b8d6fc6f27913e785b7cce767d454e73b132271f22b98e390836db3f301232\" returns successfully"
Dec 13 14:10:38.810410 kubelet[2030]: I1213 14:10:38.810384 2030 scope.go:117] "RemoveContainer" containerID="f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4"
Dec 13 14:10:38.810683 env[1219]: time="2024-12-13T14:10:38.810623738Z" level=error msg="ContainerStatus for \"f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4\": not found"
Dec 13 14:10:38.811642 kubelet[2030]: E1213 14:10:38.811614 2030 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4\": not found" containerID="f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4"
Dec 13 14:10:38.811814 kubelet[2030]: I1213 14:10:38.811735 2030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4"} err="failed to get container status \"f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4\": rpc error: code = NotFound desc = an error occurred when try to find container \"f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4\": not found"
Dec 13 14:10:38.811891 kubelet[2030]: I1213 14:10:38.811880 2030 scope.go:117] "RemoveContainer" containerID="b88d33cb268dcc791fb7c3a4e477bd779d7366234cef081bc330f81d0263ead9"
Dec 13 14:10:38.812131 env[1219]: time="2024-12-13T14:10:38.812093666Z" level=error msg="ContainerStatus for \"b88d33cb268dcc791fb7c3a4e477bd779d7366234cef081bc330f81d0263ead9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b88d33cb268dcc791fb7c3a4e477bd779d7366234cef081bc330f81d0263ead9\": not found"
Dec 13 14:10:38.812228 kubelet[2030]: E1213 14:10:38.812210 2030 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b88d33cb268dcc791fb7c3a4e477bd779d7366234cef081bc330f81d0263ead9\": not found" containerID="b88d33cb268dcc791fb7c3a4e477bd779d7366234cef081bc330f81d0263ead9"
Dec 13 14:10:38.812284 kubelet[2030]: I1213 14:10:38.812234 2030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b88d33cb268dcc791fb7c3a4e477bd779d7366234cef081bc330f81d0263ead9"} err="failed to get container status \"b88d33cb268dcc791fb7c3a4e477bd779d7366234cef081bc330f81d0263ead9\": rpc error: code = NotFound desc = an error occurred when try to find container \"b88d33cb268dcc791fb7c3a4e477bd779d7366234cef081bc330f81d0263ead9\": not found"
Dec 13 14:10:38.812284 kubelet[2030]: I1213 14:10:38.812259 2030 scope.go:117] "RemoveContainer" containerID="acabe7b314c6b90925125ae09f400f92dd67684fb282383acafacddc5f84d234"
Dec 13 14:10:38.812496 env[1219]: time="2024-12-13T14:10:38.812441988Z" level=error msg="ContainerStatus for \"acabe7b314c6b90925125ae09f400f92dd67684fb282383acafacddc5f84d234\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"acabe7b314c6b90925125ae09f400f92dd67684fb282383acafacddc5f84d234\": not found"
Dec 13 14:10:38.812644 kubelet[2030]: E1213 14:10:38.812621 2030 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"acabe7b314c6b90925125ae09f400f92dd67684fb282383acafacddc5f84d234\": not found" containerID="acabe7b314c6b90925125ae09f400f92dd67684fb282383acafacddc5f84d234"
Dec 13 14:10:38.812735 kubelet[2030]: I1213 14:10:38.812716 2030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"acabe7b314c6b90925125ae09f400f92dd67684fb282383acafacddc5f84d234"} err="failed to get container status \"acabe7b314c6b90925125ae09f400f92dd67684fb282383acafacddc5f84d234\": rpc error: code = NotFound desc = an error occurred when try to find container \"acabe7b314c6b90925125ae09f400f92dd67684fb282383acafacddc5f84d234\": not found"
Dec 13 14:10:38.812800 kubelet[2030]: I1213 14:10:38.812790 2030 scope.go:117] "RemoveContainer" containerID="f98924e745f2cdd5e78fffae40e4ac0622c3c9e250461874ffd75d075ca9cca6"
Dec 13 14:10:38.813062 env[1219]: time="2024-12-13T14:10:38.813013911Z" level=error msg="ContainerStatus for \"f98924e745f2cdd5e78fffae40e4ac0622c3c9e250461874ffd75d075ca9cca6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f98924e745f2cdd5e78fffae40e4ac0622c3c9e250461874ffd75d075ca9cca6\": not found"
Dec 13 14:10:38.813171 kubelet[2030]: E1213 14:10:38.813153 2030 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f98924e745f2cdd5e78fffae40e4ac0622c3c9e250461874ffd75d075ca9cca6\": not found" containerID="f98924e745f2cdd5e78fffae40e4ac0622c3c9e250461874ffd75d075ca9cca6"
Dec 13 14:10:38.813216 kubelet[2030]: I1213 14:10:38.813175 2030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f98924e745f2cdd5e78fffae40e4ac0622c3c9e250461874ffd75d075ca9cca6"} err="failed to get container status \"f98924e745f2cdd5e78fffae40e4ac0622c3c9e250461874ffd75d075ca9cca6\": rpc error: code = NotFound desc = an error occurred when try to find container \"f98924e745f2cdd5e78fffae40e4ac0622c3c9e250461874ffd75d075ca9cca6\": not found"
Dec 13 14:10:38.813216 kubelet[2030]: I1213 14:10:38.813191 2030 scope.go:117] "RemoveContainer" containerID="d4b8d6fc6f27913e785b7cce767d454e73b132271f22b98e390836db3f301232"
Dec 13 14:10:38.813449 env[1219]: time="2024-12-13T14:10:38.813355393Z" level=error msg="ContainerStatus for \"d4b8d6fc6f27913e785b7cce767d454e73b132271f22b98e390836db3f301232\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d4b8d6fc6f27913e785b7cce767d454e73b132271f22b98e390836db3f301232\": not found"
Dec 13 14:10:38.813597 kubelet[2030]: E1213 14:10:38.813578 2030 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d4b8d6fc6f27913e785b7cce767d454e73b132271f22b98e390836db3f301232\": not found" containerID="d4b8d6fc6f27913e785b7cce767d454e73b132271f22b98e390836db3f301232"
Dec 13 14:10:38.813643 kubelet[2030]: I1213 14:10:38.813600 2030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d4b8d6fc6f27913e785b7cce767d454e73b132271f22b98e390836db3f301232"} err="failed to get container status \"d4b8d6fc6f27913e785b7cce767d454e73b132271f22b98e390836db3f301232\": rpc error: code = NotFound desc = an error occurred when try to find container \"d4b8d6fc6f27913e785b7cce767d454e73b132271f22b98e390836db3f301232\": not found"
Dec 13 14:10:38.813643 kubelet[2030]: I1213 14:10:38.813624 2030 scope.go:117] "RemoveContainer" containerID="23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea"
Dec 13 14:10:38.814642 env[1219]: time="2024-12-13T14:10:38.814620481Z" level=info msg="RemoveContainer for \"23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea\""
Dec 13 14:10:38.817091 env[1219]: time="2024-12-13T14:10:38.817060574Z" level=info msg="RemoveContainer for \"23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea\" returns successfully"
Dec 13 14:10:38.817328 kubelet[2030]: I1213 14:10:38.817305 2030 scope.go:117] "RemoveContainer" containerID="23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea"
Dec 13 14:10:38.817674 env[1219]: time="2024-12-13T14:10:38.817619898Z" level=error msg="ContainerStatus for \"23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea\": not found"
Dec 13 14:10:38.817781 kubelet[2030]: E1213 14:10:38.817763 2030 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea\": not found" containerID="23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea"
Dec 13 14:10:38.817828 kubelet[2030]: I1213 14:10:38.817796 2030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea"} err="failed to get container status \"23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea\": rpc error: code = NotFound desc = an error occurred when try to find container \"23b522c229b54b0b3012df82d6cf62c9b69685f4cffaac2d0de058c8bcf2ceea\": not found"
Dec 13 14:10:38.840274 kubelet[2030]: I1213 14:10:38.840253 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae133714-0435-4788-b05d-6f6c02453ab1-hubble-tls\") pod \"ae133714-0435-4788-b05d-6f6c02453ab1\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") "
Dec 13 14:10:38.840651 kubelet[2030]: I1213 14:10:38.840635 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-host-proc-sys-kernel\") pod \"ae133714-0435-4788-b05d-6f6c02453ab1\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") "
Dec 13 14:10:38.840785 kubelet[2030]: I1213 14:10:38.840731 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ae133714-0435-4788-b05d-6f6c02453ab1" (UID: "ae133714-0435-4788-b05d-6f6c02453ab1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:10:38.840837 kubelet[2030]: I1213 14:10:38.840767 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-hostproc\") pod \"ae133714-0435-4788-b05d-6f6c02453ab1\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") "
Dec 13 14:10:38.840837 kubelet[2030]: I1213 14:10:38.840829 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-cilium-cgroup\") pod \"ae133714-0435-4788-b05d-6f6c02453ab1\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") "
Dec 13 14:10:38.840895 kubelet[2030]: I1213 14:10:38.840851 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae133714-0435-4788-b05d-6f6c02453ab1-cilium-config-path\") pod \"ae133714-0435-4788-b05d-6f6c02453ab1\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") "
Dec 13 14:10:38.840895 kubelet[2030]: I1213 14:10:38.840868 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-cilium-run\") pod \"ae133714-0435-4788-b05d-6f6c02453ab1\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") "
Dec 13 14:10:38.840948 kubelet[2030]: I1213 14:10:38.840896 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ae133714-0435-4788-b05d-6f6c02453ab1" (UID: "ae133714-0435-4788-b05d-6f6c02453ab1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:10:38.841011 kubelet[2030]: I1213 14:10:38.840995 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-hostproc" (OuterVolumeSpecName: "hostproc") pod "ae133714-0435-4788-b05d-6f6c02453ab1" (UID: "ae133714-0435-4788-b05d-6f6c02453ab1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:10:38.841103 kubelet[2030]: I1213 14:10:38.841071 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ae133714-0435-4788-b05d-6f6c02453ab1" (UID: "ae133714-0435-4788-b05d-6f6c02453ab1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:10:38.842715 kubelet[2030]: I1213 14:10:38.842684 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae133714-0435-4788-b05d-6f6c02453ab1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ae133714-0435-4788-b05d-6f6c02453ab1" (UID: "ae133714-0435-4788-b05d-6f6c02453ab1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:10:38.842768 kubelet[2030]: I1213 14:10:38.842746 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmfdg\" (UniqueName: \"kubernetes.io/projected/ae133714-0435-4788-b05d-6f6c02453ab1-kube-api-access-dmfdg\") pod \"ae133714-0435-4788-b05d-6f6c02453ab1\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") "
Dec 13 14:10:38.842798 kubelet[2030]: I1213 14:10:38.842768 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-lib-modules\") pod \"ae133714-0435-4788-b05d-6f6c02453ab1\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") "
Dec 13 14:10:38.842798 kubelet[2030]: I1213 14:10:38.842784 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-cni-path\") pod \"ae133714-0435-4788-b05d-6f6c02453ab1\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") "
Dec 13 14:10:38.842926 kubelet[2030]: I1213 14:10:38.842871 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ae133714-0435-4788-b05d-6f6c02453ab1" (UID: "ae133714-0435-4788-b05d-6f6c02453ab1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:10:38.842972 kubelet[2030]: I1213 14:10:38.842949 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae133714-0435-4788-b05d-6f6c02453ab1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ae133714-0435-4788-b05d-6f6c02453ab1" (UID: "ae133714-0435-4788-b05d-6f6c02453ab1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:10:38.842997 kubelet[2030]: I1213 14:10:38.842975 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-cni-path" (OuterVolumeSpecName: "cni-path") pod "ae133714-0435-4788-b05d-6f6c02453ab1" (UID: "ae133714-0435-4788-b05d-6f6c02453ab1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:10:38.843052 kubelet[2030]: I1213 14:10:38.843036 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-host-proc-sys-net\") pod \"ae133714-0435-4788-b05d-6f6c02453ab1\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") "
Dec 13 14:10:38.843085 kubelet[2030]: I1213 14:10:38.843051 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ae133714-0435-4788-b05d-6f6c02453ab1" (UID: "ae133714-0435-4788-b05d-6f6c02453ab1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:10:38.843085 kubelet[2030]: I1213 14:10:38.843058 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-bpf-maps\") pod \"ae133714-0435-4788-b05d-6f6c02453ab1\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") "
Dec 13 14:10:38.843133 kubelet[2030]: I1213 14:10:38.843107 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-etc-cni-netd\") pod \"ae133714-0435-4788-b05d-6f6c02453ab1\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") "
Dec 13 14:10:38.843133 kubelet[2030]: I1213 14:10:38.843122 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ae133714-0435-4788-b05d-6f6c02453ab1" (UID: "ae133714-0435-4788-b05d-6f6c02453ab1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:10:38.843180 kubelet[2030]: I1213 14:10:38.843139 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ae133714-0435-4788-b05d-6f6c02453ab1" (UID: "ae133714-0435-4788-b05d-6f6c02453ab1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:10:38.843180 kubelet[2030]: I1213 14:10:38.843152 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-xtables-lock\") pod \"ae133714-0435-4788-b05d-6f6c02453ab1\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") "
Dec 13 14:10:38.843180 kubelet[2030]: I1213 14:10:38.843175 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae133714-0435-4788-b05d-6f6c02453ab1-clustermesh-secrets\") pod \"ae133714-0435-4788-b05d-6f6c02453ab1\" (UID: \"ae133714-0435-4788-b05d-6f6c02453ab1\") "
Dec 13 14:10:38.843246 kubelet[2030]: I1213 14:10:38.843198 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ae133714-0435-4788-b05d-6f6c02453ab1" (UID: "ae133714-0435-4788-b05d-6f6c02453ab1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:10:38.843246 kubelet[2030]: I1213 14:10:38.843212 2030 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-lib-modules\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:38.843246 kubelet[2030]: I1213 14:10:38.843224 2030 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:38.843246 kubelet[2030]: I1213 14:10:38.843233 2030 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-cni-path\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:38.843246 kubelet[2030]: I1213 14:10:38.843240 2030 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:38.843246 kubelet[2030]: I1213 14:10:38.843248 2030 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-bpf-maps\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:38.843445 kubelet[2030]: I1213 14:10:38.843256 2030 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:38.843445 kubelet[2030]: I1213 14:10:38.843265 2030 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-h86tp\" (UniqueName: \"kubernetes.io/projected/dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b-kube-api-access-h86tp\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:38.843445 kubelet[2030]: I1213 14:10:38.843272 2030 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae133714-0435-4788-b05d-6f6c02453ab1-hubble-tls\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:38.843445 kubelet[2030]: I1213 14:10:38.843280 2030 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:38.843445 kubelet[2030]: I1213 14:10:38.843289 2030 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-hostproc\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:38.843445 kubelet[2030]: I1213 14:10:38.843296 2030 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:38.843445 kubelet[2030]: I1213 14:10:38.843304 2030 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae133714-0435-4788-b05d-6f6c02453ab1-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:38.843445 kubelet[2030]: I1213 14:10:38.843320 2030 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-cilium-run\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:38.845105 kubelet[2030]: I1213 14:10:38.845068 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae133714-0435-4788-b05d-6f6c02453ab1-kube-api-access-dmfdg" (OuterVolumeSpecName: "kube-api-access-dmfdg") pod "ae133714-0435-4788-b05d-6f6c02453ab1" (UID: "ae133714-0435-4788-b05d-6f6c02453ab1"). InnerVolumeSpecName "kube-api-access-dmfdg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:10:38.845664 kubelet[2030]: I1213 14:10:38.845639 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae133714-0435-4788-b05d-6f6c02453ab1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ae133714-0435-4788-b05d-6f6c02453ab1" (UID: "ae133714-0435-4788-b05d-6f6c02453ab1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:10:38.943970 kubelet[2030]: I1213 14:10:38.943933 2030 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dmfdg\" (UniqueName: \"kubernetes.io/projected/ae133714-0435-4788-b05d-6f6c02453ab1-kube-api-access-dmfdg\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:38.944042 kubelet[2030]: I1213 14:10:38.943982 2030 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae133714-0435-4788-b05d-6f6c02453ab1-xtables-lock\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:38.944042 kubelet[2030]: I1213 14:10:38.943992 2030 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae133714-0435-4788-b05d-6f6c02453ab1-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:39.091518 systemd[1]: Removed slice kubepods-burstable-podae133714_0435_4788_b05d_6f6c02453ab1.slice.
Dec 13 14:10:39.091610 systemd[1]: kubepods-burstable-podae133714_0435_4788_b05d_6f6c02453ab1.slice: Consumed 6.589s CPU time.
Dec 13 14:10:39.552424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f44341c8593830a75b3d99afa57823ed5ca66deea192af87e6141b569c726ca4-rootfs.mount: Deactivated successfully.
Dec 13 14:10:39.552513 systemd[1]: var-lib-kubelet-pods-dbbb2e72\x2d4a59\x2d4b3a\x2db5cb\x2d1dce4c5d701b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh86tp.mount: Deactivated successfully.
Dec 13 14:10:39.552573 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e0be226a27a92a23113ca746a0d5776ce1856f5f203c56710e79b539d1623c5-rootfs.mount: Deactivated successfully.
Dec 13 14:10:39.552627 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e0be226a27a92a23113ca746a0d5776ce1856f5f203c56710e79b539d1623c5-shm.mount: Deactivated successfully.
Dec 13 14:10:39.552677 systemd[1]: var-lib-kubelet-pods-ae133714\x2d0435\x2d4788\x2db05d\x2d6f6c02453ab1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddmfdg.mount: Deactivated successfully.
Dec 13 14:10:39.552727 systemd[1]: var-lib-kubelet-pods-ae133714\x2d0435\x2d4788\x2db05d\x2d6f6c02453ab1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:10:39.552775 systemd[1]: var-lib-kubelet-pods-ae133714\x2d0435\x2d4788\x2db05d\x2d6f6c02453ab1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 14:10:39.664116 kubelet[2030]: E1213 14:10:39.664071 2030 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:10:40.504135 sshd[3614]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:40.506909 systemd[1]: sshd@20-10.0.0.75:22-10.0.0.1:47744.service: Deactivated successfully. Dec 13 14:10:40.507493 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 14:10:40.507638 systemd[1]: session-21.scope: Consumed 2.111s CPU time. Dec 13 14:10:40.508061 systemd-logind[1208]: Session 21 logged out. Waiting for processes to exit. Dec 13 14:10:40.509159 systemd[1]: Started sshd@21-10.0.0.75:22-10.0.0.1:47748.service. Dec 13 14:10:40.509887 systemd-logind[1208]: Removed session 21. Dec 13 14:10:40.541804 sshd[3777]: Accepted publickey for core from 10.0.0.1 port 47748 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:10:40.543131 sshd[3777]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:40.546561 systemd-logind[1208]: New session 22 of user core. Dec 13 14:10:40.546905 systemd[1]: Started session-22.scope. Dec 13 14:10:40.612426 kubelet[2030]: I1213 14:10:40.612399 2030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae133714-0435-4788-b05d-6f6c02453ab1" path="/var/lib/kubelet/pods/ae133714-0435-4788-b05d-6f6c02453ab1/volumes" Dec 13 14:10:40.613053 kubelet[2030]: I1213 14:10:40.613033 2030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b" path="/var/lib/kubelet/pods/dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b/volumes" Dec 13 14:10:41.600842 sshd[3777]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:41.604201 systemd[1]: Started sshd@22-10.0.0.75:22-10.0.0.1:47762.service. Dec 13 14:10:41.605630 systemd[1]: sshd@21-10.0.0.75:22-10.0.0.1:47748.service: Deactivated successfully. Dec 13 14:10:41.606225 systemd[1]: session-22.scope: Deactivated successfully. 
Dec 13 14:10:41.612049 kubelet[2030]: I1213 14:10:41.612007 2030 topology_manager.go:215] "Topology Admit Handler" podUID="0a2caabb-90ed-49cb-9e1a-096cb399c501" podNamespace="kube-system" podName="cilium-hfjqg" Dec 13 14:10:41.612332 kubelet[2030]: E1213 14:10:41.612133 2030 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b" containerName="cilium-operator" Dec 13 14:10:41.612332 kubelet[2030]: E1213 14:10:41.612144 2030 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae133714-0435-4788-b05d-6f6c02453ab1" containerName="cilium-agent" Dec 13 14:10:41.612332 kubelet[2030]: E1213 14:10:41.612152 2030 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae133714-0435-4788-b05d-6f6c02453ab1" containerName="apply-sysctl-overwrites" Dec 13 14:10:41.612332 kubelet[2030]: E1213 14:10:41.612158 2030 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae133714-0435-4788-b05d-6f6c02453ab1" containerName="mount-bpf-fs" Dec 13 14:10:41.612332 kubelet[2030]: E1213 14:10:41.612164 2030 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae133714-0435-4788-b05d-6f6c02453ab1" containerName="mount-cgroup" Dec 13 14:10:41.612332 kubelet[2030]: E1213 14:10:41.612170 2030 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae133714-0435-4788-b05d-6f6c02453ab1" containerName="clean-cilium-state" Dec 13 14:10:41.612332 kubelet[2030]: I1213 14:10:41.612192 2030 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae133714-0435-4788-b05d-6f6c02453ab1" containerName="cilium-agent" Dec 13 14:10:41.612332 kubelet[2030]: I1213 14:10:41.612199 2030 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbbb2e72-4a59-4b3a-b5cb-1dce4c5d701b" containerName="cilium-operator" Dec 13 14:10:41.617381 systemd-logind[1208]: Session 22 logged out. Waiting for processes to exit. Dec 13 14:10:41.618798 systemd-logind[1208]: Removed session 22. Dec 13 14:10:41.633244 systemd[1]: Created slice kubepods-burstable-pod0a2caabb_90ed_49cb_9e1a_096cb399c501.slice. Dec 13 14:10:41.645094 sshd[3788]: Accepted publickey for core from 10.0.0.1 port 47762 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:10:41.646327 sshd[3788]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:41.651818 systemd[1]: Started session-23.scope. Dec 13 14:10:41.652113 systemd-logind[1208]: New session 23 of user core. 
Dec 13 14:10:41.756794 kubelet[2030]: I1213 14:10:41.756762 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-host-proc-sys-kernel\") pod \"cilium-hfjqg\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") " pod="kube-system/cilium-hfjqg"
Dec 13 14:10:41.756947 kubelet[2030]: I1213 14:10:41.756933 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-cilium-cgroup\") pod \"cilium-hfjqg\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") " pod="kube-system/cilium-hfjqg"
Dec 13 14:10:41.757012 kubelet[2030]: I1213 14:10:41.757000 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a2caabb-90ed-49cb-9e1a-096cb399c501-clustermesh-secrets\") pod \"cilium-hfjqg\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") " pod="kube-system/cilium-hfjqg"
Dec 13 14:10:41.757098 kubelet[2030]: I1213 14:10:41.757086 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a2caabb-90ed-49cb-9e1a-096cb399c501-cilium-config-path\") pod \"cilium-hfjqg\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") " pod="kube-system/cilium-hfjqg"
Dec 13 14:10:41.757221 kubelet[2030]: I1213 14:10:41.757207 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0a2caabb-90ed-49cb-9e1a-096cb399c501-cilium-ipsec-secrets\") pod \"cilium-hfjqg\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") " pod="kube-system/cilium-hfjqg"
Dec 13 14:10:41.757291 kubelet[2030]: I1213 14:10:41.757279 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-bpf-maps\") pod \"cilium-hfjqg\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") " pod="kube-system/cilium-hfjqg"
Dec 13 14:10:41.757375 kubelet[2030]: I1213 14:10:41.757363 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-xtables-lock\") pod \"cilium-hfjqg\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") " pod="kube-system/cilium-hfjqg"
Dec 13 14:10:41.757472 kubelet[2030]: I1213 14:10:41.757459 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-etc-cni-netd\") pod \"cilium-hfjqg\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") " pod="kube-system/cilium-hfjqg"
Dec 13 14:10:41.757548 kubelet[2030]: I1213 14:10:41.757536 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-host-proc-sys-net\") pod \"cilium-hfjqg\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") " pod="kube-system/cilium-hfjqg"
Dec 13 14:10:41.757611 kubelet[2030]: I1213 14:10:41.757599 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmfzq\" (UniqueName: \"kubernetes.io/projected/0a2caabb-90ed-49cb-9e1a-096cb399c501-kube-api-access-mmfzq\") pod \"cilium-hfjqg\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") " pod="kube-system/cilium-hfjqg"
Dec 13 14:10:41.757675 kubelet[2030]: I1213 14:10:41.757663 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-cni-path\") pod \"cilium-hfjqg\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") " pod="kube-system/cilium-hfjqg"
Dec 13 14:10:41.757741 kubelet[2030]: I1213 14:10:41.757726 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-lib-modules\") pod \"cilium-hfjqg\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") " pod="kube-system/cilium-hfjqg"
Dec 13 14:10:41.757811 kubelet[2030]: I1213 14:10:41.757799 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a2caabb-90ed-49cb-9e1a-096cb399c501-hubble-tls\") pod \"cilium-hfjqg\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") " pod="kube-system/cilium-hfjqg"
Dec 13 14:10:41.757882 kubelet[2030]: I1213 14:10:41.757870 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-cilium-run\") pod \"cilium-hfjqg\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") " pod="kube-system/cilium-hfjqg"
Dec 13 14:10:41.757950 kubelet[2030]: I1213 14:10:41.757938 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-hostproc\") pod \"cilium-hfjqg\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") " pod="kube-system/cilium-hfjqg"
Dec 13 14:10:41.769878 sshd[3788]: pam_unix(sshd:session): session closed for user core
Dec 13 14:10:41.773205 systemd[1]: Started sshd@23-10.0.0.75:22-10.0.0.1:47768.service.
Dec 13 14:10:41.778967 systemd[1]: sshd@22-10.0.0.75:22-10.0.0.1:47762.service: Deactivated successfully.
Dec 13 14:10:41.780071 kubelet[2030]: E1213 14:10:41.779731 2030 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-mmfzq lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-hfjqg" podUID="0a2caabb-90ed-49cb-9e1a-096cb399c501"
Dec 13 14:10:41.779621 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 14:10:41.780177 systemd-logind[1208]: Session 23 logged out. Waiting for processes to exit.
Dec 13 14:10:41.782745 systemd-logind[1208]: Removed session 23.
Dec 13 14:10:41.819268 sshd[3801]: Accepted publickey for core from 10.0.0.1 port 47768 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:10:41.820607 sshd[3801]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:10:41.824448 systemd-logind[1208]: New session 24 of user core.
Dec 13 14:10:41.824454 systemd[1]: Started session-24.scope.
Dec 13 14:10:41.959662 kubelet[2030]: I1213 14:10:41.959618 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a2caabb-90ed-49cb-9e1a-096cb399c501-cilium-config-path\") pod \"0a2caabb-90ed-49cb-9e1a-096cb399c501\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") "
Dec 13 14:10:41.959662 kubelet[2030]: I1213 14:10:41.959662 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-xtables-lock\") pod \"0a2caabb-90ed-49cb-9e1a-096cb399c501\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") "
Dec 13 14:10:41.959849 kubelet[2030]: I1213 14:10:41.959684 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-cilium-cgroup\") pod \"0a2caabb-90ed-49cb-9e1a-096cb399c501\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") "
Dec 13 14:10:41.959849 kubelet[2030]: I1213 14:10:41.959702 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0a2caabb-90ed-49cb-9e1a-096cb399c501-cilium-ipsec-secrets\") pod \"0a2caabb-90ed-49cb-9e1a-096cb399c501\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") "
Dec 13 14:10:41.959849 kubelet[2030]: I1213 14:10:41.959720 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-bpf-maps\") pod \"0a2caabb-90ed-49cb-9e1a-096cb399c501\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") "
Dec 13 14:10:41.959849 kubelet[2030]: I1213 14:10:41.959734 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-host-proc-sys-net\") pod \"0a2caabb-90ed-49cb-9e1a-096cb399c501\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") "
Dec 13 14:10:41.959849 kubelet[2030]: I1213 14:10:41.959748 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-lib-modules\") pod \"0a2caabb-90ed-49cb-9e1a-096cb399c501\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") "
Dec 13 14:10:41.959849 kubelet[2030]: I1213 14:10:41.959761 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-hostproc\") pod \"0a2caabb-90ed-49cb-9e1a-096cb399c501\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") "
Dec 13 14:10:41.959979 kubelet[2030]: I1213 14:10:41.959774 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-host-proc-sys-kernel\") pod \"0a2caabb-90ed-49cb-9e1a-096cb399c501\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") "
Dec 13 14:10:41.959979 kubelet[2030]: I1213 14:10:41.959787 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-etc-cni-netd\") pod \"0a2caabb-90ed-49cb-9e1a-096cb399c501\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") "
Dec 13 14:10:41.959979 kubelet[2030]: I1213 14:10:41.959800 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-cni-path\") pod \"0a2caabb-90ed-49cb-9e1a-096cb399c501\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") "
Dec 13 14:10:41.959979 kubelet[2030]: I1213 14:10:41.959814 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-cilium-run\") pod \"0a2caabb-90ed-49cb-9e1a-096cb399c501\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") "
Dec 13 14:10:41.959979 kubelet[2030]: I1213 14:10:41.959831 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmfzq\" (UniqueName: \"kubernetes.io/projected/0a2caabb-90ed-49cb-9e1a-096cb399c501-kube-api-access-mmfzq\") pod \"0a2caabb-90ed-49cb-9e1a-096cb399c501\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") "
Dec 13 14:10:41.959979 kubelet[2030]: I1213 14:10:41.959846 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a2caabb-90ed-49cb-9e1a-096cb399c501-hubble-tls\") pod \"0a2caabb-90ed-49cb-9e1a-096cb399c501\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") "
Dec 13 14:10:41.960107 kubelet[2030]: I1213 14:10:41.959863 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a2caabb-90ed-49cb-9e1a-096cb399c501-clustermesh-secrets\") pod \"0a2caabb-90ed-49cb-9e1a-096cb399c501\" (UID: \"0a2caabb-90ed-49cb-9e1a-096cb399c501\") "
Dec 13 14:10:41.960223 kubelet[2030]: I1213 14:10:41.960178 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0a2caabb-90ed-49cb-9e1a-096cb399c501" (UID: "0a2caabb-90ed-49cb-9e1a-096cb399c501"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:10:41.960265 kubelet[2030]: I1213 14:10:41.960212 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-cni-path" (OuterVolumeSpecName: "cni-path") pod "0a2caabb-90ed-49cb-9e1a-096cb399c501" (UID: "0a2caabb-90ed-49cb-9e1a-096cb399c501"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:10:41.960265 kubelet[2030]: I1213 14:10:41.960254 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-hostproc" (OuterVolumeSpecName: "hostproc") pod "0a2caabb-90ed-49cb-9e1a-096cb399c501" (UID: "0a2caabb-90ed-49cb-9e1a-096cb399c501"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:10:41.960313 kubelet[2030]: I1213 14:10:41.960269 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0a2caabb-90ed-49cb-9e1a-096cb399c501" (UID: "0a2caabb-90ed-49cb-9e1a-096cb399c501"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:10:41.960313 kubelet[2030]: I1213 14:10:41.960284 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0a2caabb-90ed-49cb-9e1a-096cb399c501" (UID: "0a2caabb-90ed-49cb-9e1a-096cb399c501"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:10:41.960523 kubelet[2030]: I1213 14:10:41.960485 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0a2caabb-90ed-49cb-9e1a-096cb399c501" (UID: "0a2caabb-90ed-49cb-9e1a-096cb399c501"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:10:41.961933 kubelet[2030]: I1213 14:10:41.961893 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a2caabb-90ed-49cb-9e1a-096cb399c501-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0a2caabb-90ed-49cb-9e1a-096cb399c501" (UID: "0a2caabb-90ed-49cb-9e1a-096cb399c501"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:10:41.962003 kubelet[2030]: I1213 14:10:41.961940 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0a2caabb-90ed-49cb-9e1a-096cb399c501" (UID: "0a2caabb-90ed-49cb-9e1a-096cb399c501"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:10:41.964238 kubelet[2030]: I1213 14:10:41.962816 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2caabb-90ed-49cb-9e1a-096cb399c501-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "0a2caabb-90ed-49cb-9e1a-096cb399c501" (UID: "0a2caabb-90ed-49cb-9e1a-096cb399c501"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:10:41.964238 kubelet[2030]: I1213 14:10:41.962873 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0a2caabb-90ed-49cb-9e1a-096cb399c501" (UID: "0a2caabb-90ed-49cb-9e1a-096cb399c501"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:10:41.964238 kubelet[2030]: I1213 14:10:41.962891 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0a2caabb-90ed-49cb-9e1a-096cb399c501" (UID: "0a2caabb-90ed-49cb-9e1a-096cb399c501"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:10:41.964238 kubelet[2030]: I1213 14:10:41.962909 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0a2caabb-90ed-49cb-9e1a-096cb399c501" (UID: "0a2caabb-90ed-49cb-9e1a-096cb399c501"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:10:41.964238 kubelet[2030]: I1213 14:10:41.962959 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a2caabb-90ed-49cb-9e1a-096cb399c501-kube-api-access-mmfzq" (OuterVolumeSpecName: "kube-api-access-mmfzq") pod "0a2caabb-90ed-49cb-9e1a-096cb399c501" (UID: "0a2caabb-90ed-49cb-9e1a-096cb399c501"). InnerVolumeSpecName "kube-api-access-mmfzq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:10:41.963890 systemd[1]: var-lib-kubelet-pods-0a2caabb\x2d90ed\x2d49cb\x2d9e1a\x2d096cb399c501-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmmfzq.mount: Deactivated successfully.
Dec 13 14:10:41.964498 kubelet[2030]: I1213 14:10:41.962980 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2caabb-90ed-49cb-9e1a-096cb399c501-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0a2caabb-90ed-49cb-9e1a-096cb399c501" (UID: "0a2caabb-90ed-49cb-9e1a-096cb399c501"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:10:41.963986 systemd[1]: var-lib-kubelet-pods-0a2caabb\x2d90ed\x2d49cb\x2d9e1a\x2d096cb399c501-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:10:41.964036 systemd[1]: var-lib-kubelet-pods-0a2caabb\x2d90ed\x2d49cb\x2d9e1a\x2d096cb399c501-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:10:41.965294 kubelet[2030]: I1213 14:10:41.965262 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a2caabb-90ed-49cb-9e1a-096cb399c501-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0a2caabb-90ed-49cb-9e1a-096cb399c501" (UID: "0a2caabb-90ed-49cb-9e1a-096cb399c501"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:10:42.060776 kubelet[2030]: I1213 14:10:42.060735 2030 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a2caabb-90ed-49cb-9e1a-096cb399c501-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:42.060776 kubelet[2030]: I1213 14:10:42.060766 2030 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-xtables-lock\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:42.060776 kubelet[2030]: I1213 14:10:42.060775 2030 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:42.060776 kubelet[2030]: I1213 14:10:42.060783 2030 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0a2caabb-90ed-49cb-9e1a-096cb399c501-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:42.060976 kubelet[2030]: I1213 14:10:42.060792 2030 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-bpf-maps\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:42.060976 kubelet[2030]: I1213 14:10:42.060800 2030 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:42.060976 kubelet[2030]: I1213 14:10:42.060808 2030 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-lib-modules\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:42.060976 kubelet[2030]: I1213 14:10:42.060815 2030 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-hostproc\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:42.060976 kubelet[2030]: I1213 14:10:42.060822 2030 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:42.060976 kubelet[2030]: I1213 14:10:42.060829 2030 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:42.060976 kubelet[2030]: I1213 14:10:42.060837 2030 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-cni-path\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:42.060976 kubelet[2030]: I1213 14:10:42.060844 2030 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a2caabb-90ed-49cb-9e1a-096cb399c501-cilium-run\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:42.061136 kubelet[2030]: I1213 14:10:42.060851 2030 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mmfzq\" (UniqueName: \"kubernetes.io/projected/0a2caabb-90ed-49cb-9e1a-096cb399c501-kube-api-access-mmfzq\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:42.061136 kubelet[2030]: I1213 14:10:42.060861 2030 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a2caabb-90ed-49cb-9e1a-096cb399c501-hubble-tls\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:42.061136 kubelet[2030]: I1213 14:10:42.060869 2030 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a2caabb-90ed-49cb-9e1a-096cb399c501-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Dec 13 14:10:42.615519 systemd[1]: Removed slice kubepods-burstable-pod0a2caabb_90ed_49cb_9e1a_096cb399c501.slice.
Dec 13 14:10:42.828082 kubelet[2030]: I1213 14:10:42.827979 2030 topology_manager.go:215] "Topology Admit Handler" podUID="7e567e20-28dd-4843-9b60-8c480b3e07be" podNamespace="kube-system" podName="cilium-vxdpl"
Dec 13 14:10:42.836655 systemd[1]: Created slice kubepods-burstable-pod7e567e20_28dd_4843_9b60_8c480b3e07be.slice.
Dec 13 14:10:42.862202 systemd[1]: var-lib-kubelet-pods-0a2caabb\x2d90ed\x2d49cb\x2d9e1a\x2d096cb399c501-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 14:10:42.966685 kubelet[2030]: I1213 14:10:42.966652 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e567e20-28dd-4843-9b60-8c480b3e07be-lib-modules\") pod \"cilium-vxdpl\" (UID: \"7e567e20-28dd-4843-9b60-8c480b3e07be\") " pod="kube-system/cilium-vxdpl"
Dec 13 14:10:42.966796 kubelet[2030]: I1213 14:10:42.966693 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmrqh\" (UniqueName: \"kubernetes.io/projected/7e567e20-28dd-4843-9b60-8c480b3e07be-kube-api-access-kmrqh\") pod \"cilium-vxdpl\" (UID: \"7e567e20-28dd-4843-9b60-8c480b3e07be\") " pod="kube-system/cilium-vxdpl"
Dec 13 14:10:42.966796 kubelet[2030]: I1213 14:10:42.966711 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7e567e20-28dd-4843-9b60-8c480b3e07be-hostproc\") pod \"cilium-vxdpl\" (UID: \"7e567e20-28dd-4843-9b60-8c480b3e07be\") " pod="kube-system/cilium-vxdpl"
Dec 13 14:10:42.966796 kubelet[2030]: I1213 14:10:42.966727 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7e567e20-28dd-4843-9b60-8c480b3e07be-cilium-cgroup\") pod \"cilium-vxdpl\" (UID: \"7e567e20-28dd-4843-9b60-8c480b3e07be\") " pod="kube-system/cilium-vxdpl"
Dec 13 14:10:42.966887 kubelet[2030]: I1213 14:10:42.966801 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7e567e20-28dd-4843-9b60-8c480b3e07be-bpf-maps\") pod \"cilium-vxdpl\" (UID: \"7e567e20-28dd-4843-9b60-8c480b3e07be\") " pod="kube-system/cilium-vxdpl"
Dec 13 14:10:42.966887 kubelet[2030]: I1213 14:10:42.966836 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7e567e20-28dd-4843-9b60-8c480b3e07be-cni-path\") pod \"cilium-vxdpl\" (UID: \"7e567e20-28dd-4843-9b60-8c480b3e07be\") " pod="kube-system/cilium-vxdpl"
Dec 13 14:10:42.966887 kubelet[2030]: I1213 14:10:42.966874 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7e567e20-28dd-4843-9b60-8c480b3e07be-cilium-ipsec-secrets\") pod \"cilium-vxdpl\" (UID: \"7e567e20-28dd-4843-9b60-8c480b3e07be\") " pod="kube-system/cilium-vxdpl"
Dec 13 14:10:42.966959 kubelet[2030]: I1213 14:10:42.966911 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7e567e20-28dd-4843-9b60-8c480b3e07be-hubble-tls\") pod \"cilium-vxdpl\" (UID: \"7e567e20-28dd-4843-9b60-8c480b3e07be\") " pod="kube-system/cilium-vxdpl"
Dec 13 14:10:42.966959 kubelet[2030]: I1213 14:10:42.966931 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7e567e20-28dd-4843-9b60-8c480b3e07be-host-proc-sys-net\") pod \"cilium-vxdpl\" (UID: \"7e567e20-28dd-4843-9b60-8c480b3e07be\") " pod="kube-system/cilium-vxdpl"
Dec 13 14:10:42.966959 kubelet[2030]: I1213 14:10:42.966951 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e567e20-28dd-4843-9b60-8c480b3e07be-xtables-lock\") pod \"cilium-vxdpl\" (UID: \"7e567e20-28dd-4843-9b60-8c480b3e07be\") " pod="kube-system/cilium-vxdpl"
Dec 13 14:10:42.967024 kubelet[2030]: I1213 14:10:42.966966 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e567e20-28dd-4843-9b60-8c480b3e07be-cilium-config-path\") pod \"cilium-vxdpl\" (UID: \"7e567e20-28dd-4843-9b60-8c480b3e07be\") " pod="kube-system/cilium-vxdpl"
Dec 13 14:10:42.967024 kubelet[2030]: I1213 14:10:42.966988 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7e567e20-28dd-4843-9b60-8c480b3e07be-clustermesh-secrets\") pod \"cilium-vxdpl\" (UID: \"7e567e20-28dd-4843-9b60-8c480b3e07be\") " pod="kube-system/cilium-vxdpl"
Dec 13 14:10:42.967024 kubelet[2030]: I1213 14:10:42.967006 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7e567e20-28dd-4843-9b60-8c480b3e07be-cilium-run\") pod \"cilium-vxdpl\" (UID: \"7e567e20-28dd-4843-9b60-8c480b3e07be\") " pod="kube-system/cilium-vxdpl"
Dec 13 14:10:42.967024 kubelet[2030]: I1213 14:10:42.967022 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e567e20-28dd-4843-9b60-8c480b3e07be-etc-cni-netd\") pod \"cilium-vxdpl\" (UID: \"7e567e20-28dd-4843-9b60-8c480b3e07be\") " pod="kube-system/cilium-vxdpl"
Dec 13 14:10:42.967107 kubelet[2030]: I1213 14:10:42.967041 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7e567e20-28dd-4843-9b60-8c480b3e07be-host-proc-sys-kernel\") pod \"cilium-vxdpl\" (UID: \"7e567e20-28dd-4843-9b60-8c480b3e07be\") " pod="kube-system/cilium-vxdpl"
Dec 13 14:10:43.138487 kubelet[2030]: E1213 14:10:43.138446 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:10:43.138956 env[1219]: time="2024-12-13T14:10:43.138918718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vxdpl,Uid:7e567e20-28dd-4843-9b60-8c480b3e07be,Namespace:kube-system,Attempt:0,}"
Dec 13 14:10:43.151838 env[1219]: time="2024-12-13T14:10:43.151776542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:10:43.151838 env[1219]: time="2024-12-13T14:10:43.151814862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:10:43.151838 env[1219]: time="2024-12-13T14:10:43.151825423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:10:43.152007 env[1219]: time="2024-12-13T14:10:43.151960143Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae5bbaea2bd0179b12fe2afb1f8ae9eb278f811e2a86cc191a4db9dd395b50e7 pid=3833 runtime=io.containerd.runc.v2
Dec 13 14:10:43.167581 systemd[1]: Started cri-containerd-ae5bbaea2bd0179b12fe2afb1f8ae9eb278f811e2a86cc191a4db9dd395b50e7.scope.
Dec 13 14:10:43.190628 env[1219]: time="2024-12-13T14:10:43.190592295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vxdpl,Uid:7e567e20-28dd-4843-9b60-8c480b3e07be,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae5bbaea2bd0179b12fe2afb1f8ae9eb278f811e2a86cc191a4db9dd395b50e7\""
Dec 13 14:10:43.191258 kubelet[2030]: E1213 14:10:43.191226 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:10:43.194196 env[1219]: time="2024-12-13T14:10:43.194154033Z" level=info msg="CreateContainer within sandbox \"ae5bbaea2bd0179b12fe2afb1f8ae9eb278f811e2a86cc191a4db9dd395b50e7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:10:43.203428 env[1219]: time="2024-12-13T14:10:43.203370758Z" level=info msg="CreateContainer within sandbox \"ae5bbaea2bd0179b12fe2afb1f8ae9eb278f811e2a86cc191a4db9dd395b50e7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3c0a137e13bfab73d66f7fd8bb822ac6af345ea74539cf01131d42c5cf4f24f0\""
Dec 13 14:10:43.204875 env[1219]: time="2024-12-13T14:10:43.204842966Z" level=info msg="StartContainer for \"3c0a137e13bfab73d66f7fd8bb822ac6af345ea74539cf01131d42c5cf4f24f0\""
Dec 13 14:10:43.217800 systemd[1]: Started cri-containerd-3c0a137e13bfab73d66f7fd8bb822ac6af345ea74539cf01131d42c5cf4f24f0.scope.
Dec 13 14:10:43.246068 env[1219]: time="2024-12-13T14:10:43.246032250Z" level=info msg="StartContainer for \"3c0a137e13bfab73d66f7fd8bb822ac6af345ea74539cf01131d42c5cf4f24f0\" returns successfully"
Dec 13 14:10:43.254885 systemd[1]: cri-containerd-3c0a137e13bfab73d66f7fd8bb822ac6af345ea74539cf01131d42c5cf4f24f0.scope: Deactivated successfully.
Dec 13 14:10:43.279624 env[1219]: time="2024-12-13T14:10:43.279583977Z" level=info msg="shim disconnected" id=3c0a137e13bfab73d66f7fd8bb822ac6af345ea74539cf01131d42c5cf4f24f0
Dec 13 14:10:43.279822 env[1219]: time="2024-12-13T14:10:43.279804778Z" level=warning msg="cleaning up after shim disconnected" id=3c0a137e13bfab73d66f7fd8bb822ac6af345ea74539cf01131d42c5cf4f24f0 namespace=k8s.io
Dec 13 14:10:43.279898 env[1219]: time="2024-12-13T14:10:43.279884698Z" level=info msg="cleaning up dead shim"
Dec 13 14:10:43.285650 env[1219]: time="2024-12-13T14:10:43.285623127Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:10:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3919 runtime=io.containerd.runc.v2\n"
Dec 13 14:10:43.799811 kubelet[2030]: E1213 14:10:43.799781 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:10:43.801684 env[1219]: time="2024-12-13T14:10:43.801634089Z" level=info msg="CreateContainer within sandbox \"ae5bbaea2bd0179b12fe2afb1f8ae9eb278f811e2a86cc191a4db9dd395b50e7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:10:43.811533 env[1219]: time="2024-12-13T14:10:43.811479498Z" level=info msg="CreateContainer within sandbox \"ae5bbaea2bd0179b12fe2afb1f8ae9eb278f811e2a86cc191a4db9dd395b50e7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d40a75dd260f74940ffc88e57a62079a689ec7c1f4add1f8a4d90019c7bec6df\""
Dec 13 14:10:43.812093 env[1219]: time="2024-12-13T14:10:43.812059341Z" level=info msg="StartContainer for \"d40a75dd260f74940ffc88e57a62079a689ec7c1f4add1f8a4d90019c7bec6df\""
Dec 13 14:10:43.835719 systemd[1]: Started cri-containerd-d40a75dd260f74940ffc88e57a62079a689ec7c1f4add1f8a4d90019c7bec6df.scope.
Dec 13 14:10:43.866050 env[1219]: time="2024-12-13T14:10:43.866007889Z" level=info msg="StartContainer for \"d40a75dd260f74940ffc88e57a62079a689ec7c1f4add1f8a4d90019c7bec6df\" returns successfully"
Dec 13 14:10:43.873453 systemd[1]: cri-containerd-d40a75dd260f74940ffc88e57a62079a689ec7c1f4add1f8a4d90019c7bec6df.scope: Deactivated successfully.
Dec 13 14:10:43.886307 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d40a75dd260f74940ffc88e57a62079a689ec7c1f4add1f8a4d90019c7bec6df-rootfs.mount: Deactivated successfully.
Dec 13 14:10:43.892619 env[1219]: time="2024-12-13T14:10:43.892580061Z" level=info msg="shim disconnected" id=d40a75dd260f74940ffc88e57a62079a689ec7c1f4add1f8a4d90019c7bec6df
Dec 13 14:10:43.892772 env[1219]: time="2024-12-13T14:10:43.892621461Z" level=warning msg="cleaning up after shim disconnected" id=d40a75dd260f74940ffc88e57a62079a689ec7c1f4add1f8a4d90019c7bec6df namespace=k8s.io
Dec 13 14:10:43.892772 env[1219]: time="2024-12-13T14:10:43.892632421Z" level=info msg="cleaning up dead shim"
Dec 13 14:10:43.898367 env[1219]: time="2024-12-13T14:10:43.898336489Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:10:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3982 runtime=io.containerd.runc.v2\n"
Dec 13 14:10:44.612318 kubelet[2030]: I1213 14:10:44.612288 2030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a2caabb-90ed-49cb-9e1a-096cb399c501" path="/var/lib/kubelet/pods/0a2caabb-90ed-49cb-9e1a-096cb399c501/volumes"
Dec 13 14:10:44.665241 kubelet[2030]: E1213 14:10:44.665194 2030 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:10:44.802911 kubelet[2030]: E1213 14:10:44.802761 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:10:44.806358 env[1219]: time="2024-12-13T14:10:44.805904891Z" level=info msg="CreateContainer within sandbox \"ae5bbaea2bd0179b12fe2afb1f8ae9eb278f811e2a86cc191a4db9dd395b50e7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:10:44.819948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3157879901.mount: Deactivated successfully.
Dec 13 14:10:44.829487 env[1219]: time="2024-12-13T14:10:44.829439125Z" level=info msg="CreateContainer within sandbox \"ae5bbaea2bd0179b12fe2afb1f8ae9eb278f811e2a86cc191a4db9dd395b50e7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cebfe9fd34e4f07c3289fbb4037ef49d3f2bf995b404042b55572d4e2da0a1ac\""
Dec 13 14:10:44.829934 env[1219]: time="2024-12-13T14:10:44.829913407Z" level=info msg="StartContainer for \"cebfe9fd34e4f07c3289fbb4037ef49d3f2bf995b404042b55572d4e2da0a1ac\""
Dec 13 14:10:44.845062 systemd[1]: Started cri-containerd-cebfe9fd34e4f07c3289fbb4037ef49d3f2bf995b404042b55572d4e2da0a1ac.scope.
Dec 13 14:10:44.862998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1787143796.mount: Deactivated successfully.
Dec 13 14:10:44.880251 env[1219]: time="2024-12-13T14:10:44.880197650Z" level=info msg="StartContainer for \"cebfe9fd34e4f07c3289fbb4037ef49d3f2bf995b404042b55572d4e2da0a1ac\" returns successfully"
Dec 13 14:10:44.882645 systemd[1]: cri-containerd-cebfe9fd34e4f07c3289fbb4037ef49d3f2bf995b404042b55572d4e2da0a1ac.scope: Deactivated successfully.
Dec 13 14:10:44.898539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cebfe9fd34e4f07c3289fbb4037ef49d3f2bf995b404042b55572d4e2da0a1ac-rootfs.mount: Deactivated successfully.
Dec 13 14:10:44.903838 env[1219]: time="2024-12-13T14:10:44.903782404Z" level=info msg="shim disconnected" id=cebfe9fd34e4f07c3289fbb4037ef49d3f2bf995b404042b55572d4e2da0a1ac
Dec 13 14:10:44.903838 env[1219]: time="2024-12-13T14:10:44.903827564Z" level=warning msg="cleaning up after shim disconnected" id=cebfe9fd34e4f07c3289fbb4037ef49d3f2bf995b404042b55572d4e2da0a1ac namespace=k8s.io
Dec 13 14:10:44.903838 env[1219]: time="2024-12-13T14:10:44.903836644Z" level=info msg="cleaning up dead shim"
Dec 13 14:10:44.910376 env[1219]: time="2024-12-13T14:10:44.910345996Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:10:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4038 runtime=io.containerd.runc.v2\n"
Dec 13 14:10:45.610312 kubelet[2030]: E1213 14:10:45.610276 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:10:45.805953 kubelet[2030]: E1213 14:10:45.805911 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:10:45.808213 env[1219]: time="2024-12-13T14:10:45.808181435Z" level=info msg="CreateContainer within sandbox \"ae5bbaea2bd0179b12fe2afb1f8ae9eb278f811e2a86cc191a4db9dd395b50e7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:10:45.820848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2912998462.mount: Deactivated successfully.
Dec 13 14:10:45.824243 env[1219]: time="2024-12-13T14:10:45.823125065Z" level=info msg="CreateContainer within sandbox \"ae5bbaea2bd0179b12fe2afb1f8ae9eb278f811e2a86cc191a4db9dd395b50e7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2a545bb1bb379eec076a8f20f214e4582622fb884c2edbe24fcfef71b7a3c753\""
Dec 13 14:10:45.824765 env[1219]: time="2024-12-13T14:10:45.824728373Z" level=info msg="StartContainer for \"2a545bb1bb379eec076a8f20f214e4582622fb884c2edbe24fcfef71b7a3c753\""
Dec 13 14:10:45.837245 systemd[1]: Started cri-containerd-2a545bb1bb379eec076a8f20f214e4582622fb884c2edbe24fcfef71b7a3c753.scope.
Dec 13 14:10:45.865876 env[1219]: time="2024-12-13T14:10:45.865800066Z" level=info msg="StartContainer for \"2a545bb1bb379eec076a8f20f214e4582622fb884c2edbe24fcfef71b7a3c753\" returns successfully"
Dec 13 14:10:45.866413 systemd[1]: cri-containerd-2a545bb1bb379eec076a8f20f214e4582622fb884c2edbe24fcfef71b7a3c753.scope: Deactivated successfully.
Dec 13 14:10:45.880019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a545bb1bb379eec076a8f20f214e4582622fb884c2edbe24fcfef71b7a3c753-rootfs.mount: Deactivated successfully.
Dec 13 14:10:45.885802 env[1219]: time="2024-12-13T14:10:45.885764760Z" level=info msg="shim disconnected" id=2a545bb1bb379eec076a8f20f214e4582622fb884c2edbe24fcfef71b7a3c753
Dec 13 14:10:45.885994 env[1219]: time="2024-12-13T14:10:45.885977481Z" level=warning msg="cleaning up after shim disconnected" id=2a545bb1bb379eec076a8f20f214e4582622fb884c2edbe24fcfef71b7a3c753 namespace=k8s.io
Dec 13 14:10:45.886061 env[1219]: time="2024-12-13T14:10:45.886048761Z" level=info msg="cleaning up dead shim"
Dec 13 14:10:45.893211 env[1219]: time="2024-12-13T14:10:45.893183875Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:10:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4091 runtime=io.containerd.runc.v2\n"
Dec 13 14:10:46.455432 kubelet[2030]: I1213 14:10:46.454984 2030 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:10:46Z","lastTransitionTime":"2024-12-13T14:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 14:10:46.809670 kubelet[2030]: E1213 14:10:46.809575 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:10:46.814877 env[1219]: time="2024-12-13T14:10:46.814833915Z" level=info msg="CreateContainer within sandbox \"ae5bbaea2bd0179b12fe2afb1f8ae9eb278f811e2a86cc191a4db9dd395b50e7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:10:46.825608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3085634557.mount: Deactivated successfully.
Dec 13 14:10:46.833159 env[1219]: time="2024-12-13T14:10:46.833099999Z" level=info msg="CreateContainer within sandbox \"ae5bbaea2bd0179b12fe2afb1f8ae9eb278f811e2a86cc191a4db9dd395b50e7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f6499abfe8ef8ccd8d637edbb047b61866bf5608df626cf15a24b366234c2de5\""
Dec 13 14:10:46.833581 env[1219]: time="2024-12-13T14:10:46.833549321Z" level=info msg="StartContainer for \"f6499abfe8ef8ccd8d637edbb047b61866bf5608df626cf15a24b366234c2de5\""
Dec 13 14:10:46.846835 systemd[1]: Started cri-containerd-f6499abfe8ef8ccd8d637edbb047b61866bf5608df626cf15a24b366234c2de5.scope.
Dec 13 14:10:46.862948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount715928980.mount: Deactivated successfully.
Dec 13 14:10:46.876402 env[1219]: time="2024-12-13T14:10:46.876345957Z" level=info msg="StartContainer for \"f6499abfe8ef8ccd8d637edbb047b61866bf5608df626cf15a24b366234c2de5\" returns successfully"
Dec 13 14:10:46.890414 systemd[1]: run-containerd-runc-k8s.io-f6499abfe8ef8ccd8d637edbb047b61866bf5608df626cf15a24b366234c2de5-runc.FQJlgu.mount: Deactivated successfully.
Dec 13 14:10:47.115423 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Dec 13 14:10:47.814080 kubelet[2030]: E1213 14:10:47.814040 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:10:47.827559 kubelet[2030]: I1213 14:10:47.827498 2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vxdpl" podStartSLOduration=5.827483541 podStartE2EDuration="5.827483541s" podCreationTimestamp="2024-12-13 14:10:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:10:47.827123699 +0000 UTC m=+83.310158076" watchObservedRunningTime="2024-12-13 14:10:47.827483541 +0000 UTC m=+83.310517918"
Dec 13 14:10:49.140233 kubelet[2030]: E1213 14:10:49.140199 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:10:49.813431 systemd-networkd[1047]: lxc_health: Link UP
Dec 13 14:10:49.823494 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:10:49.822723 systemd-networkd[1047]: lxc_health: Gained carrier
Dec 13 14:10:50.159517 systemd[1]: run-containerd-runc-k8s.io-f6499abfe8ef8ccd8d637edbb047b61866bf5608df626cf15a24b366234c2de5-runc.8IY5zp.mount: Deactivated successfully.
Dec 13 14:10:50.610706 kubelet[2030]: E1213 14:10:50.610596 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:10:51.141038 kubelet[2030]: E1213 14:10:51.141002 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:10:51.215579 systemd-networkd[1047]: lxc_health: Gained IPv6LL
Dec 13 14:10:51.821254 kubelet[2030]: E1213 14:10:51.821220 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:10:52.372928 kubelet[2030]: E1213 14:10:52.372897 2030 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:36718->127.0.0.1:38783: read tcp 127.0.0.1:36718->127.0.0.1:38783: read: connection reset by peer
Dec 13 14:10:54.451684 systemd[1]: run-containerd-runc-k8s.io-f6499abfe8ef8ccd8d637edbb047b61866bf5608df626cf15a24b366234c2de5-runc.tQ4899.mount: Deactivated successfully.
Dec 13 14:10:54.500175 kubelet[2030]: E1213 14:10:54.500084 2030 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:36728->127.0.0.1:38783: write tcp 127.0.0.1:36728->127.0.0.1:38783: write: broken pipe
Dec 13 14:10:56.611470 kubelet[2030]: E1213 14:10:56.611372 2030 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:10:56.622145 sshd[3801]: pam_unix(sshd:session): session closed for user core
Dec 13 14:10:56.624735 systemd[1]: sshd@23-10.0.0.75:22-10.0.0.1:47768.service: Deactivated successfully.
Dec 13 14:10:56.625506 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 14:10:56.625981 systemd-logind[1208]: Session 24 logged out. Waiting for processes to exit.
Dec 13 14:10:56.626615 systemd-logind[1208]: Removed session 24.