Dec 13 14:06:25.734824 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 14:06:25.734844 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Dec 13 12:58:58 -00 2024
Dec 13 14:06:25.734852 kernel: efi: EFI v2.70 by EDK II
Dec 13 14:06:25.734857 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Dec 13 14:06:25.734862 kernel: random: crng init done
Dec 13 14:06:25.734868 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:06:25.734874 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Dec 13 14:06:25.734881 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 14:06:25.734887 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:06:25.734892 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:06:25.734897 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:06:25.734902 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:06:25.734908 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:06:25.734913 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:06:25.734921 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:06:25.734927 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:06:25.734933 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:06:25.734938 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 13 14:06:25.734944 kernel: NUMA: Failed to initialise from firmware
Dec 13 14:06:25.734950 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 14:06:25.734955 kernel: NUMA: NODE_DATA [mem 0xdcb09900-0xdcb0efff]
Dec 13 14:06:25.734961 kernel: Zone ranges:
Dec 13 14:06:25.734966 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 14:06:25.734973 kernel: DMA32 empty
Dec 13 14:06:25.734978 kernel: Normal empty
Dec 13 14:06:25.734984 kernel: Movable zone start for each node
Dec 13 14:06:25.734990 kernel: Early memory node ranges
Dec 13 14:06:25.734995 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Dec 13 14:06:25.735001 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Dec 13 14:06:25.735007 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Dec 13 14:06:25.735014 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Dec 13 14:06:25.735019 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Dec 13 14:06:25.735025 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Dec 13 14:06:25.735031 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Dec 13 14:06:25.735036 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 14:06:25.735043 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 13 14:06:25.735049 kernel: psci: probing for conduit method from ACPI.
Dec 13 14:06:25.735054 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 14:06:25.735060 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 14:06:25.735066 kernel: psci: Trusted OS migration not required
Dec 13 14:06:25.735074 kernel: psci: SMC Calling Convention v1.1
Dec 13 14:06:25.735080 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 14:06:25.735087 kernel: ACPI: SRAT not present
Dec 13 14:06:25.735093 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Dec 13 14:06:25.735100 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Dec 13 14:06:25.735106 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 13 14:06:25.735112 kernel: Detected PIPT I-cache on CPU0
Dec 13 14:06:25.735118 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 14:06:25.735124 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 14:06:25.735130 kernel: CPU features: detected: Spectre-v4
Dec 13 14:06:25.735136 kernel: CPU features: detected: Spectre-BHB
Dec 13 14:06:25.735144 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 14:06:25.735150 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 14:06:25.735156 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 14:06:25.735162 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 14:06:25.735168 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Dec 13 14:06:25.735174 kernel: Policy zone: DMA
Dec 13 14:06:25.735182 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:06:25.735188 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:06:25.735194 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:06:25.735210 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:06:25.735216 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:06:25.735224 kernel: Memory: 2457396K/2572288K available (9792K kernel code, 2092K rwdata, 7576K rodata, 36416K init, 777K bss, 114892K reserved, 0K cma-reserved)
Dec 13 14:06:25.735230 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 14:06:25.735236 kernel: trace event string verifier disabled
Dec 13 14:06:25.735242 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 14:06:25.735249 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:06:25.735255 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 14:06:25.735261 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 14:06:25.735267 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:06:25.735273 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:06:25.735280 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 14:06:25.735286 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 14:06:25.735293 kernel: GICv3: 256 SPIs implemented
Dec 13 14:06:25.735299 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 14:06:25.735305 kernel: GICv3: Distributor has no Range Selector support
Dec 13 14:06:25.735311 kernel: Root IRQ handler: gic_handle_irq
Dec 13 14:06:25.735317 kernel: GICv3: 16 PPIs implemented
Dec 13 14:06:25.735323 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 14:06:25.735329 kernel: ACPI: SRAT not present
Dec 13 14:06:25.735335 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 14:06:25.735341 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 14:06:25.735347 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 14:06:25.735354 kernel: GICv3: using LPI property table @0x00000000400d0000
Dec 13 14:06:25.735731 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Dec 13 14:06:25.735746 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:06:25.735753 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 14:06:25.735759 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 14:06:25.735765 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 14:06:25.735771 kernel: arm-pv: using stolen time PV
Dec 13 14:06:25.735778 kernel: Console: colour dummy device 80x25
Dec 13 14:06:25.735784 kernel: ACPI: Core revision 20210730
Dec 13 14:06:25.735791 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 14:06:25.735798 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:06:25.735804 kernel: LSM: Security Framework initializing
Dec 13 14:06:25.735811 kernel: SELinux: Initializing.
Dec 13 14:06:25.735818 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:06:25.735824 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:06:25.735830 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:06:25.735837 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 14:06:25.735843 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 14:06:25.735849 kernel: Remapping and enabling EFI services.
Dec 13 14:06:25.735855 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:06:25.735861 kernel: Detected PIPT I-cache on CPU1
Dec 13 14:06:25.735869 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 14:06:25.735876 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Dec 13 14:06:25.735882 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:06:25.735888 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 14:06:25.735895 kernel: Detected PIPT I-cache on CPU2
Dec 13 14:06:25.735901 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 13 14:06:25.735907 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Dec 13 14:06:25.735914 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:06:25.735920 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 13 14:06:25.735926 kernel: Detected PIPT I-cache on CPU3
Dec 13 14:06:25.735933 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 13 14:06:25.735940 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Dec 13 14:06:25.735946 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:06:25.735952 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 13 14:06:25.735962 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 14:06:25.735970 kernel: SMP: Total of 4 processors activated.
Dec 13 14:06:25.735976 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 14:06:25.735983 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 14:06:25.735990 kernel: CPU features: detected: Common not Private translations
Dec 13 14:06:25.735996 kernel: CPU features: detected: CRC32 instructions
Dec 13 14:06:25.736003 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 14:06:25.736009 kernel: CPU features: detected: LSE atomic instructions
Dec 13 14:06:25.736017 kernel: CPU features: detected: Privileged Access Never
Dec 13 14:06:25.736024 kernel: CPU features: detected: RAS Extension Support
Dec 13 14:06:25.736030 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 14:06:25.736037 kernel: CPU: All CPU(s) started at EL1
Dec 13 14:06:25.736043 kernel: alternatives: patching kernel code
Dec 13 14:06:25.736051 kernel: devtmpfs: initialized
Dec 13 14:06:25.736057 kernel: KASLR enabled
Dec 13 14:06:25.736064 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:06:25.736071 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 14:06:25.736080 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:06:25.736087 kernel: SMBIOS 3.0.0 present.
Dec 13 14:06:25.736093 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Dec 13 14:06:25.736100 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:06:25.736107 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 14:06:25.736115 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 14:06:25.736121 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 14:06:25.736128 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:06:25.736135 kernel: audit: type=2000 audit(0.030:1): state=initialized audit_enabled=0 res=1
Dec 13 14:06:25.736141 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:06:25.736148 kernel: cpuidle: using governor menu
Dec 13 14:06:25.736154 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 14:06:25.736161 kernel: ASID allocator initialised with 32768 entries
Dec 13 14:06:25.736168 kernel: ACPI: bus type PCI registered
Dec 13 14:06:25.736175 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:06:25.736182 kernel: Serial: AMBA PL011 UART driver
Dec 13 14:06:25.736188 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:06:25.736195 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 14:06:25.736215 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:06:25.736222 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 14:06:25.736229 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:06:25.736235 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 14:06:25.736242 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:06:25.736250 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:06:25.736257 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:06:25.736263 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:06:25.736270 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:06:25.736276 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:06:25.736283 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:06:25.736289 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:06:25.736296 kernel: ACPI: Interpreter enabled
Dec 13 14:06:25.736302 kernel: ACPI: Using GIC for interrupt routing
Dec 13 14:06:25.736310 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 14:06:25.736317 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 14:06:25.736323 kernel: printk: console [ttyAMA0] enabled
Dec 13 14:06:25.736330 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:06:25.736464 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:06:25.736540 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 14:06:25.736602 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 14:06:25.736661 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 14:06:25.736719 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 14:06:25.736728 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 14:06:25.736735 kernel: PCI host bridge to bus 0000:00
Dec 13 14:06:25.736801 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 14:06:25.736856 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 14:06:25.736908 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 14:06:25.736961 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:06:25.737032 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 14:06:25.737100 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 14:06:25.737190 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Dec 13 14:06:25.737285 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Dec 13 14:06:25.737389 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 14:06:25.737451 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 14:06:25.737520 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Dec 13 14:06:25.737580 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Dec 13 14:06:25.737667 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 14:06:25.737748 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 14:06:25.737805 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 14:06:25.737814 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 14:06:25.737821 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 14:06:25.737828 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 14:06:25.737837 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 14:06:25.737844 kernel: iommu: Default domain type: Translated
Dec 13 14:06:25.737850 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 14:06:25.737857 kernel: vgaarb: loaded
Dec 13 14:06:25.737863 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:06:25.737870 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:06:25.737876 kernel: PTP clock support registered
Dec 13 14:06:25.737883 kernel: Registered efivars operations
Dec 13 14:06:25.737890 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 14:06:25.737897 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:06:25.737904 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:06:25.737910 kernel: pnp: PnP ACPI init
Dec 13 14:06:25.737981 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 14:06:25.737991 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 14:06:25.737998 kernel: NET: Registered PF_INET protocol family
Dec 13 14:06:25.738005 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:06:25.738012 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 14:06:25.738020 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:06:25.738027 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:06:25.738034 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Dec 13 14:06:25.738041 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 14:06:25.738047 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:06:25.738054 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:06:25.738060 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:06:25.738067 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:06:25.738074 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Dec 13 14:06:25.738081 kernel: kvm [1]: HYP mode not available
Dec 13 14:06:25.738088 kernel: Initialise system trusted keyrings
Dec 13 14:06:25.738094 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 14:06:25.738101 kernel: Key type asymmetric registered
Dec 13 14:06:25.738107 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:06:25.738114 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:06:25.738121 kernel: io scheduler mq-deadline registered
Dec 13 14:06:25.738127 kernel: io scheduler kyber registered
Dec 13 14:06:25.738134 kernel: io scheduler bfq registered
Dec 13 14:06:25.738142 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 14:06:25.738148 kernel: ACPI: button: Power Button [PWRB]
Dec 13 14:06:25.738155 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 14:06:25.738240 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 13 14:06:25.738250 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:06:25.738257 kernel: thunder_xcv, ver 1.0
Dec 13 14:06:25.738264 kernel: thunder_bgx, ver 1.0
Dec 13 14:06:25.738270 kernel: nicpf, ver 1.0
Dec 13 14:06:25.738277 kernel: nicvf, ver 1.0
Dec 13 14:06:25.738347 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 14:06:25.738403 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T14:06:25 UTC (1734098785)
Dec 13 14:06:25.738412 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 14:06:25.738418 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:06:25.738425 kernel: Segment Routing with IPv6
Dec 13 14:06:25.738432 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:06:25.738438 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:06:25.738445 kernel: Key type dns_resolver registered
Dec 13 14:06:25.738453 kernel: registered taskstats version 1
Dec 13 14:06:25.738460 kernel: Loading compiled-in X.509 certificates
Dec 13 14:06:25.738466 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e011ba9949ade5a6d03f7a5e28171f7f59e70f8a'
Dec 13 14:06:25.738479 kernel: Key type .fscrypt registered
Dec 13 14:06:25.738486 kernel: Key type fscrypt-provisioning registered
Dec 13 14:06:25.738493 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:06:25.738499 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:06:25.738506 kernel: ima: No architecture policies found
Dec 13 14:06:25.738512 kernel: clk: Disabling unused clocks
Dec 13 14:06:25.738520 kernel: Freeing unused kernel memory: 36416K
Dec 13 14:06:25.738527 kernel: Run /init as init process
Dec 13 14:06:25.738533 kernel: with arguments:
Dec 13 14:06:25.738540 kernel: /init
Dec 13 14:06:25.738546 kernel: with environment:
Dec 13 14:06:25.738553 kernel: HOME=/
Dec 13 14:06:25.738559 kernel: TERM=linux
Dec 13 14:06:25.738565 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:06:25.738574 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:06:25.738584 systemd[1]: Detected virtualization kvm.
Dec 13 14:06:25.738591 systemd[1]: Detected architecture arm64.
Dec 13 14:06:25.738597 systemd[1]: Running in initrd.
Dec 13 14:06:25.738604 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:06:25.738611 systemd[1]: Hostname set to .
Dec 13 14:06:25.738619 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:06:25.738626 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:06:25.738634 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:06:25.738641 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:06:25.738647 systemd[1]: Reached target paths.target.
Dec 13 14:06:25.738654 systemd[1]: Reached target slices.target.
Dec 13 14:06:25.738661 systemd[1]: Reached target swap.target.
Dec 13 14:06:25.738668 systemd[1]: Reached target timers.target.
Dec 13 14:06:25.738675 systemd[1]: Listening on iscsid.socket.
Dec 13 14:06:25.738683 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:06:25.738690 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:06:25.738697 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:06:25.738704 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:06:25.738711 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:06:25.738718 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:06:25.738725 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:06:25.738732 systemd[1]: Reached target sockets.target.
Dec 13 14:06:25.738739 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:06:25.738747 systemd[1]: Finished network-cleanup.service.
Dec 13 14:06:25.738754 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:06:25.738760 systemd[1]: Starting systemd-journald.service...
Dec 13 14:06:25.738767 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:06:25.738774 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:06:25.738781 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:06:25.738788 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:06:25.738795 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:06:25.738802 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:06:25.738810 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:06:25.738817 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:06:25.738823 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:06:25.738833 systemd-journald[290]: Journal started
Dec 13 14:06:25.738873 systemd-journald[290]: Runtime Journal (/run/log/journal/e1f14eb29baa4d6787e036ed13460ad3) is 6.0M, max 48.7M, 42.6M free.
Dec 13 14:06:25.729252 systemd-modules-load[291]: Inserted module 'overlay'
Dec 13 14:06:25.743815 kernel: audit: type=1130 audit(1734098785.738:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:25.743834 systemd[1]: Started systemd-journald.service.
Dec 13 14:06:25.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:25.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:25.748251 kernel: audit: type=1130 audit(1734098785.744:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:25.751023 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:06:25.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:25.754955 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:06:25.758541 kernel: audit: type=1130 audit(1734098785.752:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:25.758557 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:06:25.758013 systemd-resolved[292]: Positive Trust Anchors:
Dec 13 14:06:25.758021 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:06:25.758048 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:06:25.762658 systemd-resolved[292]: Defaulting to hostname 'linux'.
Dec 13 14:06:25.769779 kernel: Bridge firewalling registered
Dec 13 14:06:25.769795 kernel: audit: type=1130 audit(1734098785.766:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:25.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:25.763367 systemd[1]: Started systemd-resolved.service.
Dec 13 14:06:25.766055 systemd-modules-load[291]: Inserted module 'br_netfilter'
Dec 13 14:06:25.766859 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:06:25.773088 dracut-cmdline[309]: dracut-dracut-053
Dec 13 14:06:25.775231 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:06:25.781212 kernel: SCSI subsystem initialized
Dec 13 14:06:25.789179 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:06:25.789243 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:06:25.789255 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:06:25.791425 systemd-modules-load[291]: Inserted module 'dm_multipath'
Dec 13 14:06:25.792149 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:06:25.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:25.793642 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:06:25.797257 kernel: audit: type=1130 audit(1734098785.792:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:25.802157 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:06:25.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:25.806221 kernel: audit: type=1130 audit(1734098785.802:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:25.836214 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:06:25.848231 kernel: iscsi: registered transport (tcp)
Dec 13 14:06:25.862327 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:06:25.862346 kernel: QLogic iSCSI HBA Driver
Dec 13 14:06:25.894746 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:06:25.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:25.896308 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:06:25.899697 kernel: audit: type=1130 audit(1734098785.895:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:25.939233 kernel: raid6: neonx8 gen() 13645 MB/s
Dec 13 14:06:25.956217 kernel: raid6: neonx8 xor() 10656 MB/s
Dec 13 14:06:25.973218 kernel: raid6: neonx4 gen() 13475 MB/s
Dec 13 14:06:25.990216 kernel: raid6: neonx4 xor() 11219 MB/s
Dec 13 14:06:26.007223 kernel: raid6: neonx2 gen() 12950 MB/s
Dec 13 14:06:26.024227 kernel: raid6: neonx2 xor() 10594 MB/s
Dec 13 14:06:26.041228 kernel: raid6: neonx1 gen() 10486 MB/s
Dec 13 14:06:26.058225 kernel: raid6: neonx1 xor() 8746 MB/s
Dec 13 14:06:26.075218 kernel: raid6: int64x8 gen() 6254 MB/s
Dec 13 14:06:26.092225 kernel: raid6: int64x8 xor() 3529 MB/s
Dec 13 14:06:26.109227 kernel: raid6: int64x4 gen() 7186 MB/s
Dec 13 14:06:26.126229 kernel: raid6: int64x4 xor() 3837 MB/s
Dec 13 14:06:26.143227 kernel: raid6: int64x2 gen() 6120 MB/s
Dec 13 14:06:26.160227 kernel: raid6: int64x2 xor() 3305 MB/s
Dec 13 14:06:26.177228 kernel: raid6: int64x1 gen() 5031 MB/s
Dec 13 14:06:26.194271 kernel: raid6: int64x1 xor() 2635 MB/s
Dec 13 14:06:26.194291 kernel: raid6: using algorithm neonx8 gen() 13645 MB/s
Dec 13 14:06:26.194308 kernel: raid6: .... xor() 10656 MB/s, rmw enabled
Dec 13 14:06:26.195296 kernel: raid6: using neon recovery algorithm
Dec 13 14:06:26.205229 kernel: xor: measuring software checksum speed
Dec 13 14:06:26.206399 kernel: 8regs : 15245 MB/sec
Dec 13 14:06:26.206411 kernel: 32regs : 20697 MB/sec
Dec 13 14:06:26.207628 kernel: arm64_neon : 25623 MB/sec
Dec 13 14:06:26.207650 kernel: xor: using function: arm64_neon (25623 MB/sec)
Dec 13 14:06:26.259226 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Dec 13 14:06:26.269521 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:06:26.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:26.273000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:06:26.273874 kernel: audit: type=1130 audit(1734098786.270:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:26.273909 kernel: audit: type=1334 audit(1734098786.273:10): prog-id=7 op=LOAD
Dec 13 14:06:26.273000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:06:26.274271 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:06:26.290118 systemd-udevd[492]: Using default interface naming scheme 'v252'.
Dec 13 14:06:26.293407 systemd[1]: Started systemd-udevd.service.
Dec 13 14:06:26.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:26.295262 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:06:26.306282 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation
Dec 13 14:06:26.331260 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:06:26.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:26.332713 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:06:26.365961 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:06:26.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:26.393217 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 14:06:26.397282 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:06:26.397297 kernel: GPT:9289727 != 19775487 Dec 13 14:06:26.397311 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:06:26.397320 kernel: GPT:9289727 != 19775487 Dec 13 14:06:26.397328 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 14:06:26.397337 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:06:26.410083 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:06:26.411276 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:06:26.415140 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (543) Dec 13 14:06:26.416761 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:06:26.422052 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:06:26.427229 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:06:26.428893 systemd[1]: Starting disk-uuid.service... Dec 13 14:06:26.434636 disk-uuid[561]: Primary Header is updated. Dec 13 14:06:26.434636 disk-uuid[561]: Secondary Entries is updated. Dec 13 14:06:26.434636 disk-uuid[561]: Secondary Header is updated. Dec 13 14:06:26.437604 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:06:27.447221 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:06:27.447365 disk-uuid[562]: The operation has completed successfully. Dec 13 14:06:27.467870 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:06:27.469016 systemd[1]: Finished disk-uuid.service. Dec 13 14:06:27.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:27.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:27.473742 systemd[1]: Starting verity-setup.service... Dec 13 14:06:27.491227 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 14:06:27.509584 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:06:27.511654 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:06:27.513392 systemd[1]: Finished verity-setup.service. Dec 13 14:06:27.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:27.560021 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:06:27.561323 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:06:27.560853 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:06:27.561530 systemd[1]: Starting ignition-setup.service... Dec 13 14:06:27.563706 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:06:27.569441 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:06:27.569480 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:06:27.569490 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:06:27.577406 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:06:27.583081 systemd[1]: Finished ignition-setup.service. Dec 13 14:06:27.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:27.584590 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:06:27.644312 systemd[1]: Finished parse-ip-for-networkd.service. 
Dec 13 14:06:27.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:27.647000 audit: BPF prog-id=9 op=LOAD Dec 13 14:06:27.647859 systemd[1]: Starting systemd-networkd.service... Dec 13 14:06:27.668351 ignition[646]: Ignition 2.14.0 Dec 13 14:06:27.668362 ignition[646]: Stage: fetch-offline Dec 13 14:06:27.668401 ignition[646]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:06:27.668410 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:06:27.668594 ignition[646]: parsed url from cmdline: "" Dec 13 14:06:27.668597 ignition[646]: no config URL provided Dec 13 14:06:27.668602 ignition[646]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:06:27.668610 ignition[646]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:06:27.668628 ignition[646]: op(1): [started] loading QEMU firmware config module Dec 13 14:06:27.668633 ignition[646]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 14:06:27.671736 ignition[646]: op(1): [finished] loading QEMU firmware config module Dec 13 14:06:27.681419 ignition[646]: parsing config with SHA512: 773463d06199e66b19bbc13880d56113241bf3c50c8190f74939812a6bc7cb596f071a71254f42194736c627bd27de5f127bdc55d16fb8dc9dc237485d2d24bb Dec 13 14:06:27.685349 systemd-networkd[739]: lo: Link UP Dec 13 14:06:27.686107 systemd-networkd[739]: lo: Gained carrier Dec 13 14:06:27.686876 unknown[646]: fetched base config from "system" Dec 13 14:06:27.686887 unknown[646]: fetched user config from "qemu" Dec 13 14:06:27.687162 ignition[646]: fetch-offline: fetch-offline passed Dec 13 14:06:27.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:27.688006 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:06:27.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:27.687226 ignition[646]: Ignition finished successfully Dec 13 14:06:27.688599 systemd-networkd[739]: Enumeration completed Dec 13 14:06:27.688961 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:06:27.689362 systemd[1]: Started systemd-networkd.service. Dec 13 14:06:27.690375 systemd-networkd[739]: eth0: Link UP Dec 13 14:06:27.690379 systemd-networkd[739]: eth0: Gained carrier Dec 13 14:06:27.690920 systemd[1]: Reached target network.target. Dec 13 14:06:27.692220 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 14:06:27.692915 systemd[1]: Starting ignition-kargs.service... Dec 13 14:06:27.694259 systemd[1]: Starting iscsiuio.service... Dec 13 14:06:27.702124 ignition[743]: Ignition 2.14.0 Dec 13 14:06:27.702130 ignition[743]: Stage: kargs Dec 13 14:06:27.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:27.703305 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.69/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 14:06:27.702252 ignition[743]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:06:27.703637 systemd[1]: Started iscsiuio.service. Dec 13 14:06:27.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:27.702264 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:06:27.705559 systemd[1]: Starting iscsid.service... Dec 13 14:06:27.711648 iscsid[752]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:06:27.711648 iscsid[752]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 14:06:27.711648 iscsid[752]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:06:27.711648 iscsid[752]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:06:27.711648 iscsid[752]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:06:27.711648 iscsid[752]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:06:27.711648 iscsid[752]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:06:27.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:27.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:27.702949 ignition[743]: kargs: kargs passed Dec 13 14:06:27.707616 systemd[1]: Finished ignition-kargs.service.
Dec 13 14:06:27.702995 ignition[743]: Ignition finished successfully Dec 13 14:06:27.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:27.709615 systemd[1]: Starting ignition-disks.service... Dec 13 14:06:27.715980 ignition[753]: Ignition 2.14.0 Dec 13 14:06:27.711535 systemd[1]: Started iscsid.service. Dec 13 14:06:27.715986 ignition[753]: Stage: disks Dec 13 14:06:27.712964 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:06:27.716068 ignition[753]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:06:27.721602 systemd[1]: Finished ignition-disks.service. Dec 13 14:06:27.716077 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:06:27.724787 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:06:27.717036 ignition[753]: disks: disks passed Dec 13 14:06:27.726748 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:06:27.717077 ignition[753]: Ignition finished successfully Dec 13 14:06:27.728388 systemd[1]: Reached target local-fs.target. Dec 13 14:06:27.730125 systemd[1]: Reached target sysinit.target. Dec 13 14:06:27.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:27.731869 systemd[1]: Reached target basic.target. Dec 13 14:06:27.733333 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:06:27.734484 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:06:27.735722 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:06:27.737055 systemd[1]: Reached target remote-fs.target. Dec 13 14:06:27.738894 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:06:27.746365 systemd[1]: Finished dracut-pre-mount.service. 
Dec 13 14:06:27.748299 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:06:27.759409 systemd-fsck[774]: ROOT: clean, 621/553520 files, 56020/553472 blocks Dec 13 14:06:27.767729 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:06:27.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:27.769755 systemd[1]: Mounting sysroot.mount... Dec 13 14:06:27.776960 systemd[1]: Mounted sysroot.mount. Dec 13 14:06:27.778096 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:06:27.777666 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:06:27.779806 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:06:27.780647 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:06:27.780685 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:06:27.780707 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:06:27.782365 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:06:27.784105 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:06:27.788044 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:06:27.791030 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:06:27.794958 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:06:27.798597 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:06:27.822596 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:06:27.824016 systemd[1]: Starting ignition-mount.service... 
Dec 13 14:06:27.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:27.825304 systemd[1]: Starting sysroot-boot.service... Dec 13 14:06:27.829035 bash[825]: umount: /sysroot/usr/share/oem: not mounted. Dec 13 14:06:27.837096 ignition[827]: INFO : Ignition 2.14.0 Dec 13 14:06:27.837096 ignition[827]: INFO : Stage: mount Dec 13 14:06:27.839111 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:06:27.839111 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:06:27.839111 ignition[827]: INFO : mount: mount passed Dec 13 14:06:27.839111 ignition[827]: INFO : Ignition finished successfully Dec 13 14:06:27.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:27.839866 systemd[1]: Finished ignition-mount.service. Dec 13 14:06:27.845083 systemd[1]: Finished sysroot-boot.service. Dec 13 14:06:27.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:28.520069 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:06:28.526786 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (835) Dec 13 14:06:28.526814 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:06:28.526823 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:06:28.527412 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:06:28.530657 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:06:28.532126 systemd[1]: Starting ignition-files.service... 
Dec 13 14:06:28.545105 ignition[855]: INFO : Ignition 2.14.0 Dec 13 14:06:28.545105 ignition[855]: INFO : Stage: files Dec 13 14:06:28.546708 ignition[855]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:06:28.546708 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:06:28.546708 ignition[855]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:06:28.549905 ignition[855]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:06:28.549905 ignition[855]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:06:28.552555 ignition[855]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:06:28.552555 ignition[855]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:06:28.555131 ignition[855]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:06:28.555131 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:06:28.555131 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:06:28.555131 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:06:28.555131 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:06:28.555131 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:06:28.555131 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:06:28.555131 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:06:28.555131 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 14:06:28.553081 unknown[855]: wrote ssh authorized keys file for user: core Dec 13 14:06:28.808119 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 13 14:06:29.079864 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:06:29.081937 ignition[855]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Dec 13 14:06:29.083323 ignition[855]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 14:06:29.085322 ignition[855]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 14:06:29.087046 ignition[855]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Dec 13 14:06:29.087046 ignition[855]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 14:06:29.087046 ignition[855]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 14:06:29.116889 ignition[855]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 14:06:29.119159 ignition[855]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 14:06:29.119159 ignition[855]: INFO : files: createResultFile: 
createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:06:29.119159 ignition[855]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:06:29.119159 ignition[855]: INFO : files: files passed Dec 13 14:06:29.119159 ignition[855]: INFO : Ignition finished successfully Dec 13 14:06:29.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.119259 systemd[1]: Finished ignition-files.service. Dec 13 14:06:29.122073 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:06:29.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.123477 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:06:29.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.132456 initrd-setup-root-after-ignition[879]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Dec 13 14:06:29.124098 systemd[1]: Starting ignition-quench.service... Dec 13 14:06:29.135633 initrd-setup-root-after-ignition[882]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:06:29.127555 systemd[1]: ignition-quench.service: Deactivated successfully. 
Dec 13 14:06:29.127634 systemd[1]: Finished ignition-quench.service. Dec 13 14:06:29.129188 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:06:29.130857 systemd[1]: Reached target ignition-complete.target. Dec 13 14:06:29.133671 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:06:29.145473 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:06:29.145563 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:06:29.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.147152 systemd[1]: Reached target initrd-fs.target. Dec 13 14:06:29.148425 systemd[1]: Reached target initrd.target. Dec 13 14:06:29.149699 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:06:29.150381 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:06:29.160145 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:06:29.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.161678 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:06:29.168993 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:06:29.169869 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:06:29.171249 systemd[1]: Stopped target timers.target. Dec 13 14:06:29.172546 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Dec 13 14:06:29.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.172649 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:06:29.173870 systemd[1]: Stopped target initrd.target. Dec 13 14:06:29.175256 systemd[1]: Stopped target basic.target. Dec 13 14:06:29.176537 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:06:29.177804 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:06:29.179038 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:06:29.180474 systemd[1]: Stopped target remote-fs.target. Dec 13 14:06:29.181786 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:06:29.183126 systemd[1]: Stopped target sysinit.target. Dec 13 14:06:29.184399 systemd[1]: Stopped target local-fs.target. Dec 13 14:06:29.185672 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:06:29.186896 systemd[1]: Stopped target swap.target. Dec 13 14:06:29.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.188056 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:06:29.188163 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:06:29.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.189490 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:06:29.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:29.190634 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:06:29.190733 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:06:29.192147 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:06:29.192277 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:06:29.193580 systemd[1]: Stopped target paths.target. Dec 13 14:06:29.194744 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:06:29.199225 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:06:29.200306 systemd[1]: Stopped target slices.target. Dec 13 14:06:29.201627 systemd[1]: Stopped target sockets.target. Dec 13 14:06:29.202861 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:06:29.202932 systemd[1]: Closed iscsid.socket. Dec 13 14:06:29.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.204019 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:06:29.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.204115 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:06:29.205564 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:06:29.205654 systemd[1]: Stopped ignition-files.service. Dec 13 14:06:29.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.207536 systemd[1]: Stopping ignition-mount.service... Dec 13 14:06:29.208452 systemd[1]: Stopping iscsiuio.service... 
Dec 13 14:06:29.210167 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:06:29.210312 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:06:29.212555 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:06:29.213728 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:06:29.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.218870 ignition[895]: INFO : Ignition 2.14.0 Dec 13 14:06:29.218870 ignition[895]: INFO : Stage: umount Dec 13 14:06:29.218870 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:06:29.218870 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:06:29.218870 ignition[895]: INFO : umount: umount passed Dec 13 14:06:29.218870 ignition[895]: INFO : Ignition finished successfully Dec 13 14:06:29.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.213852 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:06:29.218082 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:06:29.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.218175 systemd[1]: Stopped dracut-pre-trigger.service. 
Dec 13 14:06:29.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.221151 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:06:29.221263 systemd[1]: Stopped iscsiuio.service. Dec 13 14:06:29.225768 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:06:29.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.226248 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:06:29.226339 systemd[1]: Stopped ignition-mount.service. Dec 13 14:06:29.227439 systemd[1]: Stopped target network.target. Dec 13 14:06:29.229388 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:06:29.229433 systemd[1]: Closed iscsiuio.socket. Dec 13 14:06:29.230139 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:06:29.230181 systemd[1]: Stopped ignition-disks.service. Dec 13 14:06:29.231026 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:06:29.231067 systemd[1]: Stopped ignition-kargs.service. 
Dec 13 14:06:29.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.231891 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:06:29.231928 systemd[1]: Stopped ignition-setup.service. Dec 13 14:06:29.232841 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:06:29.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.234083 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:06:29.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.235570 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:06:29.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.235649 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:06:29.242244 systemd-networkd[739]: eth0: DHCPv6 lease lost Dec 13 14:06:29.243323 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:06:29.243417 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:06:29.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.258000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:06:29.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 13 14:06:29.245222 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:06:29.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.245251 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:06:29.246860 systemd[1]: Stopping network-cleanup.service... Dec 13 14:06:29.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.263000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:06:29.247611 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:06:29.247665 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:06:29.249021 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:06:29.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.249058 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:06:29.250952 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:06:29.250990 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:06:29.251943 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:06:29.255700 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:06:29.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.256133 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Dec 13 14:06:29.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.256242 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:06:29.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.257655 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:06:29.257730 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:06:29.260040 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:06:29.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.260081 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:06:29.261966 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:06:29.262053 systemd[1]: Stopped network-cleanup.service. Dec 13 14:06:29.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:29.265239 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:06:29.265349 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:06:29.266741 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:06:29.266777 systemd[1]: Closed systemd-udevd-control.socket. 
Dec 13 14:06:29.269613 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:06:29.269643 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:06:29.270958 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:06:29.271000 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:06:29.272286 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:06:29.272325 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:06:29.273930 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:06:29.273968 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:06:29.276035 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:06:29.277554 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:06:29.277604 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:06:29.281263 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:06:29.281344 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:06:29.282283 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:06:29.284127 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:06:29.290312 systemd[1]: Switching root. Dec 13 14:06:29.307803 iscsid[752]: iscsid shutting down. Dec 13 14:06:29.308468 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Dec 13 14:06:29.308514 systemd-journald[290]: Journal stopped Dec 13 14:06:31.297601 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:06:31.297654 kernel: SELinux: Class anon_inode not defined in policy. 
Dec 13 14:06:31.297666 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:06:31.297676 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:06:31.297686 kernel: SELinux: policy capability open_perms=1 Dec 13 14:06:31.297701 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:06:31.297714 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:06:31.297724 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:06:31.297733 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:06:31.297744 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:06:31.297758 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:06:31.297770 systemd[1]: Successfully loaded SELinux policy in 33.097ms. Dec 13 14:06:31.297791 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.128ms. Dec 13 14:06:31.297803 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:06:31.297816 systemd[1]: Detected virtualization kvm. Dec 13 14:06:31.297827 systemd[1]: Detected architecture arm64. Dec 13 14:06:31.297838 systemd[1]: Detected first boot. Dec 13 14:06:31.297849 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:06:31.297860 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:06:31.297871 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:06:31.297882 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Dec 13 14:06:31.297894 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:06:31.297908 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:06:31.297919 kernel: kauditd_printk_skb: 78 callbacks suppressed Dec 13 14:06:31.297930 kernel: audit: type=1334 audit(1734098791.147:82): prog-id=12 op=LOAD Dec 13 14:06:31.297940 kernel: audit: type=1334 audit(1734098791.147:83): prog-id=3 op=UNLOAD Dec 13 14:06:31.297950 kernel: audit: type=1334 audit(1734098791.148:84): prog-id=13 op=LOAD Dec 13 14:06:31.297960 kernel: audit: type=1334 audit(1734098791.149:85): prog-id=14 op=LOAD Dec 13 14:06:31.297970 kernel: audit: type=1334 audit(1734098791.149:86): prog-id=4 op=UNLOAD Dec 13 14:06:31.297980 kernel: audit: type=1334 audit(1734098791.149:87): prog-id=5 op=UNLOAD Dec 13 14:06:31.297992 kernel: audit: type=1334 audit(1734098791.150:88): prog-id=15 op=LOAD Dec 13 14:06:31.298002 kernel: audit: type=1334 audit(1734098791.150:89): prog-id=12 op=UNLOAD Dec 13 14:06:31.298012 kernel: audit: type=1334 audit(1734098791.151:90): prog-id=16 op=LOAD Dec 13 14:06:31.298022 kernel: audit: type=1334 audit(1734098791.152:91): prog-id=17 op=LOAD Dec 13 14:06:31.298035 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:06:31.298046 systemd[1]: Stopped iscsid.service. Dec 13 14:06:31.298057 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 14:06:31.298069 systemd[1]: Stopped initrd-switch-root.service. Dec 13 14:06:31.298080 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 14:06:31.298091 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:06:31.298104 systemd[1]: Created slice system-addon\x2drun.slice. 
Dec 13 14:06:31.298115 systemd[1]: Created slice system-getty.slice. Dec 13 14:06:31.298125 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:06:31.298136 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:06:31.298148 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:06:31.298158 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:06:31.298169 systemd[1]: Created slice user.slice. Dec 13 14:06:31.298181 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:06:31.298192 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:06:31.298225 systemd[1]: Set up automount boot.automount. Dec 13 14:06:31.298237 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:06:31.298248 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 14:06:31.298259 systemd[1]: Stopped target initrd-fs.target. Dec 13 14:06:31.298274 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 14:06:31.298291 systemd[1]: Reached target integritysetup.target. Dec 13 14:06:31.298301 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:06:31.298312 systemd[1]: Reached target remote-fs.target. Dec 13 14:06:31.298324 systemd[1]: Reached target slices.target. Dec 13 14:06:31.298335 systemd[1]: Reached target swap.target. Dec 13 14:06:31.298346 systemd[1]: Reached target torcx.target. Dec 13 14:06:31.298356 systemd[1]: Reached target veritysetup.target. Dec 13 14:06:31.298367 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:06:31.298379 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:06:31.298390 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:06:31.298402 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:06:31.298413 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:06:31.298424 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:06:31.298435 systemd[1]: Mounting dev-hugepages.mount... 
Dec 13 14:06:31.298445 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:06:31.298461 systemd[1]: Mounting media.mount... Dec 13 14:06:31.298474 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:06:31.298485 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:06:31.298497 systemd[1]: Mounting tmp.mount... Dec 13 14:06:31.298508 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:06:31.298518 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:06:31.298529 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:06:31.298539 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:06:31.298550 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:06:31.298560 systemd[1]: Starting modprobe@drm.service... Dec 13 14:06:31.298571 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:06:31.298581 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:06:31.298594 systemd[1]: Starting modprobe@loop.service... Dec 13 14:06:31.298605 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:06:31.298616 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 14:06:31.298627 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 14:06:31.298637 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 14:06:31.298648 kernel: fuse: init (API version 7.34) Dec 13 14:06:31.298659 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 14:06:31.298669 kernel: loop: module loaded Dec 13 14:06:31.298680 systemd[1]: Stopped systemd-journald.service. Dec 13 14:06:31.298691 systemd[1]: Starting systemd-journald.service... Dec 13 14:06:31.298702 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:06:31.298713 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:06:31.298723 systemd[1]: Starting systemd-remount-fs.service... 
Dec 13 14:06:31.298735 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:06:31.298746 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 14:06:31.298757 systemd[1]: Stopped verity-setup.service. Dec 13 14:06:31.298768 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:06:31.298779 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:06:31.298791 systemd[1]: Mounted media.mount. Dec 13 14:06:31.298803 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:06:31.298813 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:06:31.298823 systemd[1]: Mounted tmp.mount. Dec 13 14:06:31.298834 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:06:31.298845 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:06:31.298855 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:06:31.298866 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:06:31.298880 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:06:31.298891 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:06:31.298902 systemd[1]: Finished modprobe@drm.service. Dec 13 14:06:31.298914 systemd-journald[997]: Journal started Dec 13 14:06:31.298955 systemd-journald[997]: Runtime Journal (/run/log/journal/e1f14eb29baa4d6787e036ed13460ad3) is 6.0M, max 48.7M, 42.6M free. 
Dec 13 14:06:29.366000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:06:29.436000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:06:29.436000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:06:29.436000 audit: BPF prog-id=10 op=LOAD Dec 13 14:06:29.436000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:06:29.436000 audit: BPF prog-id=11 op=LOAD Dec 13 14:06:29.436000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:06:29.476000 audit[927]: AVC avc: denied { associate } for pid=927 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:06:29.476000 audit[927]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58ac a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=910 pid=927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:06:29.476000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:06:29.477000 audit[927]: AVC avc: denied { associate } for pid=927 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:06:29.477000 audit[927]: SYSCALL arch=c00000b7 
syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5985 a2=1ed a3=0 items=2 ppid=910 pid=927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:06:29.477000 audit: CWD cwd="/" Dec 13 14:06:29.477000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:29.477000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:29.477000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:06:31.147000 audit: BPF prog-id=12 op=LOAD Dec 13 14:06:31.147000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:06:31.148000 audit: BPF prog-id=13 op=LOAD Dec 13 14:06:31.149000 audit: BPF prog-id=14 op=LOAD Dec 13 14:06:31.149000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:06:31.149000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:06:31.150000 audit: BPF prog-id=15 op=LOAD Dec 13 14:06:31.150000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:06:31.151000 audit: BPF prog-id=16 op=LOAD Dec 13 14:06:31.152000 audit: BPF prog-id=17 op=LOAD Dec 13 14:06:31.152000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:06:31.152000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:06:31.153000 audit: BPF prog-id=18 op=LOAD Dec 13 14:06:31.153000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:06:31.153000 audit: BPF prog-id=19 op=LOAD Dec 13 14:06:31.154000 audit: BPF prog-id=20 op=LOAD Dec 13 14:06:31.154000 audit: BPF 
prog-id=16 op=UNLOAD Dec 13 14:06:31.154000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:06:31.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:31.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:31.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:31.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:31.165000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:06:31.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:31.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:31.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:31.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:31.262000 audit: BPF prog-id=21 op=LOAD Dec 13 14:06:31.262000 audit: BPF prog-id=22 op=LOAD Dec 13 14:06:31.262000 audit: BPF prog-id=23 op=LOAD Dec 13 14:06:31.262000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:06:31.262000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:06:31.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:31.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:31.292000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:06:31.292000 audit[997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=fffff32c5ad0 a2=4000 a3=1 items=0 ppid=1 pid=997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:06:31.292000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:06:31.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:31.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:31.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:31.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:31.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:31.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:31.146580 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:06:29.474964 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2024-12-13T14:06:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:06:31.146592 systemd[1]: Unnecessary job was removed for dev-vda6.device. 
Dec 13 14:06:29.475222 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2024-12-13T14:06:29Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:06:31.155264 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 14:06:29.475241 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2024-12-13T14:06:29Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:06:31.300744 systemd[1]: Started systemd-journald.service. Dec 13 14:06:29.475271 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2024-12-13T14:06:29Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 14:06:29.475280 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2024-12-13T14:06:29Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 14:06:29.475306 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2024-12-13T14:06:29Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 14:06:31.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:29.475319 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2024-12-13T14:06:29Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 14:06:29.475517 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2024-12-13T14:06:29Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 14:06:29.475549 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2024-12-13T14:06:29Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:06:29.475560 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2024-12-13T14:06:29Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:06:29.475934 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2024-12-13T14:06:29Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 14:06:29.475968 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2024-12-13T14:06:29Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 14:06:31.301357 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:06:29.475984 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2024-12-13T14:06:29Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 14:06:31.301512 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 14:06:29.475998 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2024-12-13T14:06:29Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 14:06:29.476014 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2024-12-13T14:06:29Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 14:06:29.476026 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2024-12-13T14:06:29Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 14:06:30.893883 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2024-12-13T14:06:30Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:06:30.894145 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2024-12-13T14:06:30Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:06:30.894261 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2024-12-13T14:06:30Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:06:30.894428 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2024-12-13T14:06:30Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" 
image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:06:30.894485 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2024-12-13T14:06:30Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 14:06:30.894542 /usr/lib/systemd/system-generators/torcx-generator[927]: time="2024-12-13T14:06:30Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 14:06:31.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:31.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:31.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:31.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:31.302767 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:06:31.302971 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:06:31.304071 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:06:31.304230 systemd[1]: Finished modprobe@loop.service. 
Dec 13 14:06:31.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:31.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:31.305374 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 14:06:31.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:31.306427 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:06:31.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:31.307648 systemd[1]: Finished systemd-network-generator.service.
Dec 13 14:06:31.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:31.308933 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 14:06:31.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:31.310225 systemd[1]: Reached target network-pre.target.
Dec 13 14:06:31.312085 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 14:06:31.314003 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 14:06:31.314842 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:06:31.317335 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 14:06:31.319243 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 14:06:31.320096 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:06:31.321082 systemd[1]: Starting systemd-random-seed.service...
Dec 13 14:06:31.326058 systemd-journald[997]: Time spent on flushing to /var/log/journal/e1f14eb29baa4d6787e036ed13460ad3 is 12.999ms for 978 entries.
Dec 13 14:06:31.326058 systemd-journald[997]: System Journal (/var/log/journal/e1f14eb29baa4d6787e036ed13460ad3) is 8.0M, max 195.6M, 187.6M free.
Dec 13 14:06:31.358905 systemd-journald[997]: Received client request to flush runtime journal.
Dec 13 14:06:31.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:31.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:31.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:31.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:31.321976 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:06:31.322922 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:06:31.325509 systemd[1]: Starting systemd-sysusers.service...
Dec 13 14:06:31.329371 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 14:06:31.360257 udevadm[1029]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 14:06:31.330407 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 14:06:31.331423 systemd[1]: Finished systemd-random-seed.service.
Dec 13 14:06:31.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:31.332423 systemd[1]: Reached target first-boot-complete.target.
Dec 13 14:06:31.336433 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:06:31.338630 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 14:06:31.343165 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:06:31.347439 systemd[1]: Finished systemd-sysusers.service.
Dec 13 14:06:31.359802 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 14:06:31.679668 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 14:06:31.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:31.680000 audit: BPF prog-id=24 op=LOAD
Dec 13 14:06:31.681897 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:06:31.681000 audit: BPF prog-id=25 op=LOAD
Dec 13 14:06:31.681000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 14:06:31.681000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 14:06:31.698336 systemd-udevd[1031]: Using default interface naming scheme 'v252'.
Dec 13 14:06:31.711299 systemd[1]: Started systemd-udevd.service.
Dec 13 14:06:31.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:31.712000 audit: BPF prog-id=26 op=LOAD
Dec 13 14:06:31.714287 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:06:31.727000 audit: BPF prog-id=27 op=LOAD
Dec 13 14:06:31.727000 audit: BPF prog-id=28 op=LOAD
Dec 13 14:06:31.727000 audit: BPF prog-id=29 op=LOAD
Dec 13 14:06:31.728076 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:06:31.731557 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
Dec 13 14:06:31.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:31.764516 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:06:31.775955 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:06:31.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:31.813069 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 14:06:31.815175 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 14:06:31.822598 systemd-networkd[1040]: lo: Link UP
Dec 13 14:06:31.822606 systemd-networkd[1040]: lo: Gained carrier
Dec 13 14:06:31.823095 systemd-networkd[1040]: Enumeration completed
Dec 13 14:06:31.823185 systemd[1]: Started systemd-networkd.service.
Dec 13 14:06:31.823208 systemd-networkd[1040]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:06:31.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:31.824423 systemd-networkd[1040]: eth0: Link UP
Dec 13 14:06:31.824427 systemd-networkd[1040]: eth0: Gained carrier
Dec 13 14:06:31.827838 lvm[1064]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:06:31.846305 systemd-networkd[1040]: eth0: DHCPv4 address 10.0.0.69/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 14:06:31.859860 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 14:06:31.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:31.860840 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:06:31.862687 systemd[1]: Starting lvm2-activation.service...
Dec 13 14:06:31.866208 lvm[1065]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:06:31.896965 systemd[1]: Finished lvm2-activation.service.
Dec 13 14:06:31.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:31.897915 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:06:31.898769 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 14:06:31.898796 systemd[1]: Reached target local-fs.target.
Dec 13 14:06:31.899562 systemd[1]: Reached target machines.target.
Dec 13 14:06:31.901379 systemd[1]: Starting ldconfig.service...
Dec 13 14:06:31.902401 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:06:31.902452 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:06:31.903385 systemd[1]: Starting systemd-boot-update.service...
Dec 13 14:06:31.906230 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 14:06:31.908147 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 14:06:31.910584 systemd[1]: Starting systemd-sysext.service...
Dec 13 14:06:31.911598 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1067 (bootctl)
Dec 13 14:06:31.912556 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 14:06:31.920137 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 14:06:31.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:31.923430 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 14:06:31.939498 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 14:06:31.939685 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 14:06:31.986046 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 14:06:31.987340 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 14:06:31.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:31.992224 kernel: loop0: detected capacity change from 0 to 194512
Dec 13 14:06:32.004230 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 14:06:32.004823 systemd-fsck[1075]: fsck.fat 4.2 (2021-01-31)
Dec 13 14:06:32.004823 systemd-fsck[1075]: /dev/vda1: 236 files, 117175/258078 clusters
Dec 13 14:06:32.007009 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 14:06:32.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.009975 systemd[1]: Mounting boot.mount...
Dec 13 14:06:32.017711 systemd[1]: Mounted boot.mount.
Dec 13 14:06:32.020273 kernel: loop1: detected capacity change from 0 to 194512
Dec 13 14:06:32.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.025000 systemd[1]: Finished systemd-boot-update.service.
Dec 13 14:06:32.025980 (sd-sysext)[1080]: Using extensions 'kubernetes'.
Dec 13 14:06:32.026433 (sd-sysext)[1080]: Merged extensions into '/usr'.
Dec 13 14:06:32.041658 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:06:32.042999 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:06:32.045339 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:06:32.047332 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:06:32.048313 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:06:32.048470 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:06:32.049261 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:06:32.049468 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:06:32.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.051160 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:06:32.051297 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:06:32.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.052955 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:06:32.053111 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:06:32.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.054761 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:06:32.054890 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:06:32.083762 ldconfig[1066]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 14:06:32.087040 systemd[1]: Finished ldconfig.service.
Dec 13 14:06:32.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.281788 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 14:06:32.286688 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 14:06:32.288512 systemd[1]: Finished systemd-sysext.service.
Dec 13 14:06:32.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.290447 systemd[1]: Starting ensure-sysext.service...
Dec 13 14:06:32.292095 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 14:06:32.296078 systemd[1]: Reloading.
Dec 13 14:06:32.304499 systemd-tmpfiles[1088]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 14:06:32.306126 systemd-tmpfiles[1088]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 14:06:32.308711 systemd-tmpfiles[1088]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 14:06:32.326440 /usr/lib/systemd/system-generators/torcx-generator[1108]: time="2024-12-13T14:06:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:06:32.326477 /usr/lib/systemd/system-generators/torcx-generator[1108]: time="2024-12-13T14:06:32Z" level=info msg="torcx already run"
Dec 13 14:06:32.382736 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:06:32.382757 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:06:32.397849 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:06:32.439000 audit: BPF prog-id=30 op=LOAD
Dec 13 14:06:32.439000 audit: BPF prog-id=31 op=LOAD
Dec 13 14:06:32.439000 audit: BPF prog-id=24 op=UNLOAD
Dec 13 14:06:32.439000 audit: BPF prog-id=25 op=UNLOAD
Dec 13 14:06:32.441000 audit: BPF prog-id=32 op=LOAD
Dec 13 14:06:32.441000 audit: BPF prog-id=27 op=UNLOAD
Dec 13 14:06:32.441000 audit: BPF prog-id=33 op=LOAD
Dec 13 14:06:32.441000 audit: BPF prog-id=34 op=LOAD
Dec 13 14:06:32.441000 audit: BPF prog-id=28 op=UNLOAD
Dec 13 14:06:32.441000 audit: BPF prog-id=29 op=UNLOAD
Dec 13 14:06:32.443000 audit: BPF prog-id=35 op=LOAD
Dec 13 14:06:32.443000 audit: BPF prog-id=26 op=UNLOAD
Dec 13 14:06:32.443000 audit: BPF prog-id=36 op=LOAD
Dec 13 14:06:32.443000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 14:06:32.443000 audit: BPF prog-id=37 op=LOAD
Dec 13 14:06:32.443000 audit: BPF prog-id=38 op=LOAD
Dec 13 14:06:32.443000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 14:06:32.443000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 14:06:32.445692 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 14:06:32.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.450009 systemd[1]: Starting audit-rules.service...
Dec 13 14:06:32.451941 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 14:06:32.453921 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 14:06:32.457000 audit: BPF prog-id=39 op=LOAD
Dec 13 14:06:32.458937 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:06:32.461000 audit: BPF prog-id=40 op=LOAD
Dec 13 14:06:32.462642 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 14:06:32.464558 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 14:06:32.469254 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:06:32.471788 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:06:32.472000 audit[1158]: SYSTEM_BOOT pid=1158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.473619 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:06:32.475425 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:06:32.476133 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:06:32.476291 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:06:32.477166 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 14:06:32.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.478482 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:06:32.478596 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:06:32.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.479754 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:06:32.479860 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:06:32.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.481186 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:06:32.481403 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:06:32.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.484278 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:06:32.484402 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:06:32.484477 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:06:32.486042 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 14:06:32.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.487932 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:06:32.489090 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:06:32.491067 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:06:32.492922 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:06:32.493658 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:06:32.493781 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:06:32.493882 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:06:32.494729 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 14:06:32.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.496673 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:06:32.496799 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:06:32.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:06:32.497976 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:06:32.498085 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:06:32.498000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 14:06:32.498000 audit[1170]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd3ec0110 a2=420 a3=0 items=0 ppid=1147 pid=1170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:06:32.498000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 14:06:32.499152 augenrules[1170]: No rules
Dec 13 14:06:32.499429 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:06:32.499548 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:06:32.500727 systemd[1]: Finished audit-rules.service.
Dec 13 14:06:32.503905 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:06:32.505057 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:06:32.507416 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:06:32.509346 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:06:32.511172 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:06:32.511916 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:06:32.512042 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:06:32.513225 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 14:06:32.515435 systemd[1]: Starting systemd-update-done.service...
Dec 13 14:06:32.516260 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:06:32.517541 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:06:32.517701 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:06:32.519017 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:06:32.519137 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:06:32.520098 systemd-resolved[1153]: Positive Trust Anchors:
Dec 13 14:06:32.520152 systemd-resolved[1153]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:06:32.520181 systemd-resolved[1153]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:06:32.520316 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:06:32.520423 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:06:32.521807 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:06:32.521915 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:06:32.523322 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:06:32.523411 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:06:32.524160 systemd[1]: Started systemd-timesyncd.service.
Dec 13 14:06:32.525080 systemd-timesyncd[1157]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 14:06:32.525348 systemd-timesyncd[1157]: Initial clock synchronization to Fri 2024-12-13 14:06:32.232057 UTC.
Dec 13 14:06:32.525895 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:06:32.526914 systemd[1]: Reached target time-set.target.
Dec 13 14:06:32.527980 systemd[1]: Finished systemd-update-done.service.
Dec 13 14:06:32.529564 systemd-resolved[1153]: Defaulting to hostname 'linux'.
Dec 13 14:06:32.530845 systemd[1]: Started systemd-resolved.service.
Dec 13 14:06:32.531672 systemd[1]: Reached target network.target.
Dec 13 14:06:32.532519 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:06:32.533290 systemd[1]: Reached target sysinit.target.
Dec 13 14:06:32.534094 systemd[1]: Started motdgen.path.
Dec 13 14:06:32.534883 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 14:06:32.536110 systemd[1]: Started logrotate.timer.
Dec 13 14:06:32.536941 systemd[1]: Started mdadm.timer.
Dec 13 14:06:32.537630 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 14:06:32.538491 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 14:06:32.538524 systemd[1]: Reached target paths.target.
Dec 13 14:06:32.539229 systemd[1]: Reached target timers.target.
Dec 13 14:06:32.540253 systemd[1]: Listening on dbus.socket.
Dec 13 14:06:32.541974 systemd[1]: Starting docker.socket...
Dec 13 14:06:32.544899 systemd[1]: Listening on sshd.socket.
Dec 13 14:06:32.545732 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:06:32.546154 systemd[1]: Listening on docker.socket.
Dec 13 14:06:32.546997 systemd[1]: Reached target sockets.target.
Dec 13 14:06:32.547782 systemd[1]: Reached target basic.target.
Dec 13 14:06:32.548584 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:06:32.548615 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:06:32.549552 systemd[1]: Starting containerd.service...
Dec 13 14:06:32.551193 systemd[1]: Starting dbus.service...
Dec 13 14:06:32.552832 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 14:06:32.554779 systemd[1]: Starting extend-filesystems.service...
Dec 13 14:06:32.555634 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 14:06:32.556896 systemd[1]: Starting motdgen.service...
Dec 13 14:06:32.559389 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 14:06:32.563267 systemd[1]: Starting sshd-keygen.service...
Dec 13 14:06:32.567261 systemd[1]: Starting systemd-logind.service...
Dec 13 14:06:32.571712 jq[1189]: false
Dec 13 14:06:32.568003 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:06:32.568091 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 14:06:32.568570 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 14:06:32.569166 systemd[1]: Starting update-engine.service...
Dec 13 14:06:32.570851 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 14:06:32.573174 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 14:06:32.573368 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 14:06:32.573686 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 14:06:32.573828 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 14:06:32.578304 jq[1207]: true
Dec 13 14:06:32.582975 jq[1210]: true
Dec 13 14:06:32.585241 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 14:06:32.585403 systemd[1]: Finished motdgen.service.
Dec 13 14:06:32.588981 dbus-daemon[1188]: [system] SELinux support is enabled
Dec 13 14:06:32.590036 extend-filesystems[1190]: Found loop1
Dec 13 14:06:32.590036 extend-filesystems[1190]: Found vda
Dec 13 14:06:32.590036 extend-filesystems[1190]: Found vda1
Dec 13 14:06:32.590036 extend-filesystems[1190]: Found vda2
Dec 13 14:06:32.590036 extend-filesystems[1190]: Found vda3
Dec 13 14:06:32.590036 extend-filesystems[1190]: Found usr
Dec 13 14:06:32.590036 extend-filesystems[1190]: Found vda4
Dec 13 14:06:32.590036 extend-filesystems[1190]: Found vda6
Dec 13 14:06:32.590036 extend-filesystems[1190]: Found vda7
Dec 13 14:06:32.590036 extend-filesystems[1190]: Found vda9
Dec 13 14:06:32.590036 extend-filesystems[1190]: Checking size of /dev/vda9
Dec 13 14:06:32.595126 systemd[1]: Started dbus.service.
Dec 13 14:06:32.597597 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:06:32.597622 systemd[1]: Reached target system-config.target.
Dec 13 14:06:32.598544 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:06:32.598559 systemd[1]: Reached target user-config.target.
Dec 13 14:06:32.617070 extend-filesystems[1190]: Resized partition /dev/vda9
Dec 13 14:06:32.629302 bash[1232]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:06:32.629808 extend-filesystems[1235]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 14:06:32.629870 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 14:06:32.639219 systemd-logind[1200]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 14:06:32.639584 systemd-logind[1200]: New seat seat0.
Dec 13 14:06:32.645728 systemd[1]: Started systemd-logind.service.
Dec 13 14:06:32.648212 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 13 14:06:32.651682 update_engine[1202]: I1213 14:06:32.651393  1202 main.cc:92] Flatcar Update Engine starting
Dec 13 14:06:32.654951 systemd[1]: Started update-engine.service.
Dec 13 14:06:32.655080 update_engine[1202]: I1213 14:06:32.654953  1202 update_check_scheduler.cc:74] Next update check in 5m6s
Dec 13 14:06:32.657677 systemd[1]: Started locksmithd.service.
Dec 13 14:06:32.666225 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 13 14:06:32.675688 extend-filesystems[1235]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 14:06:32.675688 extend-filesystems[1235]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 14:06:32.675688 extend-filesystems[1235]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 13 14:06:32.679945 extend-filesystems[1190]: Resized filesystem in /dev/vda9
Dec 13 14:06:32.678644 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:06:32.682021 env[1209]: time="2024-12-13T14:06:32.676185200Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 14:06:32.678815 systemd[1]: Finished extend-filesystems.service.
Dec 13 14:06:32.697219 env[1209]: time="2024-12-13T14:06:32.696418720Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:06:32.697219 env[1209]: time="2024-12-13T14:06:32.696606560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:06:32.698149 env[1209]: time="2024-12-13T14:06:32.697860840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:06:32.698149 env[1209]: time="2024-12-13T14:06:32.697902160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:06:32.698149 env[1209]: time="2024-12-13T14:06:32.698096200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:06:32.698149 env[1209]: time="2024-12-13T14:06:32.698113800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:06:32.698149 env[1209]: time="2024-12-13T14:06:32.698126600Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 14:06:32.698149 env[1209]: time="2024-12-13T14:06:32.698135560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:06:32.698419 env[1209]: time="2024-12-13T14:06:32.698221280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:06:32.698638 env[1209]: time="2024-12-13T14:06:32.698504280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:06:32.698667 env[1209]: time="2024-12-13T14:06:32.698642560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:06:32.698667 env[1209]: time="2024-12-13T14:06:32.698659480Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:06:32.698771 env[1209]: time="2024-12-13T14:06:32.698714680Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 14:06:32.698771 env[1209]: time="2024-12-13T14:06:32.698732960Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:06:32.701927 env[1209]: time="2024-12-13T14:06:32.701897240Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:06:32.701927 env[1209]: time="2024-12-13T14:06:32.701929320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:06:32.702010 env[1209]: time="2024-12-13T14:06:32.701942400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:06:32.702010 env[1209]: time="2024-12-13T14:06:32.701978200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:06:32.702010 env[1209]: time="2024-12-13T14:06:32.701992920Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:06:32.702110 env[1209]: time="2024-12-13T14:06:32.702010400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:06:32.702110 env[1209]: time="2024-12-13T14:06:32.702071680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:06:32.702426 env[1209]: time="2024-12-13T14:06:32.702406280Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:06:32.702471 env[1209]: time="2024-12-13T14:06:32.702429480Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 14:06:32.702471 env[1209]: time="2024-12-13T14:06:32.702443400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:06:32.702471 env[1209]: time="2024-12-13T14:06:32.702464880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:06:32.702540 env[1209]: time="2024-12-13T14:06:32.702480160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:06:32.702610 env[1209]: time="2024-12-13T14:06:32.702590880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:06:32.702682 env[1209]: time="2024-12-13T14:06:32.702666120Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:06:32.702942 env[1209]: time="2024-12-13T14:06:32.702922640Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:06:32.702976 env[1209]: time="2024-12-13T14:06:32.702953000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:06:32.702976 env[1209]: time="2024-12-13T14:06:32.702967080Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:06:32.703086 env[1209]: time="2024-12-13T14:06:32.703071800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:06:32.703118 env[1209]: time="2024-12-13T14:06:32.703087200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:06:32.703118 env[1209]: time="2024-12-13T14:06:32.703100040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:06:32.703174 env[1209]: time="2024-12-13T14:06:32.703118880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:06:32.703250 env[1209]: time="2024-12-13T14:06:32.703175440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:06:32.703250 env[1209]: time="2024-12-13T14:06:32.703188280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:06:32.703250 env[1209]: time="2024-12-13T14:06:32.703211320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:06:32.703250 env[1209]: time="2024-12-13T14:06:32.703223600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:06:32.703250 env[1209]: time="2024-12-13T14:06:32.703236520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:06:32.703383 env[1209]: time="2024-12-13T14:06:32.703363880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:06:32.703410 env[1209]: time="2024-12-13T14:06:32.703389560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:06:32.703410 env[1209]: time="2024-12-13T14:06:32.703402360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 14:06:32.703448 env[1209]: time="2024-12-13T14:06:32.703413400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 14:06:32.703448 env[1209]: time="2024-12-13T14:06:32.703427000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 14:06:32.703448 env[1209]: time="2024-12-13T14:06:32.703438160Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 14:06:32.703520 env[1209]: time="2024-12-13T14:06:32.703464680Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 14:06:32.703520 env[1209]: time="2024-12-13T14:06:32.703501800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:06:32.703725 env[1209]: time="2024-12-13T14:06:32.703680840Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 14:06:32.704315 env[1209]: time="2024-12-13T14:06:32.703737840Z" level=info msg="Connect containerd service"
Dec 13 14:06:32.704315 env[1209]: time="2024-12-13T14:06:32.703767040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 14:06:32.704426 env[1209]: time="2024-12-13T14:06:32.704401960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:06:32.704736 env[1209]: time="2024-12-13T14:06:32.704668800Z" level=info msg="Start subscribing containerd event"
Dec 13 14:06:32.705742 env[1209]: time="2024-12-13T14:06:32.704965160Z" level=info msg="Start recovering state"
Dec 13 14:06:32.705742 env[1209]: time="2024-12-13T14:06:32.705058600Z" level=info msg="Start event monitor"
Dec 13 14:06:32.705742 env[1209]: time="2024-12-13T14:06:32.705245960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 14:06:32.705742 env[1209]: time="2024-12-13T14:06:32.705304760Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 14:06:32.705742 env[1209]: time="2024-12-13T14:06:32.705363560Z" level=info msg="containerd successfully booted in 0.034091s"
Dec 13 14:06:32.705441 systemd[1]: Started containerd.service.
Dec 13 14:06:32.706005 env[1209]: time="2024-12-13T14:06:32.705982720Z" level=info msg="Start snapshots syncer"
Dec 13 14:06:32.706067 env[1209]: time="2024-12-13T14:06:32.706054720Z" level=info msg="Start cni network conf syncer for default"
Dec 13 14:06:32.706125 env[1209]: time="2024-12-13T14:06:32.706113120Z" level=info msg="Start streaming server"
Dec 13 14:06:32.711924 locksmithd[1238]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 14:06:33.716192 sshd_keygen[1206]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 14:06:33.732609 systemd[1]: Finished sshd-keygen.service.
Dec 13 14:06:33.734777 systemd[1]: Starting issuegen.service...
Dec 13 14:06:33.739050 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 14:06:33.739318 systemd[1]: Finished issuegen.service.
Dec 13 14:06:33.741283 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 14:06:33.746814 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 14:06:33.748823 systemd[1]: Started getty@tty1.service.
Dec 13 14:06:33.750734 systemd[1]: Started serial-getty@ttyAMA0.service.
Dec 13 14:06:33.751717 systemd[1]: Reached target getty.target.
Dec 13 14:06:33.862351 systemd-networkd[1040]: eth0: Gained IPv6LL
Dec 13 14:06:33.863902 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 14:06:33.865097 systemd[1]: Reached target network-online.target.
Dec 13 14:06:33.867413 systemd[1]: Starting kubelet.service...
Dec 13 14:06:34.341274 systemd[1]: Started kubelet.service.
Dec 13 14:06:34.342448 systemd[1]: Reached target multi-user.target.
Dec 13 14:06:34.344430 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 14:06:34.350876 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 14:06:34.351020 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 14:06:34.352090 systemd[1]: Startup finished in 567ms (kernel) + 3.764s (initrd) + 5.020s (userspace) = 9.352s.
Dec 13 14:06:34.824869 kubelet[1264]: E1213 14:06:34.824757    1264 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:06:34.826898 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:06:34.827029 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:06:37.571896 systemd[1]: Created slice system-sshd.slice.
Dec 13 14:06:37.572938 systemd[1]: Started sshd@0-10.0.0.69:22-10.0.0.1:60520.service.
Dec 13 14:06:37.612458 sshd[1274]: Accepted publickey for core from 10.0.0.1 port 60520 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:06:37.616323 sshd[1274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:06:37.624640 systemd-logind[1200]: New session 1 of user core.
Dec 13 14:06:37.625497 systemd[1]: Created slice user-500.slice.
Dec 13 14:06:37.626523 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 14:06:37.634261 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 14:06:37.635475 systemd[1]: Starting user@500.service...
Dec 13 14:06:37.638136 (systemd)[1277]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:06:37.694136 systemd[1277]: Queued start job for default target default.target.
Dec 13 14:06:37.694577 systemd[1277]: Reached target paths.target.
Dec 13 14:06:37.694596 systemd[1277]: Reached target sockets.target.
Dec 13 14:06:37.694607 systemd[1277]: Reached target timers.target.
Dec 13 14:06:37.694616 systemd[1277]: Reached target basic.target.
Dec 13 14:06:37.694668 systemd[1277]: Reached target default.target.
Dec 13 14:06:37.694692 systemd[1277]: Startup finished in 51ms.
Dec 13 14:06:37.694725 systemd[1]: Started user@500.service.
Dec 13 14:06:37.695600 systemd[1]: Started session-1.scope.
Dec 13 14:06:37.744523 systemd[1]: Started sshd@1-10.0.0.69:22-10.0.0.1:60528.service.
Dec 13 14:06:37.796244 sshd[1286]: Accepted publickey for core from 10.0.0.1 port 60528 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:06:37.797724 sshd[1286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:06:37.802213 systemd[1]: Started session-2.scope.
Dec 13 14:06:37.802362 systemd-logind[1200]: New session 2 of user core.
Dec 13 14:06:37.854165 sshd[1286]: pam_unix(sshd:session): session closed for user core
Dec 13 14:06:37.857274 systemd[1]: Started sshd@2-10.0.0.69:22-10.0.0.1:60538.service.
Dec 13 14:06:37.857714 systemd[1]: sshd@1-10.0.0.69:22-10.0.0.1:60528.service: Deactivated successfully.
Dec 13 14:06:37.858378 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 14:06:37.858866 systemd-logind[1200]: Session 2 logged out. Waiting for processes to exit.
Dec 13 14:06:37.859558 systemd-logind[1200]: Removed session 2.
Dec 13 14:06:37.892887 sshd[1291]: Accepted publickey for core from 10.0.0.1 port 60538 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:06:37.894028 sshd[1291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:06:37.896878 systemd-logind[1200]: New session 3 of user core.
Dec 13 14:06:37.897636 systemd[1]: Started session-3.scope.
Dec 13 14:06:37.945243 sshd[1291]: pam_unix(sshd:session): session closed for user core
Dec 13 14:06:37.947650 systemd[1]: sshd@2-10.0.0.69:22-10.0.0.1:60538.service: Deactivated successfully.
Dec 13 14:06:37.948172 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 14:06:37.948670 systemd-logind[1200]: Session 3 logged out. Waiting for processes to exit.
Dec 13 14:06:37.949654 systemd[1]: Started sshd@3-10.0.0.69:22-10.0.0.1:60542.service.
Dec 13 14:06:37.950320 systemd-logind[1200]: Removed session 3.
Dec 13 14:06:37.984532 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 60542 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:06:37.985670 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:06:37.988631 systemd-logind[1200]: New session 4 of user core.
Dec 13 14:06:37.989388 systemd[1]: Started session-4.scope.
Dec 13 14:06:38.041233 sshd[1298]: pam_unix(sshd:session): session closed for user core
Dec 13 14:06:38.043673 systemd[1]: sshd@3-10.0.0.69:22-10.0.0.1:60542.service: Deactivated successfully.
Dec 13 14:06:38.044195 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 14:06:38.044670 systemd-logind[1200]: Session 4 logged out. Waiting for processes to exit.
Dec 13 14:06:38.045636 systemd[1]: Started sshd@4-10.0.0.69:22-10.0.0.1:60556.service.
Dec 13 14:06:38.046319 systemd-logind[1200]: Removed session 4.
Dec 13 14:06:38.080831 sshd[1304]: Accepted publickey for core from 10.0.0.1 port 60556 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:06:38.081971 sshd[1304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:06:38.085055 systemd-logind[1200]: New session 5 of user core.
Dec 13 14:06:38.085799 systemd[1]: Started session-5.scope.
Dec 13 14:06:38.140241 sudo[1307]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 14:06:38.140447 sudo[1307]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 14:06:38.151255 systemd[1]: Starting coreos-metadata.service...
Dec 13 14:06:38.157316 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 13 14:06:38.157475 systemd[1]: Finished coreos-metadata.service.
Dec 13 14:06:38.631262 systemd[1]: Stopped kubelet.service.
Dec 13 14:06:38.633142 systemd[1]: Starting kubelet.service...
Dec 13 14:06:38.649457 systemd[1]: Reloading.
Dec 13 14:06:38.695802 /usr/lib/systemd/system-generators/torcx-generator[1371]: time="2024-12-13T14:06:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:06:38.695834 /usr/lib/systemd/system-generators/torcx-generator[1371]: time="2024-12-13T14:06:38Z" level=info msg="torcx already run"
Dec 13 14:06:38.759487 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:06:38.759505 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:06:38.774773 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:06:38.838289 systemd[1]: Started kubelet.service.
Dec 13 14:06:38.839539 systemd[1]: Stopping kubelet.service...
Dec 13 14:06:38.839776 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:06:38.839935 systemd[1]: Stopped kubelet.service.
Dec 13 14:06:38.841394 systemd[1]: Starting kubelet.service...
Dec 13 14:06:38.922050 systemd[1]: Started kubelet.service.
Dec 13 14:06:38.962965 kubelet[1416]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:06:38.963282 kubelet[1416]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:06:38.963328 kubelet[1416]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:06:38.963471 kubelet[1416]: I1213 14:06:38.963439    1416 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:06:39.811314 kubelet[1416]: I1213 14:06:39.811277    1416 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 14:06:39.811314 kubelet[1416]: I1213 14:06:39.811309    1416 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:06:39.811515 kubelet[1416]: I1213 14:06:39.811501    1416 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 14:06:39.839835 kubelet[1416]: I1213 14:06:39.839807    1416 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:06:39.850780 kubelet[1416]: I1213 14:06:39.850743    1416 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:06:39.852474 kubelet[1416]: I1213 14:06:39.852438    1416 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:06:39.852651 kubelet[1416]: I1213 14:06:39.852628    1416 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 14:06:39.852651 kubelet[1416]: I1213 14:06:39.852651    1416 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:06:39.852743 kubelet[1416]: I1213 14:06:39.852659    1416 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 14:06:39.852791 kubelet[1416]: I1213 14:06:39.852777    1416 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:06:39.856677 kubelet[1416]: I1213 14:06:39.856653    1416 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 14:06:39.856716 kubelet[1416]: I1213 14:06:39.856683    1416 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:06:39.856716 kubelet[1416]: I1213 14:06:39.856705    1416 kubelet.go:312] "Adding apiserver pod source"
Dec 13 14:06:39.856756 kubelet[1416]: I1213 14:06:39.856720    1416 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:06:39.856865 kubelet[1416]: E1213 14:06:39.856846    1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:06:39.856981 kubelet[1416]: E1213 14:06:39.856968    1416 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:06:39.857631 kubelet[1416]: I1213 14:06:39.857603    1416 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:06:39.858204 kubelet[1416]: I1213 14:06:39.858176    1416 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:06:39.858338 kubelet[1416]: W1213 14:06:39.858326    1416 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 14:06:39.859138 kubelet[1416]: I1213 14:06:39.859122    1416 server.go:1256] "Started kubelet"
Dec 13 14:06:39.859393 kubelet[1416]: I1213 14:06:39.859377    1416 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:06:39.860399 kubelet[1416]: I1213 14:06:39.860268    1416 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 14:06:39.862437 kernel: SELinux:  Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 14:06:39.863754 kubelet[1416]: I1213 14:06:39.863726    1416 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:06:39.863922 kubelet[1416]: I1213 14:06:39.863900    1416 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:06:39.865081 kubelet[1416]: I1213 14:06:39.865056    1416 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:06:39.865528 kubelet[1416]: I1213 14:06:39.865496    1416 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:06:39.867700 kubelet[1416]: I1213 14:06:39.867666    1416 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 14:06:39.867767 kubelet[1416]: I1213 14:06:39.867746    1416 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 14:06:39.867863 kubelet[1416]: E1213 14:06:39.867836    1416 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.69\" not found"
Dec 13 14:06:39.868378 kubelet[1416]: E1213 14:06:39.868362    1416 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:06:39.868621 kubelet[1416]: I1213 14:06:39.868603    1416 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:06:39.868792 kubelet[1416]: I1213 14:06:39.868772    1416 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:06:39.871267 kubelet[1416]: W1213 14:06:39.871243    1416 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.0.0.69" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 14:06:39.871362 kubelet[1416]: E1213 14:06:39.871351    1416 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.69" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 14:06:39.871621 kubelet[1416]: W1213 14:06:39.871601    1416 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Dec 13 14:06:39.871719 kubelet[1416]: E1213 14:06:39.871705    1416 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Dec 13 14:06:39.871875 kubelet[1416]: W1213 14:06:39.871852    1416 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 14:06:39.871952 kubelet[1416]: E1213 14:06:39.871941    1416 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 14:06:39.872000 kubelet[1416]: I1213 14:06:39.871895    1416 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:06:39.872195 kubelet[1416]: E1213 14:06:39.872157    1416 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.69\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Dec 13 14:06:39.873696 kubelet[1416]: E1213 14:06:39.873670    1416 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.69.1810c1abbb0863b7  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.69,UID:10.0.0.69,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.69,},FirstTimestamp:2024-12-13 14:06:39.859098551 +0000 UTC m=+0.932291679,LastTimestamp:2024-12-13 14:06:39.859098551 +0000 UTC m=+0.932291679,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.69,}"
Dec 13 14:06:39.880901 kubelet[1416]: I1213 14:06:39.880886    1416 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:06:39.881225 kubelet[1416]: I1213 14:06:39.881211    1416 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:06:39.881351 kubelet[1416]: I1213 14:06:39.881342    1416 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:06:39.961784 kubelet[1416]: I1213 14:06:39.961753    1416 policy_none.go:49] "None policy: Start"
Dec 13 14:06:39.962746 kubelet[1416]: I1213 14:06:39.962726    1416 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:06:39.962840 kubelet[1416]: I1213 14:06:39.962828    1416 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:06:39.967041 systemd[1]: Created slice kubepods.slice.
Dec 13 14:06:39.968777 kubelet[1416]: I1213 14:06:39.968757    1416 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.69"
Dec 13 14:06:39.971811 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 14:06:39.974829 kubelet[1416]: I1213 14:06:39.974807    1416 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.69"
Dec 13 14:06:39.977166 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 14:06:39.983956 kubelet[1416]: I1213 14:06:39.983924    1416 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:06:39.984161 kubelet[1416]: I1213 14:06:39.984138    1416 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:06:39.985307 kubelet[1416]: E1213 14:06:39.985287    1416 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.69\" not found"
Dec 13 14:06:39.986901 kubelet[1416]: E1213 14:06:39.986875    1416 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.69\" not found"
Dec 13 14:06:40.036034 kubelet[1416]: I1213 14:06:40.035994    1416 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:06:40.036913 kubelet[1416]: I1213 14:06:40.036889    1416 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Dec 13 14:06:40.036954 kubelet[1416]: I1213 14:06:40.036922 1416 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:06:40.036954 kubelet[1416]: I1213 14:06:40.036939 1416 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:06:40.037015 kubelet[1416]: E1213 14:06:40.036994 1416 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 14:06:40.087463 kubelet[1416]: E1213 14:06:40.087374 1416 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.69\" not found" Dec 13 14:06:40.189040 kubelet[1416]: E1213 14:06:40.188997 1416 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.69\" not found" Dec 13 14:06:40.289570 kubelet[1416]: E1213 14:06:40.289533 1416 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.69\" not found" Dec 13 14:06:40.390244 kubelet[1416]: E1213 14:06:40.390136 1416 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.69\" not found" Dec 13 14:06:40.445967 sudo[1307]: pam_unix(sudo:session): session closed for user root Dec 13 14:06:40.447684 sshd[1304]: pam_unix(sshd:session): session closed for user core Dec 13 14:06:40.450216 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:06:40.450767 systemd-logind[1200]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:06:40.450886 systemd[1]: sshd@4-10.0.0.69:22-10.0.0.1:60556.service: Deactivated successfully. Dec 13 14:06:40.451799 systemd-logind[1200]: Removed session 5. 
Dec 13 14:06:40.490661 kubelet[1416]: E1213 14:06:40.490626 1416 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.69\" not found"
Dec 13 14:06:40.591081 kubelet[1416]: E1213 14:06:40.591048 1416 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.69\" not found"
Dec 13 14:06:40.691639 kubelet[1416]: E1213 14:06:40.691560 1416 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.69\" not found"
Dec 13 14:06:40.792046 kubelet[1416]: E1213 14:06:40.792004 1416 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.69\" not found"
Dec 13 14:06:40.815193 kubelet[1416]: I1213 14:06:40.815160 1416 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 14:06:40.815388 kubelet[1416]: W1213 14:06:40.815362 1416 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 14:06:40.857444 kubelet[1416]: E1213 14:06:40.857405 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:06:40.893074 kubelet[1416]: E1213 14:06:40.893030 1416 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.69\" not found"
Dec 13 14:06:40.994557 kubelet[1416]: I1213 14:06:40.994476 1416 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Dec 13 14:06:40.994923 env[1209]: time="2024-12-13T14:06:40.994870294Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 14:06:40.995141 kubelet[1416]: I1213 14:06:40.995082 1416 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Dec 13 14:06:41.858221 kubelet[1416]: E1213 14:06:41.858176 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:06:41.858221 kubelet[1416]: I1213 14:06:41.858184 1416 apiserver.go:52] "Watching apiserver"
Dec 13 14:06:41.861201 kubelet[1416]: I1213 14:06:41.861160 1416 topology_manager.go:215] "Topology Admit Handler" podUID="3cb2e938-3c39-4351-af74-19f67ce0d005" podNamespace="kube-system" podName="cilium-qwm4f"
Dec 13 14:06:41.861288 kubelet[1416]: I1213 14:06:41.861275 1416 topology_manager.go:215] "Topology Admit Handler" podUID="c5313f2a-465f-4395-ab9f-d6ebc142af91" podNamespace="kube-system" podName="kube-proxy-mc2kq"
Dec 13 14:06:41.865956 systemd[1]: Created slice kubepods-burstable-pod3cb2e938_3c39_4351_af74_19f67ce0d005.slice.
Dec 13 14:06:41.868943 kubelet[1416]: I1213 14:06:41.868913 1416 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 14:06:41.877621 kubelet[1416]: I1213 14:06:41.877587 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-etc-cni-netd\") pod \"cilium-qwm4f\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " pod="kube-system/cilium-qwm4f"
Dec 13 14:06:41.877621 kubelet[1416]: I1213 14:06:41.877626 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3cb2e938-3c39-4351-af74-19f67ce0d005-hubble-tls\") pod \"cilium-qwm4f\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " pod="kube-system/cilium-qwm4f"
Dec 13 14:06:41.877761 kubelet[1416]: I1213 14:06:41.877648 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c5313f2a-465f-4395-ab9f-d6ebc142af91-kube-proxy\") pod \"kube-proxy-mc2kq\" (UID: \"c5313f2a-465f-4395-ab9f-d6ebc142af91\") " pod="kube-system/kube-proxy-mc2kq"
Dec 13 14:06:41.877761 kubelet[1416]: I1213 14:06:41.877668 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-hostproc\") pod \"cilium-qwm4f\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " pod="kube-system/cilium-qwm4f"
Dec 13 14:06:41.877761 kubelet[1416]: I1213 14:06:41.877687 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-cilium-cgroup\") pod \"cilium-qwm4f\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " pod="kube-system/cilium-qwm4f"
Dec 13 14:06:41.877761 kubelet[1416]: I1213 14:06:41.877707 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-558mb\" (UniqueName: \"kubernetes.io/projected/3cb2e938-3c39-4351-af74-19f67ce0d005-kube-api-access-558mb\") pod \"cilium-qwm4f\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " pod="kube-system/cilium-qwm4f"
Dec 13 14:06:41.877761 kubelet[1416]: I1213 14:06:41.877729 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5313f2a-465f-4395-ab9f-d6ebc142af91-xtables-lock\") pod \"kube-proxy-mc2kq\" (UID: \"c5313f2a-465f-4395-ab9f-d6ebc142af91\") " pod="kube-system/kube-proxy-mc2kq"
Dec 13 14:06:41.877761 kubelet[1416]: I1213 14:06:41.877758 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-cilium-run\") pod \"cilium-qwm4f\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " pod="kube-system/cilium-qwm4f"
Dec 13 14:06:41.877911 kubelet[1416]: I1213 14:06:41.877780 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-bpf-maps\") pod \"cilium-qwm4f\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " pod="kube-system/cilium-qwm4f"
Dec 13 14:06:41.877911 kubelet[1416]: I1213 14:06:41.877798 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-xtables-lock\") pod \"cilium-qwm4f\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " pod="kube-system/cilium-qwm4f"
Dec 13 14:06:41.877911 kubelet[1416]: I1213 14:06:41.877817 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3cb2e938-3c39-4351-af74-19f67ce0d005-clustermesh-secrets\") pod \"cilium-qwm4f\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " pod="kube-system/cilium-qwm4f"
Dec 13 14:06:41.877911 kubelet[1416]: I1213 14:06:41.877836 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmk7k\" (UniqueName: \"kubernetes.io/projected/c5313f2a-465f-4395-ab9f-d6ebc142af91-kube-api-access-dmk7k\") pod \"kube-proxy-mc2kq\" (UID: \"c5313f2a-465f-4395-ab9f-d6ebc142af91\") " pod="kube-system/kube-proxy-mc2kq"
Dec 13 14:06:41.877911 kubelet[1416]: I1213 14:06:41.877853 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-cni-path\") pod \"cilium-qwm4f\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " pod="kube-system/cilium-qwm4f"
Dec 13 14:06:41.877911 kubelet[1416]: I1213 14:06:41.877874 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-lib-modules\") pod \"cilium-qwm4f\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " pod="kube-system/cilium-qwm4f"
Dec 13 14:06:41.878029 kubelet[1416]: I1213 14:06:41.877892 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3cb2e938-3c39-4351-af74-19f67ce0d005-cilium-config-path\") pod \"cilium-qwm4f\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " pod="kube-system/cilium-qwm4f"
Dec 13 14:06:41.878029 kubelet[1416]: I1213 14:06:41.877910 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-host-proc-sys-net\") pod \"cilium-qwm4f\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " pod="kube-system/cilium-qwm4f"
Dec 13 14:06:41.878029 kubelet[1416]: I1213 14:06:41.877929 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-host-proc-sys-kernel\") pod \"cilium-qwm4f\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " pod="kube-system/cilium-qwm4f"
Dec 13 14:06:41.878029 kubelet[1416]: I1213 14:06:41.877951 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5313f2a-465f-4395-ab9f-d6ebc142af91-lib-modules\") pod \"kube-proxy-mc2kq\" (UID: \"c5313f2a-465f-4395-ab9f-d6ebc142af91\") " pod="kube-system/kube-proxy-mc2kq"
Dec 13 14:06:41.882960 systemd[1]: Created slice kubepods-besteffort-podc5313f2a_465f_4395_ab9f_d6ebc142af91.slice.
Dec 13 14:06:42.183068 kubelet[1416]: E1213 14:06:42.182958 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:06:42.184263 env[1209]: time="2024-12-13T14:06:42.184223947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qwm4f,Uid:3cb2e938-3c39-4351-af74-19f67ce0d005,Namespace:kube-system,Attempt:0,}"
Dec 13 14:06:42.192171 kubelet[1416]: E1213 14:06:42.192138 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:06:42.192599 env[1209]: time="2024-12-13T14:06:42.192559451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mc2kq,Uid:c5313f2a-465f-4395-ab9f-d6ebc142af91,Namespace:kube-system,Attempt:0,}"
Dec 13 14:06:42.807697 env[1209]: time="2024-12-13T14:06:42.807644957Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:06:42.810772 env[1209]: time="2024-12-13T14:06:42.810709250Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:06:42.813370 env[1209]: time="2024-12-13T14:06:42.813338545Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:06:42.814727 env[1209]: time="2024-12-13T14:06:42.814701456Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:06:42.817536 env[1209]: time="2024-12-13T14:06:42.817499908Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:06:42.819295 env[1209]: time="2024-12-13T14:06:42.819260077Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:06:42.820616 env[1209]: time="2024-12-13T14:06:42.820588096Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:06:42.821695 env[1209]: time="2024-12-13T14:06:42.821668194Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:06:42.845332 env[1209]: time="2024-12-13T14:06:42.845174442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:06:42.845332 env[1209]: time="2024-12-13T14:06:42.845265192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:06:42.845332 env[1209]: time="2024-12-13T14:06:42.845291618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:06:42.845594 env[1209]: time="2024-12-13T14:06:42.845547094Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/794622db2d272ebe4d5475ebf39b892acffb6f0e0188844e3f41e11d20b24faa pid=1479 runtime=io.containerd.runc.v2
Dec 13 14:06:42.846677 env[1209]: time="2024-12-13T14:06:42.846611606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:06:42.846677 env[1209]: time="2024-12-13T14:06:42.846646893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:06:42.846677 env[1209]: time="2024-12-13T14:06:42.846657337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:06:42.846869 env[1209]: time="2024-12-13T14:06:42.846764860Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858 pid=1478 runtime=io.containerd.runc.v2
Dec 13 14:06:42.858807 kubelet[1416]: E1213 14:06:42.858771 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:06:42.864508 systemd[1]: Started cri-containerd-794622db2d272ebe4d5475ebf39b892acffb6f0e0188844e3f41e11d20b24faa.scope.
Dec 13 14:06:42.865571 systemd[1]: Started cri-containerd-d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858.scope.
Dec 13 14:06:42.903223 env[1209]: time="2024-12-13T14:06:42.903174879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qwm4f,Uid:3cb2e938-3c39-4351-af74-19f67ce0d005,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858\""
Dec 13 14:06:42.907251 kubelet[1416]: E1213 14:06:42.907015 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:06:42.907739 env[1209]: time="2024-12-13T14:06:42.907660592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mc2kq,Uid:c5313f2a-465f-4395-ab9f-d6ebc142af91,Namespace:kube-system,Attempt:0,} returns sandbox id \"794622db2d272ebe4d5475ebf39b892acffb6f0e0188844e3f41e11d20b24faa\""
Dec 13 14:06:42.908343 kubelet[1416]: E1213 14:06:42.908191 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:06:42.910058 env[1209]: time="2024-12-13T14:06:42.910012138Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 14:06:42.985094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2116487448.mount: Deactivated successfully.
Dec 13 14:06:43.859732 kubelet[1416]: E1213 14:06:43.859706 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:06:44.860346 kubelet[1416]: E1213 14:06:44.860321 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:06:45.861206 kubelet[1416]: E1213 14:06:45.861165 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:06:46.862051 kubelet[1416]: E1213 14:06:46.862012 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:06:47.862484 kubelet[1416]: E1213 14:06:47.862438 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:06:48.857867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4145986885.mount: Deactivated successfully.
Dec 13 14:06:48.863575 kubelet[1416]: E1213 14:06:48.863530 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:06:49.863822 kubelet[1416]: E1213 14:06:49.863785 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:06:50.864760 kubelet[1416]: E1213 14:06:50.864722 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:06:51.095693 env[1209]: time="2024-12-13T14:06:51.095635414Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:06:51.096818 env[1209]: time="2024-12-13T14:06:51.096792891Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:06:51.098764 env[1209]: time="2024-12-13T14:06:51.098729022Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:06:51.099418 env[1209]: time="2024-12-13T14:06:51.099386758Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Dec 13 14:06:51.100104 env[1209]: time="2024-12-13T14:06:51.100079019Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 14:06:51.101825 env[1209]: time="2024-12-13T14:06:51.101793845Z" level=info msg="CreateContainer within sandbox \"d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:06:51.112049 env[1209]: time="2024-12-13T14:06:51.112007490Z" level=info msg="CreateContainer within sandbox \"d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b255d548f07eae7c7ba371865d152844b5135a9a933457d12e92d9dd841deb1d\""
Dec 13 14:06:51.112866 env[1209]: time="2024-12-13T14:06:51.112822822Z" level=info msg="StartContainer for \"b255d548f07eae7c7ba371865d152844b5135a9a933457d12e92d9dd841deb1d\""
Dec 13 14:06:51.127776 systemd[1]: Started cri-containerd-b255d548f07eae7c7ba371865d152844b5135a9a933457d12e92d9dd841deb1d.scope.
Dec 13 14:06:51.165460 env[1209]: time="2024-12-13T14:06:51.164844920Z" level=info msg="StartContainer for \"b255d548f07eae7c7ba371865d152844b5135a9a933457d12e92d9dd841deb1d\" returns successfully"
Dec 13 14:06:51.189562 systemd[1]: cri-containerd-b255d548f07eae7c7ba371865d152844b5135a9a933457d12e92d9dd841deb1d.scope: Deactivated successfully.
Dec 13 14:06:51.335152 env[1209]: time="2024-12-13T14:06:51.335107830Z" level=info msg="shim disconnected" id=b255d548f07eae7c7ba371865d152844b5135a9a933457d12e92d9dd841deb1d
Dec 13 14:06:51.335378 env[1209]: time="2024-12-13T14:06:51.335358757Z" level=warning msg="cleaning up after shim disconnected" id=b255d548f07eae7c7ba371865d152844b5135a9a933457d12e92d9dd841deb1d namespace=k8s.io
Dec 13 14:06:51.335443 env[1209]: time="2024-12-13T14:06:51.335430558Z" level=info msg="cleaning up dead shim"
Dec 13 14:06:51.343729 env[1209]: time="2024-12-13T14:06:51.343696232Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:06:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1595 runtime=io.containerd.runc.v2\n"
Dec 13 14:06:51.865739 kubelet[1416]: E1213 14:06:51.865704 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:06:52.055298 kubelet[1416]: E1213 14:06:52.055268 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:06:52.058807 env[1209]: time="2024-12-13T14:06:52.058751572Z" level=info msg="CreateContainer within sandbox \"d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:06:52.073036 env[1209]: time="2024-12-13T14:06:52.072974704Z" level=info msg="CreateContainer within sandbox \"d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"20998a871bd670211f1a57ead808ef294930d4dae54e96c0330d44c2cad1ac28\""
Dec 13 14:06:52.073489 env[1209]: time="2024-12-13T14:06:52.073454351Z" level=info msg="StartContainer for \"20998a871bd670211f1a57ead808ef294930d4dae54e96c0330d44c2cad1ac28\""
Dec 13 14:06:52.087706 systemd[1]: Started cri-containerd-20998a871bd670211f1a57ead808ef294930d4dae54e96c0330d44c2cad1ac28.scope.
Dec 13 14:06:52.108863 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b255d548f07eae7c7ba371865d152844b5135a9a933457d12e92d9dd841deb1d-rootfs.mount: Deactivated successfully.
Dec 13 14:06:52.133113 env[1209]: time="2024-12-13T14:06:52.132697691Z" level=info msg="StartContainer for \"20998a871bd670211f1a57ead808ef294930d4dae54e96c0330d44c2cad1ac28\" returns successfully"
Dec 13 14:06:52.140064 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:06:52.140299 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:06:52.141026 systemd[1]: Stopping systemd-sysctl.service...
Dec 13 14:06:52.142484 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:06:52.144148 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 14:06:52.146897 systemd[1]: cri-containerd-20998a871bd670211f1a57ead808ef294930d4dae54e96c0330d44c2cad1ac28.scope: Deactivated successfully.
Dec 13 14:06:52.150692 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:06:52.164391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20998a871bd670211f1a57ead808ef294930d4dae54e96c0330d44c2cad1ac28-rootfs.mount: Deactivated successfully.
Dec 13 14:06:52.174941 env[1209]: time="2024-12-13T14:06:52.174899086Z" level=info msg="shim disconnected" id=20998a871bd670211f1a57ead808ef294930d4dae54e96c0330d44c2cad1ac28
Dec 13 14:06:52.175144 env[1209]: time="2024-12-13T14:06:52.175126705Z" level=warning msg="cleaning up after shim disconnected" id=20998a871bd670211f1a57ead808ef294930d4dae54e96c0330d44c2cad1ac28 namespace=k8s.io
Dec 13 14:06:52.175231 env[1209]: time="2024-12-13T14:06:52.175191517Z" level=info msg="cleaning up dead shim"
Dec 13 14:06:52.181623 env[1209]: time="2024-12-13T14:06:52.181594122Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:06:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1661 runtime=io.containerd.runc.v2\n"
Dec 13 14:06:52.866154 kubelet[1416]: E1213 14:06:52.866124 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:06:52.870681 env[1209]: time="2024-12-13T14:06:52.870643082Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:06:52.871893 env[1209]: time="2024-12-13T14:06:52.871867885Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:06:52.873803 env[1209]: time="2024-12-13T14:06:52.873775465Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:06:52.875335 env[1209]: time="2024-12-13T14:06:52.875306100Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:06:52.875657 env[1209]: time="2024-12-13T14:06:52.875631435Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\""
Dec 13 14:06:52.877491 env[1209]: time="2024-12-13T14:06:52.877463155Z" level=info msg="CreateContainer within sandbox \"794622db2d272ebe4d5475ebf39b892acffb6f0e0188844e3f41e11d20b24faa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 14:06:52.889669 env[1209]: time="2024-12-13T14:06:52.889626469Z" level=info msg="CreateContainer within sandbox \"794622db2d272ebe4d5475ebf39b892acffb6f0e0188844e3f41e11d20b24faa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ccd18418db26264313c9d2b60b81915e6bb2df8dd776f38fb81f23fcf9705f88\""
Dec 13 14:06:52.890101 env[1209]: time="2024-12-13T14:06:52.890053668Z" level=info msg="StartContainer for \"ccd18418db26264313c9d2b60b81915e6bb2df8dd776f38fb81f23fcf9705f88\""
Dec 13 14:06:52.903756 systemd[1]: Started cri-containerd-ccd18418db26264313c9d2b60b81915e6bb2df8dd776f38fb81f23fcf9705f88.scope.
Dec 13 14:06:52.942854 env[1209]: time="2024-12-13T14:06:52.942813678Z" level=info msg="StartContainer for \"ccd18418db26264313c9d2b60b81915e6bb2df8dd776f38fb81f23fcf9705f88\" returns successfully"
Dec 13 14:06:53.062639 kubelet[1416]: E1213 14:06:53.062608 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:06:53.063763 kubelet[1416]: E1213 14:06:53.063740 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:06:53.065436 env[1209]: time="2024-12-13T14:06:53.065397740Z" level=info msg="CreateContainer within sandbox \"d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:06:53.076193 env[1209]: time="2024-12-13T14:06:53.076152739Z" level=info msg="CreateContainer within sandbox \"d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1554dbeaa86a7114a98500655e02998e228e5bb7163cd371c98d6909e44a0963\""
Dec 13 14:06:53.076596 env[1209]: time="2024-12-13T14:06:53.076571475Z" level=info msg="StartContainer for \"1554dbeaa86a7114a98500655e02998e228e5bb7163cd371c98d6909e44a0963\""
Dec 13 14:06:53.084025 kubelet[1416]: I1213 14:06:53.083996 1416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mc2kq" podStartSLOduration=4.116597127 podStartE2EDuration="14.083944825s" podCreationTimestamp="2024-12-13 14:06:39 +0000 UTC" firstStartedPulling="2024-12-13 14:06:42.908935165 +0000 UTC m=+3.982128294" lastFinishedPulling="2024-12-13 14:06:52.876282903 +0000 UTC m=+13.949475992" observedRunningTime="2024-12-13 14:06:53.083565149 +0000 UTC m=+14.156758278" watchObservedRunningTime="2024-12-13 14:06:53.083944825 +0000 UTC m=+14.157137954"
Dec 13 14:06:53.091570 systemd[1]: Started cri-containerd-1554dbeaa86a7114a98500655e02998e228e5bb7163cd371c98d6909e44a0963.scope.
Dec 13 14:06:53.109505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4166964217.mount: Deactivated successfully.
Dec 13 14:06:53.136248 env[1209]: time="2024-12-13T14:06:53.134792295Z" level=info msg="StartContainer for \"1554dbeaa86a7114a98500655e02998e228e5bb7163cd371c98d6909e44a0963\" returns successfully"
Dec 13 14:06:53.146136 systemd[1]: cri-containerd-1554dbeaa86a7114a98500655e02998e228e5bb7163cd371c98d6909e44a0963.scope: Deactivated successfully.
Dec 13 14:06:53.162854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1554dbeaa86a7114a98500655e02998e228e5bb7163cd371c98d6909e44a0963-rootfs.mount: Deactivated successfully.
Dec 13 14:06:53.271705 env[1209]: time="2024-12-13T14:06:53.271644924Z" level=info msg="shim disconnected" id=1554dbeaa86a7114a98500655e02998e228e5bb7163cd371c98d6909e44a0963
Dec 13 14:06:53.271705 env[1209]: time="2024-12-13T14:06:53.271695835Z" level=warning msg="cleaning up after shim disconnected" id=1554dbeaa86a7114a98500655e02998e228e5bb7163cd371c98d6909e44a0963 namespace=k8s.io
Dec 13 14:06:53.271705 env[1209]: time="2024-12-13T14:06:53.271706687Z" level=info msg="cleaning up dead shim"
Dec 13 14:06:53.278149 env[1209]: time="2024-12-13T14:06:53.278103198Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:06:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1828 runtime=io.containerd.runc.v2\n"
Dec 13 14:06:53.866545 kubelet[1416]: E1213 14:06:53.866503 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:06:54.066950 kubelet[1416]: E1213 14:06:54.066921 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:06:54.067392 kubelet[1416]: E1213 14:06:54.067228 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:06:54.069025 env[1209]: time="2024-12-13T14:06:54.068978352Z" level=info msg="CreateContainer within sandbox \"d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:06:54.081360 env[1209]: time="2024-12-13T14:06:54.081316296Z" level=info msg="CreateContainer within sandbox \"d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"52b6ef9e4a40dee4377df10e241073e633491b5327141e71f73a23d410f2bd96\""
Dec 13 14:06:54.081929 env[1209]: time="2024-12-13T14:06:54.081858292Z" level=info msg="StartContainer for \"52b6ef9e4a40dee4377df10e241073e633491b5327141e71f73a23d410f2bd96\""
Dec 13 14:06:54.094674 systemd[1]: Started cri-containerd-52b6ef9e4a40dee4377df10e241073e633491b5327141e71f73a23d410f2bd96.scope.
Dec 13 14:06:54.126034 systemd[1]: cri-containerd-52b6ef9e4a40dee4377df10e241073e633491b5327141e71f73a23d410f2bd96.scope: Deactivated successfully.
Dec 13 14:06:54.128149 env[1209]: time="2024-12-13T14:06:54.128104568Z" level=info msg="StartContainer for \"52b6ef9e4a40dee4377df10e241073e633491b5327141e71f73a23d410f2bd96\" returns successfully"
Dec 13 14:06:54.141900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52b6ef9e4a40dee4377df10e241073e633491b5327141e71f73a23d410f2bd96-rootfs.mount: Deactivated successfully.
Dec 13 14:06:54.145390 env[1209]: time="2024-12-13T14:06:54.145351285Z" level=info msg="shim disconnected" id=52b6ef9e4a40dee4377df10e241073e633491b5327141e71f73a23d410f2bd96
Dec 13 14:06:54.145690 env[1209]: time="2024-12-13T14:06:54.145667941Z" level=warning msg="cleaning up after shim disconnected" id=52b6ef9e4a40dee4377df10e241073e633491b5327141e71f73a23d410f2bd96 namespace=k8s.io
Dec 13 14:06:54.145756 env[1209]: time="2024-12-13T14:06:54.145743373Z" level=info msg="cleaning up dead shim"
Dec 13 14:06:54.151693 env[1209]: time="2024-12-13T14:06:54.151658869Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:06:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1933 runtime=io.containerd.runc.v2\n"
Dec 13 14:06:54.866976 kubelet[1416]: E1213 14:06:54.866933 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:06:55.070120 kubelet[1416]: E1213 14:06:55.070095 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:06:55.072482 env[1209]: time="2024-12-13T14:06:55.072382139Z" level=info msg="CreateContainer within sandbox \"d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:06:55.083588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1514617445.mount: Deactivated successfully.
Dec 13 14:06:55.086775 env[1209]: time="2024-12-13T14:06:55.086731128Z" level=info msg="CreateContainer within sandbox \"d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a\""
Dec 13 14:06:55.087182 env[1209]: time="2024-12-13T14:06:55.087162090Z" level=info msg="StartContainer for \"2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a\""
Dec 13 14:06:55.102670 systemd[1]: Started cri-containerd-2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a.scope.
Dec 13 14:06:55.136386 env[1209]: time="2024-12-13T14:06:55.136265324Z" level=info msg="StartContainer for \"2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a\" returns successfully"
Dec 13 14:06:55.299833 kubelet[1416]: I1213 14:06:55.299218 1416 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 14:06:55.399239 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Dec 13 14:06:55.670218 kernel: Initializing XFRM netlink socket
Dec 13 14:06:55.670313 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Dec 13 14:06:55.867156 kubelet[1416]: E1213 14:06:55.867094 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:06:56.074424 kubelet[1416]: E1213 14:06:56.074118 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:06:56.086680 kubelet[1416]: I1213 14:06:56.086648 1416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-qwm4f" podStartSLOduration=8.895535165 podStartE2EDuration="17.086608812s" podCreationTimestamp="2024-12-13 14:06:39 +0000 UTC" firstStartedPulling="2024-12-13 14:06:42.908786539 +0000 UTC m=+3.981979668" lastFinishedPulling="2024-12-13 14:06:51.099860186 +0000 UTC m=+12.173053315" observedRunningTime="2024-12-13 14:06:56.086449443 +0000 UTC m=+17.159642572" watchObservedRunningTime="2024-12-13 14:06:56.086608812 +0000 UTC m=+17.159801941"
Dec 13 14:06:56.224826 kubelet[1416]: I1213 14:06:56.224783 1416 topology_manager.go:215] "Topology Admit Handler" podUID="d93e0ade-be78-4658-8a4a-9620bb50a8c4" podNamespace="default" podName="nginx-deployment-6d5f899847-xj6xx"
Dec 13 14:06:56.229429 systemd[1]: Created slice kubepods-besteffort-podd93e0ade_be78_4658_8a4a_9620bb50a8c4.slice.
Dec 13 14:06:56.255072 kubelet[1416]: I1213 14:06:56.255040 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl5zj\" (UniqueName: \"kubernetes.io/projected/d93e0ade-be78-4658-8a4a-9620bb50a8c4-kube-api-access-jl5zj\") pod \"nginx-deployment-6d5f899847-xj6xx\" (UID: \"d93e0ade-be78-4658-8a4a-9620bb50a8c4\") " pod="default/nginx-deployment-6d5f899847-xj6xx"
Dec 13 14:06:56.532598 env[1209]: time="2024-12-13T14:06:56.532547378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-xj6xx,Uid:d93e0ade-be78-4658-8a4a-9620bb50a8c4,Namespace:default,Attempt:0,}"
Dec 13 14:06:56.868097 kubelet[1416]: E1213 14:06:56.868052 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:06:57.075625 kubelet[1416]: E1213 14:06:57.075603 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:06:57.271630 systemd-networkd[1040]: cilium_host: Link UP
Dec 13 14:06:57.273973 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Dec 13 14:06:57.274033 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 14:06:57.271728 systemd-networkd[1040]: cilium_net: Link UP
Dec 13 14:06:57.272379 systemd-networkd[1040]: cilium_net: Gained carrier
Dec 13 14:06:57.273224 systemd-networkd[1040]: cilium_host: Gained carrier
Dec 13 14:06:57.342752 systemd-networkd[1040]: cilium_net: Gained IPv6LL
Dec 13 14:06:57.346798 systemd-networkd[1040]: cilium_vxlan: Link UP
Dec 13 14:06:57.346803 systemd-networkd[1040]: cilium_vxlan: Gained carrier
Dec 13 14:06:57.651224 kernel: NET: Registered PF_ALG protocol family
Dec 13 14:06:57.862376 systemd-networkd[1040]: cilium_host: Gained IPv6LL
Dec 13 14:06:57.868877 kubelet[1416]: E1213 14:06:57.868846 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:06:58.076917 kubelet[1416]: E1213 14:06:58.076673 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:06:58.196823 systemd-networkd[1040]: lxc_health: Link UP
Dec 13 14:06:58.216256 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:06:58.213126 systemd-networkd[1040]: lxc_health: Gained carrier
Dec 13 14:06:58.573958 systemd-networkd[1040]: lxcc5ac8672f89f: Link UP
Dec 13 14:06:58.581231 kernel: eth0: renamed from tmpc673f
Dec 13 14:06:58.588609 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:06:58.588713 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc5ac8672f89f: link becomes ready
Dec 13 14:06:58.588687 systemd-networkd[1040]: lxcc5ac8672f89f: Gained carrier
Dec 13 14:06:58.869923 kubelet[1416]: E1213 14:06:58.869878 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:06:59.014320 systemd-networkd[1040]: cilium_vxlan: Gained IPv6LL
Dec 13 14:06:59.846355 systemd-networkd[1040]: lxc_health: Gained IPv6LL
Dec 13 14:06:59.857717 kubelet[1416]: E1213 14:06:59.857683 1416 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:06:59.871011 kubelet[1416]: E1213 14:06:59.870981 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:00.184299 kubelet[1416]: E1213 14:07:00.183970 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:07:00.422361 systemd-networkd[1040]: lxcc5ac8672f89f: Gained IPv6LL
Dec 13 14:07:00.871413 kubelet[1416]: E1213 14:07:00.871380 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:01.082127 kubelet[1416]: E1213 14:07:01.082081 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:07:01.872857 kubelet[1416]: E1213 14:07:01.872802 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:01.994116 env[1209]: time="2024-12-13T14:07:01.994044834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:07:01.994423 env[1209]: time="2024-12-13T14:07:01.994122846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:07:01.994423 env[1209]: time="2024-12-13T14:07:01.994149982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:07:01.994423 env[1209]: time="2024-12-13T14:07:01.994304887Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c673f96764d0907f484a2af40f6c08d99833686e06b09eb15a4562ed889e0788 pid=2480 runtime=io.containerd.runc.v2
Dec 13 14:07:02.005828 systemd[1]: Started cri-containerd-c673f96764d0907f484a2af40f6c08d99833686e06b09eb15a4562ed889e0788.scope.
Dec 13 14:07:02.061871 systemd-resolved[1153]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 14:07:02.077688 env[1209]: time="2024-12-13T14:07:02.077641159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-xj6xx,Uid:d93e0ade-be78-4658-8a4a-9620bb50a8c4,Namespace:default,Attempt:0,} returns sandbox id \"c673f96764d0907f484a2af40f6c08d99833686e06b09eb15a4562ed889e0788\""
Dec 13 14:07:02.079029 env[1209]: time="2024-12-13T14:07:02.078995166Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 14:07:02.084189 kubelet[1416]: E1213 14:07:02.084154 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:07:02.873234 kubelet[1416]: E1213 14:07:02.873180 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:03.873574 kubelet[1416]: E1213 14:07:03.873519 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:04.325772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount416782185.mount: Deactivated successfully.
Dec 13 14:07:04.873954 kubelet[1416]: E1213 14:07:04.873904 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:05.500659 env[1209]: time="2024-12-13T14:07:05.500611057Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:07:05.501916 env[1209]: time="2024-12-13T14:07:05.501891490Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:07:05.503321 env[1209]: time="2024-12-13T14:07:05.503299057Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:07:05.504806 env[1209]: time="2024-12-13T14:07:05.504781313Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:07:05.506235 env[1209]: time="2024-12-13T14:07:05.506209563Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\""
Dec 13 14:07:05.507652 env[1209]: time="2024-12-13T14:07:05.507606569Z" level=info msg="CreateContainer within sandbox \"c673f96764d0907f484a2af40f6c08d99833686e06b09eb15a4562ed889e0788\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Dec 13 14:07:05.516818 env[1209]: time="2024-12-13T14:07:05.516778580Z" level=info msg="CreateContainer within sandbox \"c673f96764d0907f484a2af40f6c08d99833686e06b09eb15a4562ed889e0788\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"1250287478aa1c58c7e86a39db3134b4ab846bcdc3badd25376625f43327785f\""
Dec 13 14:07:05.517220 env[1209]: time="2024-12-13T14:07:05.517181748Z" level=info msg="StartContainer for \"1250287478aa1c58c7e86a39db3134b4ab846bcdc3badd25376625f43327785f\""
Dec 13 14:07:05.533781 systemd[1]: Started cri-containerd-1250287478aa1c58c7e86a39db3134b4ab846bcdc3badd25376625f43327785f.scope.
Dec 13 14:07:05.575836 env[1209]: time="2024-12-13T14:07:05.575785715Z" level=info msg="StartContainer for \"1250287478aa1c58c7e86a39db3134b4ab846bcdc3badd25376625f43327785f\" returns successfully"
Dec 13 14:07:05.874616 kubelet[1416]: E1213 14:07:05.874576 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:06.097494 kubelet[1416]: I1213 14:07:06.097451 1416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-xj6xx" podStartSLOduration=6.669627456 podStartE2EDuration="10.097412719s" podCreationTimestamp="2024-12-13 14:06:56 +0000 UTC" firstStartedPulling="2024-12-13 14:07:02.078720695 +0000 UTC m=+23.151913784" lastFinishedPulling="2024-12-13 14:07:05.506505918 +0000 UTC m=+26.579699047" observedRunningTime="2024-12-13 14:07:06.097075561 +0000 UTC m=+27.170268690" watchObservedRunningTime="2024-12-13 14:07:06.097412719 +0000 UTC m=+27.170605848"
Dec 13 14:07:06.875484 kubelet[1416]: E1213 14:07:06.875440 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:07.876186 kubelet[1416]: E1213 14:07:07.876148 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:08.184816 kubelet[1416]: I1213 14:07:08.184526 1416 topology_manager.go:215] "Topology Admit Handler" podUID="d83c8ed1-cec4-47b7-8f28-02027ccc1b1b" podNamespace="default" podName="nfs-server-provisioner-0"
Dec 13 14:07:08.189112 systemd[1]: Created slice kubepods-besteffort-podd83c8ed1_cec4_47b7_8f28_02027ccc1b1b.slice.
Dec 13 14:07:08.212004 kubelet[1416]: I1213 14:07:08.211961 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d83c8ed1-cec4-47b7-8f28-02027ccc1b1b-data\") pod \"nfs-server-provisioner-0\" (UID: \"d83c8ed1-cec4-47b7-8f28-02027ccc1b1b\") " pod="default/nfs-server-provisioner-0"
Dec 13 14:07:08.212004 kubelet[1416]: I1213 14:07:08.212007 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75wch\" (UniqueName: \"kubernetes.io/projected/d83c8ed1-cec4-47b7-8f28-02027ccc1b1b-kube-api-access-75wch\") pod \"nfs-server-provisioner-0\" (UID: \"d83c8ed1-cec4-47b7-8f28-02027ccc1b1b\") " pod="default/nfs-server-provisioner-0"
Dec 13 14:07:08.492419 env[1209]: time="2024-12-13T14:07:08.492011718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d83c8ed1-cec4-47b7-8f28-02027ccc1b1b,Namespace:default,Attempt:0,}"
Dec 13 14:07:08.517001 systemd-networkd[1040]: lxcacff7a9fbcbf: Link UP
Dec 13 14:07:08.525221 kernel: eth0: renamed from tmp7b4be
Dec 13 14:07:08.537247 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:07:08.537342 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcacff7a9fbcbf: link becomes ready
Dec 13 14:07:08.537648 systemd-networkd[1040]: lxcacff7a9fbcbf: Gained carrier
Dec 13 14:07:08.706813 env[1209]: time="2024-12-13T14:07:08.706737173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:07:08.706813 env[1209]: time="2024-12-13T14:07:08.706781458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:07:08.706813 env[1209]: time="2024-12-13T14:07:08.706795499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:07:08.707002 env[1209]: time="2024-12-13T14:07:08.706910911Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b4be68385babc3c431fa25779c8146263770ed699e0aaafdafc22f7cc7afce1 pid=2611 runtime=io.containerd.runc.v2
Dec 13 14:07:08.718632 systemd[1]: Started cri-containerd-7b4be68385babc3c431fa25779c8146263770ed699e0aaafdafc22f7cc7afce1.scope.
Dec 13 14:07:08.742073 systemd-resolved[1153]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 14:07:08.759113 env[1209]: time="2024-12-13T14:07:08.758430358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d83c8ed1-cec4-47b7-8f28-02027ccc1b1b,Namespace:default,Attempt:0,} returns sandbox id \"7b4be68385babc3c431fa25779c8146263770ed699e0aaafdafc22f7cc7afce1\""
Dec 13 14:07:08.760301 env[1209]: time="2024-12-13T14:07:08.760215577Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Dec 13 14:07:08.876831 kubelet[1416]: E1213 14:07:08.876789 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:09.323814 systemd[1]: run-containerd-runc-k8s.io-7b4be68385babc3c431fa25779c8146263770ed699e0aaafdafc22f7cc7afce1-runc.FmF0Vq.mount: Deactivated successfully.
Dec 13 14:07:09.877582 kubelet[1416]: E1213 14:07:09.877550 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:10.534462 systemd-networkd[1040]: lxcacff7a9fbcbf: Gained IPv6LL
Dec 13 14:07:10.838914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2689207443.mount: Deactivated successfully.
Dec 13 14:07:10.878735 kubelet[1416]: E1213 14:07:10.878682 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:11.879464 kubelet[1416]: E1213 14:07:11.879420 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:12.575708 env[1209]: time="2024-12-13T14:07:12.575664937Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:07:12.578558 env[1209]: time="2024-12-13T14:07:12.578526728Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:07:12.580445 env[1209]: time="2024-12-13T14:07:12.580406719Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:07:12.582388 env[1209]: time="2024-12-13T14:07:12.582361716Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:07:12.583166 env[1209]: time="2024-12-13T14:07:12.583139459Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Dec 13 14:07:12.585548 env[1209]: time="2024-12-13T14:07:12.585518850Z" level=info msg="CreateContainer within sandbox \"7b4be68385babc3c431fa25779c8146263770ed699e0aaafdafc22f7cc7afce1\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 13 14:07:12.595242 env[1209]: time="2024-12-13T14:07:12.595208990Z" level=info msg="CreateContainer within sandbox \"7b4be68385babc3c431fa25779c8146263770ed699e0aaafdafc22f7cc7afce1\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"9add2c761fbc77a364096a08cab72365213fbc43f9290e6ee59e184e5063a28b\""
Dec 13 14:07:12.595660 env[1209]: time="2024-12-13T14:07:12.595610582Z" level=info msg="StartContainer for \"9add2c761fbc77a364096a08cab72365213fbc43f9290e6ee59e184e5063a28b\""
Dec 13 14:07:12.615462 systemd[1]: Started cri-containerd-9add2c761fbc77a364096a08cab72365213fbc43f9290e6ee59e184e5063a28b.scope.
Dec 13 14:07:12.658345 env[1209]: time="2024-12-13T14:07:12.658281186Z" level=info msg="StartContainer for \"9add2c761fbc77a364096a08cab72365213fbc43f9290e6ee59e184e5063a28b\" returns successfully"
Dec 13 14:07:12.880582 kubelet[1416]: E1213 14:07:12.880537 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:13.116048 kubelet[1416]: I1213 14:07:13.115821 1416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.292262829 podStartE2EDuration="5.115786364s" podCreationTimestamp="2024-12-13 14:07:08 +0000 UTC" firstStartedPulling="2024-12-13 14:07:08.760003195 +0000 UTC m=+29.833196324" lastFinishedPulling="2024-12-13 14:07:12.58352673 +0000 UTC m=+33.656719859" observedRunningTime="2024-12-13 14:07:13.115291406 +0000 UTC m=+34.188484535" watchObservedRunningTime="2024-12-13 14:07:13.115786364 +0000 UTC m=+34.188979493"
Dec 13 14:07:13.881090 kubelet[1416]: E1213 14:07:13.881060 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:14.882182 kubelet[1416]: E1213 14:07:14.882129 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:15.882787 kubelet[1416]: E1213 14:07:15.882748 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:16.883094 kubelet[1416]: E1213 14:07:16.883062 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:17.883367 kubelet[1416]: E1213 14:07:17.883335 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:18.005337 update_engine[1202]: I1213 14:07:18.005292 1202 update_attempter.cc:509] Updating boot flags...
Dec 13 14:07:18.884064 kubelet[1416]: E1213 14:07:18.884012 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:19.857033 kubelet[1416]: E1213 14:07:19.856995 1416 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:19.884303 kubelet[1416]: E1213 14:07:19.884276 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:20.884754 kubelet[1416]: E1213 14:07:20.884715 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:21.885057 kubelet[1416]: E1213 14:07:21.885000 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:22.798484 kubelet[1416]: I1213 14:07:22.798445 1416 topology_manager.go:215] "Topology Admit Handler" podUID="40092015-3ea3-420d-8a58-bb92ac8d4684" podNamespace="default" podName="test-pod-1"
Dec 13 14:07:22.803178 systemd[1]: Created slice kubepods-besteffort-pod40092015_3ea3_420d_8a58_bb92ac8d4684.slice.
Dec 13 14:07:22.884545 kubelet[1416]: I1213 14:07:22.884513 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c068f69e-6ad6-4467-8daa-b5dd6dd53770\" (UniqueName: \"kubernetes.io/nfs/40092015-3ea3-420d-8a58-bb92ac8d4684-pvc-c068f69e-6ad6-4467-8daa-b5dd6dd53770\") pod \"test-pod-1\" (UID: \"40092015-3ea3-420d-8a58-bb92ac8d4684\") " pod="default/test-pod-1"
Dec 13 14:07:22.884745 kubelet[1416]: I1213 14:07:22.884731 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vk2g\" (UniqueName: \"kubernetes.io/projected/40092015-3ea3-420d-8a58-bb92ac8d4684-kube-api-access-9vk2g\") pod \"test-pod-1\" (UID: \"40092015-3ea3-420d-8a58-bb92ac8d4684\") " pod="default/test-pod-1"
Dec 13 14:07:22.885503 kubelet[1416]: E1213 14:07:22.885485 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:23.005242 kernel: FS-Cache: Loaded
Dec 13 14:07:23.033661 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 14:07:23.033729 kernel: RPC: Registered udp transport module.
Dec 13 14:07:23.033761 kernel: RPC: Registered tcp transport module.
Dec 13 14:07:23.035246 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 14:07:23.078224 kernel: FS-Cache: Netfs 'nfs' registered for caching
Dec 13 14:07:23.205481 kernel: NFS: Registering the id_resolver key type
Dec 13 14:07:23.205599 kernel: Key type id_resolver registered
Dec 13 14:07:23.205622 kernel: Key type id_legacy registered
Dec 13 14:07:23.230482 nfsidmap[2746]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Dec 13 14:07:23.235492 nfsidmap[2749]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Dec 13 14:07:23.406014 env[1209]: time="2024-12-13T14:07:23.405968578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:40092015-3ea3-420d-8a58-bb92ac8d4684,Namespace:default,Attempt:0,}"
Dec 13 14:07:23.428134 systemd-networkd[1040]: lxc539d8072cc25: Link UP
Dec 13 14:07:23.438279 kernel: eth0: renamed from tmp7e7fd
Dec 13 14:07:23.448256 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:07:23.448345 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc539d8072cc25: link becomes ready
Dec 13 14:07:23.448468 systemd-networkd[1040]: lxc539d8072cc25: Gained carrier
Dec 13 14:07:23.604300 env[1209]: time="2024-12-13T14:07:23.604218395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:07:23.604300 env[1209]: time="2024-12-13T14:07:23.604265917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:07:23.604300 env[1209]: time="2024-12-13T14:07:23.604276878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:07:23.604807 env[1209]: time="2024-12-13T14:07:23.604761100Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e7fd5348725744a86e6452fee94200cd7444b4c2e9a02115b80677e93d0b23b pid=2785 runtime=io.containerd.runc.v2
Dec 13 14:07:23.614537 systemd[1]: Started cri-containerd-7e7fd5348725744a86e6452fee94200cd7444b4c2e9a02115b80677e93d0b23b.scope.
Dec 13 14:07:23.661640 systemd-resolved[1153]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 14:07:23.677184 env[1209]: time="2024-12-13T14:07:23.677140370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:40092015-3ea3-420d-8a58-bb92ac8d4684,Namespace:default,Attempt:0,} returns sandbox id \"7e7fd5348725744a86e6452fee94200cd7444b4c2e9a02115b80677e93d0b23b\""
Dec 13 14:07:23.678893 env[1209]: time="2024-12-13T14:07:23.678863170Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 14:07:23.886002 kubelet[1416]: E1213 14:07:23.885563 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:07:23.910544 env[1209]: time="2024-12-13T14:07:23.910498212Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:07:23.911694 env[1209]: time="2024-12-13T14:07:23.911655385Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:07:23.913340 env[1209]: time="2024-12-13T14:07:23.913314302Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:07:23.915273 env[1209]: time="2024-12-13T14:07:23.915242071Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:07:23.915892 env[1209]: time="2024-12-13T14:07:23.915851019Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\""
Dec 13 14:07:23.918087 env[1209]: time="2024-12-13T14:07:23.918050601Z" level=info msg="CreateContainer within sandbox \"7e7fd5348725744a86e6452fee94200cd7444b4c2e9a02115b80677e93d0b23b\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 14:07:23.928842 env[1209]: time="2024-12-13T14:07:23.928807979Z" level=info msg="CreateContainer within sandbox \"7e7fd5348725744a86e6452fee94200cd7444b4c2e9a02115b80677e93d0b23b\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"ccd7149c2ae7c04ce07fabf1a967a67521a23d5bc396ec5620f4d0a049ba7313\""
Dec 13 14:07:23.929419 env[1209]: time="2024-12-13T14:07:23.929394046Z" level=info msg="StartContainer for \"ccd7149c2ae7c04ce07fabf1a967a67521a23d5bc396ec5620f4d0a049ba7313\""
Dec 13 14:07:23.942540 systemd[1]: Started cri-containerd-ccd7149c2ae7c04ce07fabf1a967a67521a23d5bc396ec5620f4d0a049ba7313.scope.
Dec 13 14:07:23.984640 env[1209]: time="2024-12-13T14:07:23.984580721Z" level=info msg="StartContainer for \"ccd7149c2ae7c04ce07fabf1a967a67521a23d5bc396ec5620f4d0a049ba7313\" returns successfully" Dec 13 14:07:24.132595 kubelet[1416]: I1213 14:07:24.132251 1416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.893735004 podStartE2EDuration="16.132189682s" podCreationTimestamp="2024-12-13 14:07:08 +0000 UTC" firstStartedPulling="2024-12-13 14:07:23.678284783 +0000 UTC m=+44.751477912" lastFinishedPulling="2024-12-13 14:07:23.916739501 +0000 UTC m=+44.989932590" observedRunningTime="2024-12-13 14:07:24.132080678 +0000 UTC m=+45.205273807" watchObservedRunningTime="2024-12-13 14:07:24.132189682 +0000 UTC m=+45.205382771" Dec 13 14:07:24.886148 kubelet[1416]: E1213 14:07:24.886105 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:25.318333 systemd-networkd[1040]: lxc539d8072cc25: Gained IPv6LL Dec 13 14:07:25.887041 kubelet[1416]: E1213 14:07:25.886991 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:26.887987 kubelet[1416]: E1213 14:07:26.887943 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:27.888377 kubelet[1416]: E1213 14:07:27.888340 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:28.888758 kubelet[1416]: E1213 14:07:28.888713 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:29.889602 kubelet[1416]: E1213 14:07:29.889564 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:30.890665 kubelet[1416]: 
E1213 14:07:30.890625 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:31.603608 env[1209]: time="2024-12-13T14:07:31.603550249Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:07:31.608827 env[1209]: time="2024-12-13T14:07:31.608792742Z" level=info msg="StopContainer for \"2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a\" with timeout 2 (s)" Dec 13 14:07:31.609020 env[1209]: time="2024-12-13T14:07:31.608995789Z" level=info msg="Stop container \"2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a\" with signal terminated" Dec 13 14:07:31.613926 systemd-networkd[1040]: lxc_health: Link DOWN Dec 13 14:07:31.613932 systemd-networkd[1040]: lxc_health: Lost carrier Dec 13 14:07:31.647541 systemd[1]: cri-containerd-2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a.scope: Deactivated successfully. Dec 13 14:07:31.647852 systemd[1]: cri-containerd-2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a.scope: Consumed 6.280s CPU time. Dec 13 14:07:31.662434 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a-rootfs.mount: Deactivated successfully. 
Dec 13 14:07:31.707405 env[1209]: time="2024-12-13T14:07:31.707346949Z" level=info msg="shim disconnected" id=2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a Dec 13 14:07:31.707405 env[1209]: time="2024-12-13T14:07:31.707394191Z" level=warning msg="cleaning up after shim disconnected" id=2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a namespace=k8s.io Dec 13 14:07:31.707405 env[1209]: time="2024-12-13T14:07:31.707406511Z" level=info msg="cleaning up dead shim" Dec 13 14:07:31.714174 env[1209]: time="2024-12-13T14:07:31.714130613Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:07:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2918 runtime=io.containerd.runc.v2\n" Dec 13 14:07:31.716682 env[1209]: time="2024-12-13T14:07:31.716635535Z" level=info msg="StopContainer for \"2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a\" returns successfully" Dec 13 14:07:31.717266 env[1209]: time="2024-12-13T14:07:31.717226275Z" level=info msg="StopPodSandbox for \"d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858\"" Dec 13 14:07:31.717407 env[1209]: time="2024-12-13T14:07:31.717385240Z" level=info msg="Container to stop \"2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:07:31.717479 env[1209]: time="2024-12-13T14:07:31.717461482Z" level=info msg="Container to stop \"1554dbeaa86a7114a98500655e02998e228e5bb7163cd371c98d6909e44a0963\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:07:31.717539 env[1209]: time="2024-12-13T14:07:31.717523324Z" level=info msg="Container to stop \"52b6ef9e4a40dee4377df10e241073e633491b5327141e71f73a23d410f2bd96\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:07:31.717607 env[1209]: time="2024-12-13T14:07:31.717590407Z" level=info msg="Container to stop 
\"b255d548f07eae7c7ba371865d152844b5135a9a933457d12e92d9dd841deb1d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:07:31.717670 env[1209]: time="2024-12-13T14:07:31.717653609Z" level=info msg="Container to stop \"20998a871bd670211f1a57ead808ef294930d4dae54e96c0330d44c2cad1ac28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:07:31.720541 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858-shm.mount: Deactivated successfully. Dec 13 14:07:31.724552 systemd[1]: cri-containerd-d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858.scope: Deactivated successfully. Dec 13 14:07:31.744924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858-rootfs.mount: Deactivated successfully. Dec 13 14:07:31.748143 env[1209]: time="2024-12-13T14:07:31.748092292Z" level=info msg="shim disconnected" id=d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858 Dec 13 14:07:31.748297 env[1209]: time="2024-12-13T14:07:31.748143453Z" level=warning msg="cleaning up after shim disconnected" id=d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858 namespace=k8s.io Dec 13 14:07:31.748297 env[1209]: time="2024-12-13T14:07:31.748154974Z" level=info msg="cleaning up dead shim" Dec 13 14:07:31.754518 env[1209]: time="2024-12-13T14:07:31.754484542Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:07:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2949 runtime=io.containerd.runc.v2\n" Dec 13 14:07:31.754791 env[1209]: time="2024-12-13T14:07:31.754769232Z" level=info msg="TearDown network for sandbox \"d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858\" successfully" Dec 13 14:07:31.754833 env[1209]: time="2024-12-13T14:07:31.754793632Z" level=info msg="StopPodSandbox for 
\"d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858\" returns successfully" Dec 13 14:07:31.830760 kubelet[1416]: I1213 14:07:31.830618 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3cb2e938-3c39-4351-af74-19f67ce0d005-hubble-tls\") pod \"3cb2e938-3c39-4351-af74-19f67ce0d005\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " Dec 13 14:07:31.830760 kubelet[1416]: I1213 14:07:31.830760 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-lib-modules\") pod \"3cb2e938-3c39-4351-af74-19f67ce0d005\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " Dec 13 14:07:31.830939 kubelet[1416]: I1213 14:07:31.830781 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-host-proc-sys-net\") pod \"3cb2e938-3c39-4351-af74-19f67ce0d005\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " Dec 13 14:07:31.830939 kubelet[1416]: I1213 14:07:31.830812 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-host-proc-sys-kernel\") pod \"3cb2e938-3c39-4351-af74-19f67ce0d005\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " Dec 13 14:07:31.830939 kubelet[1416]: I1213 14:07:31.830830 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-etc-cni-netd\") pod \"3cb2e938-3c39-4351-af74-19f67ce0d005\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " Dec 13 14:07:31.830939 kubelet[1416]: I1213 14:07:31.830847 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-cilium-cgroup\") pod \"3cb2e938-3c39-4351-af74-19f67ce0d005\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " Dec 13 14:07:31.830939 kubelet[1416]: I1213 14:07:31.830873 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-558mb\" (UniqueName: \"kubernetes.io/projected/3cb2e938-3c39-4351-af74-19f67ce0d005-kube-api-access-558mb\") pod \"3cb2e938-3c39-4351-af74-19f67ce0d005\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " Dec 13 14:07:31.830939 kubelet[1416]: I1213 14:07:31.830892 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-cni-path\") pod \"3cb2e938-3c39-4351-af74-19f67ce0d005\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " Dec 13 14:07:31.831100 kubelet[1416]: I1213 14:07:31.830912 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-hostproc\") pod \"3cb2e938-3c39-4351-af74-19f67ce0d005\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " Dec 13 14:07:31.831100 kubelet[1416]: I1213 14:07:31.830929 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-bpf-maps\") pod \"3cb2e938-3c39-4351-af74-19f67ce0d005\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " Dec 13 14:07:31.831100 kubelet[1416]: I1213 14:07:31.830955 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-xtables-lock\") pod \"3cb2e938-3c39-4351-af74-19f67ce0d005\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " Dec 13 14:07:31.831100 kubelet[1416]: I1213 14:07:31.830994 1416 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-cilium-run\") pod \"3cb2e938-3c39-4351-af74-19f67ce0d005\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " Dec 13 14:07:31.831195 kubelet[1416]: I1213 14:07:31.831100 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3cb2e938-3c39-4351-af74-19f67ce0d005-clustermesh-secrets\") pod \"3cb2e938-3c39-4351-af74-19f67ce0d005\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " Dec 13 14:07:31.831195 kubelet[1416]: I1213 14:07:31.831128 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3cb2e938-3c39-4351-af74-19f67ce0d005-cilium-config-path\") pod \"3cb2e938-3c39-4351-af74-19f67ce0d005\" (UID: \"3cb2e938-3c39-4351-af74-19f67ce0d005\") " Dec 13 14:07:31.831550 kubelet[1416]: I1213 14:07:31.831348 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3cb2e938-3c39-4351-af74-19f67ce0d005" (UID: "3cb2e938-3c39-4351-af74-19f67ce0d005"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:31.831550 kubelet[1416]: I1213 14:07:31.831394 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3cb2e938-3c39-4351-af74-19f67ce0d005" (UID: "3cb2e938-3c39-4351-af74-19f67ce0d005"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:31.831550 kubelet[1416]: I1213 14:07:31.831421 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3cb2e938-3c39-4351-af74-19f67ce0d005" (UID: "3cb2e938-3c39-4351-af74-19f67ce0d005"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:31.831550 kubelet[1416]: I1213 14:07:31.831438 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3cb2e938-3c39-4351-af74-19f67ce0d005" (UID: "3cb2e938-3c39-4351-af74-19f67ce0d005"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:31.831550 kubelet[1416]: I1213 14:07:31.831454 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3cb2e938-3c39-4351-af74-19f67ce0d005" (UID: "3cb2e938-3c39-4351-af74-19f67ce0d005"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:31.831711 kubelet[1416]: I1213 14:07:31.831472 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3cb2e938-3c39-4351-af74-19f67ce0d005" (UID: "3cb2e938-3c39-4351-af74-19f67ce0d005"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:31.831711 kubelet[1416]: I1213 14:07:31.831494 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-cni-path" (OuterVolumeSpecName: "cni-path") pod "3cb2e938-3c39-4351-af74-19f67ce0d005" (UID: "3cb2e938-3c39-4351-af74-19f67ce0d005"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:31.831711 kubelet[1416]: I1213 14:07:31.831512 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-hostproc" (OuterVolumeSpecName: "hostproc") pod "3cb2e938-3c39-4351-af74-19f67ce0d005" (UID: "3cb2e938-3c39-4351-af74-19f67ce0d005"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:31.831711 kubelet[1416]: I1213 14:07:31.831531 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3cb2e938-3c39-4351-af74-19f67ce0d005" (UID: "3cb2e938-3c39-4351-af74-19f67ce0d005"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:31.831711 kubelet[1416]: I1213 14:07:31.831548 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3cb2e938-3c39-4351-af74-19f67ce0d005" (UID: "3cb2e938-3c39-4351-af74-19f67ce0d005"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:31.834167 kubelet[1416]: I1213 14:07:31.833938 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb2e938-3c39-4351-af74-19f67ce0d005-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3cb2e938-3c39-4351-af74-19f67ce0d005" (UID: "3cb2e938-3c39-4351-af74-19f67ce0d005"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:07:31.835237 kubelet[1416]: I1213 14:07:31.834605 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb2e938-3c39-4351-af74-19f67ce0d005-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3cb2e938-3c39-4351-af74-19f67ce0d005" (UID: "3cb2e938-3c39-4351-af74-19f67ce0d005"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:07:31.835762 kubelet[1416]: I1213 14:07:31.835725 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb2e938-3c39-4351-af74-19f67ce0d005-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3cb2e938-3c39-4351-af74-19f67ce0d005" (UID: "3cb2e938-3c39-4351-af74-19f67ce0d005"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:07:31.836046 kubelet[1416]: I1213 14:07:31.836005 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb2e938-3c39-4351-af74-19f67ce0d005-kube-api-access-558mb" (OuterVolumeSpecName: "kube-api-access-558mb") pod "3cb2e938-3c39-4351-af74-19f67ce0d005" (UID: "3cb2e938-3c39-4351-af74-19f67ce0d005"). InnerVolumeSpecName "kube-api-access-558mb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:07:31.836268 systemd[1]: var-lib-kubelet-pods-3cb2e938\x2d3c39\x2d4351\x2daf74\x2d19f67ce0d005-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:07:31.891230 kubelet[1416]: E1213 14:07:31.891141 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:31.931559 kubelet[1416]: I1213 14:07:31.931521 1416 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3cb2e938-3c39-4351-af74-19f67ce0d005-hubble-tls\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:31.931559 kubelet[1416]: I1213 14:07:31.931555 1416 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-lib-modules\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:31.931691 kubelet[1416]: I1213 14:07:31.931567 1416 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-host-proc-sys-net\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:31.931691 kubelet[1416]: I1213 14:07:31.931582 1416 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-host-proc-sys-kernel\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:31.931691 kubelet[1416]: I1213 14:07:31.931594 1416 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-etc-cni-netd\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:31.931691 kubelet[1416]: I1213 14:07:31.931603 1416 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-cilium-cgroup\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:31.931691 kubelet[1416]: I1213 14:07:31.931613 1416 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-558mb\" (UniqueName: \"kubernetes.io/projected/3cb2e938-3c39-4351-af74-19f67ce0d005-kube-api-access-558mb\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:31.931691 kubelet[1416]: I1213 14:07:31.931622 1416 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-cni-path\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:31.931691 kubelet[1416]: I1213 14:07:31.931631 1416 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-hostproc\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:31.931691 kubelet[1416]: I1213 14:07:31.931640 1416 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-bpf-maps\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:31.931904 kubelet[1416]: I1213 14:07:31.931649 1416 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-xtables-lock\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:31.931904 kubelet[1416]: I1213 14:07:31.931658 1416 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3cb2e938-3c39-4351-af74-19f67ce0d005-cilium-run\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:31.931904 kubelet[1416]: I1213 14:07:31.931668 1416 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3cb2e938-3c39-4351-af74-19f67ce0d005-clustermesh-secrets\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 
14:07:31.931904 kubelet[1416]: I1213 14:07:31.931677 1416 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3cb2e938-3c39-4351-af74-19f67ce0d005-cilium-config-path\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:32.042741 systemd[1]: Removed slice kubepods-burstable-pod3cb2e938_3c39_4351_af74_19f67ce0d005.slice. Dec 13 14:07:32.042821 systemd[1]: kubepods-burstable-pod3cb2e938_3c39_4351_af74_19f67ce0d005.slice: Consumed 6.472s CPU time. Dec 13 14:07:32.140854 kubelet[1416]: I1213 14:07:32.140829 1416 scope.go:117] "RemoveContainer" containerID="2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a" Dec 13 14:07:32.142705 env[1209]: time="2024-12-13T14:07:32.142620963Z" level=info msg="RemoveContainer for \"2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a\"" Dec 13 14:07:32.145532 env[1209]: time="2024-12-13T14:07:32.145498295Z" level=info msg="RemoveContainer for \"2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a\" returns successfully" Dec 13 14:07:32.145794 kubelet[1416]: I1213 14:07:32.145725 1416 scope.go:117] "RemoveContainer" containerID="52b6ef9e4a40dee4377df10e241073e633491b5327141e71f73a23d410f2bd96" Dec 13 14:07:32.146604 env[1209]: time="2024-12-13T14:07:32.146577529Z" level=info msg="RemoveContainer for \"52b6ef9e4a40dee4377df10e241073e633491b5327141e71f73a23d410f2bd96\"" Dec 13 14:07:32.148941 env[1209]: time="2024-12-13T14:07:32.148912723Z" level=info msg="RemoveContainer for \"52b6ef9e4a40dee4377df10e241073e633491b5327141e71f73a23d410f2bd96\" returns successfully" Dec 13 14:07:32.149081 kubelet[1416]: I1213 14:07:32.149061 1416 scope.go:117] "RemoveContainer" containerID="1554dbeaa86a7114a98500655e02998e228e5bb7163cd371c98d6909e44a0963" Dec 13 14:07:32.149931 env[1209]: time="2024-12-13T14:07:32.149903954Z" level=info msg="RemoveContainer for \"1554dbeaa86a7114a98500655e02998e228e5bb7163cd371c98d6909e44a0963\"" Dec 13 14:07:32.151964 
env[1209]: time="2024-12-13T14:07:32.151926899Z" level=info msg="RemoveContainer for \"1554dbeaa86a7114a98500655e02998e228e5bb7163cd371c98d6909e44a0963\" returns successfully" Dec 13 14:07:32.152105 kubelet[1416]: I1213 14:07:32.152085 1416 scope.go:117] "RemoveContainer" containerID="20998a871bd670211f1a57ead808ef294930d4dae54e96c0330d44c2cad1ac28" Dec 13 14:07:32.152875 env[1209]: time="2024-12-13T14:07:32.152851488Z" level=info msg="RemoveContainer for \"20998a871bd670211f1a57ead808ef294930d4dae54e96c0330d44c2cad1ac28\"" Dec 13 14:07:32.155147 env[1209]: time="2024-12-13T14:07:32.155113240Z" level=info msg="RemoveContainer for \"20998a871bd670211f1a57ead808ef294930d4dae54e96c0330d44c2cad1ac28\" returns successfully" Dec 13 14:07:32.155409 kubelet[1416]: I1213 14:07:32.155387 1416 scope.go:117] "RemoveContainer" containerID="b255d548f07eae7c7ba371865d152844b5135a9a933457d12e92d9dd841deb1d" Dec 13 14:07:32.156374 env[1209]: time="2024-12-13T14:07:32.156179553Z" level=info msg="RemoveContainer for \"b255d548f07eae7c7ba371865d152844b5135a9a933457d12e92d9dd841deb1d\"" Dec 13 14:07:32.159282 env[1209]: time="2024-12-13T14:07:32.159241291Z" level=info msg="RemoveContainer for \"b255d548f07eae7c7ba371865d152844b5135a9a933457d12e92d9dd841deb1d\" returns successfully" Dec 13 14:07:32.159432 kubelet[1416]: I1213 14:07:32.159413 1416 scope.go:117] "RemoveContainer" containerID="2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a" Dec 13 14:07:32.159717 env[1209]: time="2024-12-13T14:07:32.159636263Z" level=error msg="ContainerStatus for \"2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a\": not found" Dec 13 14:07:32.159857 kubelet[1416]: E1213 14:07:32.159840 1416 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error 
occurred when try to find container \"2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a\": not found" containerID="2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a" Dec 13 14:07:32.159931 kubelet[1416]: I1213 14:07:32.159921 1416 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a"} err="failed to get container status \"2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a\": rpc error: code = NotFound desc = an error occurred when try to find container \"2995fc519ce8dd48ca88c382f472410f205176e84fc7673a5e252a2b338bb03a\": not found" Dec 13 14:07:32.159959 kubelet[1416]: I1213 14:07:32.159936 1416 scope.go:117] "RemoveContainer" containerID="52b6ef9e4a40dee4377df10e241073e633491b5327141e71f73a23d410f2bd96" Dec 13 14:07:32.160119 env[1209]: time="2024-12-13T14:07:32.160070357Z" level=error msg="ContainerStatus for \"52b6ef9e4a40dee4377df10e241073e633491b5327141e71f73a23d410f2bd96\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"52b6ef9e4a40dee4377df10e241073e633491b5327141e71f73a23d410f2bd96\": not found" Dec 13 14:07:32.160227 kubelet[1416]: E1213 14:07:32.160216 1416 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"52b6ef9e4a40dee4377df10e241073e633491b5327141e71f73a23d410f2bd96\": not found" containerID="52b6ef9e4a40dee4377df10e241073e633491b5327141e71f73a23d410f2bd96" Dec 13 14:07:32.160283 kubelet[1416]: I1213 14:07:32.160239 1416 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"52b6ef9e4a40dee4377df10e241073e633491b5327141e71f73a23d410f2bd96"} err="failed to get container status \"52b6ef9e4a40dee4377df10e241073e633491b5327141e71f73a23d410f2bd96\": rpc error: code = NotFound desc = an error occurred when 
try to find container \"52b6ef9e4a40dee4377df10e241073e633491b5327141e71f73a23d410f2bd96\": not found" Dec 13 14:07:32.160283 kubelet[1416]: I1213 14:07:32.160259 1416 scope.go:117] "RemoveContainer" containerID="1554dbeaa86a7114a98500655e02998e228e5bb7163cd371c98d6909e44a0963" Dec 13 14:07:32.160496 env[1209]: time="2024-12-13T14:07:32.160449609Z" level=error msg="ContainerStatus for \"1554dbeaa86a7114a98500655e02998e228e5bb7163cd371c98d6909e44a0963\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1554dbeaa86a7114a98500655e02998e228e5bb7163cd371c98d6909e44a0963\": not found" Dec 13 14:07:32.160663 kubelet[1416]: E1213 14:07:32.160648 1416 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1554dbeaa86a7114a98500655e02998e228e5bb7163cd371c98d6909e44a0963\": not found" containerID="1554dbeaa86a7114a98500655e02998e228e5bb7163cd371c98d6909e44a0963" Dec 13 14:07:32.160710 kubelet[1416]: I1213 14:07:32.160675 1416 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1554dbeaa86a7114a98500655e02998e228e5bb7163cd371c98d6909e44a0963"} err="failed to get container status \"1554dbeaa86a7114a98500655e02998e228e5bb7163cd371c98d6909e44a0963\": rpc error: code = NotFound desc = an error occurred when try to find container \"1554dbeaa86a7114a98500655e02998e228e5bb7163cd371c98d6909e44a0963\": not found" Dec 13 14:07:32.160710 kubelet[1416]: I1213 14:07:32.160685 1416 scope.go:117] "RemoveContainer" containerID="20998a871bd670211f1a57ead808ef294930d4dae54e96c0330d44c2cad1ac28" Dec 13 14:07:32.160825 env[1209]: time="2024-12-13T14:07:32.160788580Z" level=error msg="ContainerStatus for \"20998a871bd670211f1a57ead808ef294930d4dae54e96c0330d44c2cad1ac28\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"20998a871bd670211f1a57ead808ef294930d4dae54e96c0330d44c2cad1ac28\": not found" Dec 13 14:07:32.160900 kubelet[1416]: E1213 14:07:32.160888 1416 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"20998a871bd670211f1a57ead808ef294930d4dae54e96c0330d44c2cad1ac28\": not found" containerID="20998a871bd670211f1a57ead808ef294930d4dae54e96c0330d44c2cad1ac28" Dec 13 14:07:32.160937 kubelet[1416]: I1213 14:07:32.160909 1416 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"20998a871bd670211f1a57ead808ef294930d4dae54e96c0330d44c2cad1ac28"} err="failed to get container status \"20998a871bd670211f1a57ead808ef294930d4dae54e96c0330d44c2cad1ac28\": rpc error: code = NotFound desc = an error occurred when try to find container \"20998a871bd670211f1a57ead808ef294930d4dae54e96c0330d44c2cad1ac28\": not found" Dec 13 14:07:32.160937 kubelet[1416]: I1213 14:07:32.160926 1416 scope.go:117] "RemoveContainer" containerID="b255d548f07eae7c7ba371865d152844b5135a9a933457d12e92d9dd841deb1d" Dec 13 14:07:32.161102 env[1209]: time="2024-12-13T14:07:32.161058428Z" level=error msg="ContainerStatus for \"b255d548f07eae7c7ba371865d152844b5135a9a933457d12e92d9dd841deb1d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b255d548f07eae7c7ba371865d152844b5135a9a933457d12e92d9dd841deb1d\": not found" Dec 13 14:07:32.161190 kubelet[1416]: E1213 14:07:32.161173 1416 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b255d548f07eae7c7ba371865d152844b5135a9a933457d12e92d9dd841deb1d\": not found" containerID="b255d548f07eae7c7ba371865d152844b5135a9a933457d12e92d9dd841deb1d" Dec 13 14:07:32.161246 kubelet[1416]: I1213 14:07:32.161214 1416 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"b255d548f07eae7c7ba371865d152844b5135a9a933457d12e92d9dd841deb1d"} err="failed to get container status \"b255d548f07eae7c7ba371865d152844b5135a9a933457d12e92d9dd841deb1d\": rpc error: code = NotFound desc = an error occurred when try to find container \"b255d548f07eae7c7ba371865d152844b5135a9a933457d12e92d9dd841deb1d\": not found" Dec 13 14:07:32.567326 systemd[1]: var-lib-kubelet-pods-3cb2e938\x2d3c39\x2d4351\x2daf74\x2d19f67ce0d005-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d558mb.mount: Deactivated successfully. Dec 13 14:07:32.567418 systemd[1]: var-lib-kubelet-pods-3cb2e938\x2d3c39\x2d4351\x2daf74\x2d19f67ce0d005-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:07:32.891950 kubelet[1416]: E1213 14:07:32.891912 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:33.892673 kubelet[1416]: E1213 14:07:33.892617 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:34.039961 kubelet[1416]: I1213 14:07:34.039928 1416 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3cb2e938-3c39-4351-af74-19f67ce0d005" path="/var/lib/kubelet/pods/3cb2e938-3c39-4351-af74-19f67ce0d005/volumes" Dec 13 14:07:34.691802 kubelet[1416]: I1213 14:07:34.691620 1416 topology_manager.go:215] "Topology Admit Handler" podUID="35f4c847-8f97-4856-923b-bc2a866f7456" podNamespace="kube-system" podName="cilium-52zdw" Dec 13 14:07:34.691802 kubelet[1416]: E1213 14:07:34.691751 1416 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3cb2e938-3c39-4351-af74-19f67ce0d005" containerName="apply-sysctl-overwrites" Dec 13 14:07:34.691802 kubelet[1416]: E1213 14:07:34.691783 1416 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3cb2e938-3c39-4351-af74-19f67ce0d005" containerName="mount-cgroup" Dec 13 
14:07:34.691802 kubelet[1416]: E1213 14:07:34.691794 1416 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3cb2e938-3c39-4351-af74-19f67ce0d005" containerName="mount-bpf-fs" Dec 13 14:07:34.691802 kubelet[1416]: E1213 14:07:34.691804 1416 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3cb2e938-3c39-4351-af74-19f67ce0d005" containerName="clean-cilium-state" Dec 13 14:07:34.691802 kubelet[1416]: E1213 14:07:34.691811 1416 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3cb2e938-3c39-4351-af74-19f67ce0d005" containerName="cilium-agent" Dec 13 14:07:34.692079 kubelet[1416]: I1213 14:07:34.691833 1416 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cb2e938-3c39-4351-af74-19f67ce0d005" containerName="cilium-agent" Dec 13 14:07:34.692079 kubelet[1416]: I1213 14:07:34.692046 1416 topology_manager.go:215] "Topology Admit Handler" podUID="c25f64dd-9928-4dec-8517-c462ad342fb1" podNamespace="kube-system" podName="cilium-operator-5cc964979-vzwdn" Dec 13 14:07:34.696673 systemd[1]: Created slice kubepods-besteffort-podc25f64dd_9928_4dec_8517_c462ad342fb1.slice. Dec 13 14:07:34.700177 systemd[1]: Created slice kubepods-burstable-pod35f4c847_8f97_4856_923b_bc2a866f7456.slice. 
Dec 13 14:07:34.744024 kubelet[1416]: I1213 14:07:34.743984 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-cni-path\") pod \"cilium-52zdw\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " pod="kube-system/cilium-52zdw" Dec 13 14:07:34.744024 kubelet[1416]: I1213 14:07:34.744025 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/35f4c847-8f97-4856-923b-bc2a866f7456-clustermesh-secrets\") pod \"cilium-52zdw\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " pod="kube-system/cilium-52zdw" Dec 13 14:07:34.744221 kubelet[1416]: I1213 14:07:34.744045 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-host-proc-sys-kernel\") pod \"cilium-52zdw\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " pod="kube-system/cilium-52zdw" Dec 13 14:07:34.744221 kubelet[1416]: I1213 14:07:34.744065 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/35f4c847-8f97-4856-923b-bc2a866f7456-hubble-tls\") pod \"cilium-52zdw\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " pod="kube-system/cilium-52zdw" Dec 13 14:07:34.744221 kubelet[1416]: I1213 14:07:34.744108 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck5jc\" (UniqueName: \"kubernetes.io/projected/c25f64dd-9928-4dec-8517-c462ad342fb1-kube-api-access-ck5jc\") pod \"cilium-operator-5cc964979-vzwdn\" (UID: \"c25f64dd-9928-4dec-8517-c462ad342fb1\") " pod="kube-system/cilium-operator-5cc964979-vzwdn" Dec 13 14:07:34.744221 kubelet[1416]: I1213 14:07:34.744128 1416 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-xtables-lock\") pod \"cilium-52zdw\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " pod="kube-system/cilium-52zdw" Dec 13 14:07:34.744221 kubelet[1416]: I1213 14:07:34.744145 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-host-proc-sys-net\") pod \"cilium-52zdw\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " pod="kube-system/cilium-52zdw" Dec 13 14:07:34.744366 kubelet[1416]: I1213 14:07:34.744164 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-lib-modules\") pod \"cilium-52zdw\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " pod="kube-system/cilium-52zdw" Dec 13 14:07:34.744366 kubelet[1416]: I1213 14:07:34.744187 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35f4c847-8f97-4856-923b-bc2a866f7456-cilium-config-path\") pod \"cilium-52zdw\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " pod="kube-system/cilium-52zdw" Dec 13 14:07:34.744366 kubelet[1416]: I1213 14:07:34.744219 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c25f64dd-9928-4dec-8517-c462ad342fb1-cilium-config-path\") pod \"cilium-operator-5cc964979-vzwdn\" (UID: \"c25f64dd-9928-4dec-8517-c462ad342fb1\") " pod="kube-system/cilium-operator-5cc964979-vzwdn" Dec 13 14:07:34.744366 kubelet[1416]: I1213 14:07:34.744239 1416 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/35f4c847-8f97-4856-923b-bc2a866f7456-cilium-ipsec-secrets\") pod \"cilium-52zdw\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " pod="kube-system/cilium-52zdw" Dec 13 14:07:34.744366 kubelet[1416]: I1213 14:07:34.744258 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-cilium-run\") pod \"cilium-52zdw\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " pod="kube-system/cilium-52zdw" Dec 13 14:07:34.744480 kubelet[1416]: I1213 14:07:34.744277 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-etc-cni-netd\") pod \"cilium-52zdw\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " pod="kube-system/cilium-52zdw" Dec 13 14:07:34.744480 kubelet[1416]: I1213 14:07:34.744297 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-bpf-maps\") pod \"cilium-52zdw\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " pod="kube-system/cilium-52zdw" Dec 13 14:07:34.744480 kubelet[1416]: I1213 14:07:34.744315 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-hostproc\") pod \"cilium-52zdw\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " pod="kube-system/cilium-52zdw" Dec 13 14:07:34.744480 kubelet[1416]: I1213 14:07:34.744343 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-cilium-cgroup\") pod \"cilium-52zdw\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " pod="kube-system/cilium-52zdw" Dec 13 14:07:34.744480 kubelet[1416]: I1213 14:07:34.744363 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r72t\" (UniqueName: \"kubernetes.io/projected/35f4c847-8f97-4856-923b-bc2a866f7456-kube-api-access-7r72t\") pod \"cilium-52zdw\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " pod="kube-system/cilium-52zdw" Dec 13 14:07:34.855019 kubelet[1416]: E1213 14:07:34.854976 1416 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-7r72t], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-52zdw" podUID="35f4c847-8f97-4856-923b-bc2a866f7456" Dec 13 14:07:34.893245 kubelet[1416]: E1213 14:07:34.893179 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:34.994360 kubelet[1416]: E1213 14:07:34.994255 1416 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:07:34.998509 kubelet[1416]: E1213 14:07:34.998470 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:34.999037 env[1209]: time="2024-12-13T14:07:34.998988010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-vzwdn,Uid:c25f64dd-9928-4dec-8517-c462ad342fb1,Namespace:kube-system,Attempt:0,}" Dec 13 14:07:35.011960 env[1209]: time="2024-12-13T14:07:35.011894779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:07:35.011960 env[1209]: time="2024-12-13T14:07:35.011932700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:07:35.011960 env[1209]: time="2024-12-13T14:07:35.011942820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:07:35.012266 env[1209]: time="2024-12-13T14:07:35.012060064Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c3204c40d9334972c787abe78df55af0e41fb8f835238275057f9a751a932a37 pid=2979 runtime=io.containerd.runc.v2 Dec 13 14:07:35.021742 systemd[1]: Started cri-containerd-c3204c40d9334972c787abe78df55af0e41fb8f835238275057f9a751a932a37.scope. Dec 13 14:07:35.060729 env[1209]: time="2024-12-13T14:07:35.060684127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-vzwdn,Uid:c25f64dd-9928-4dec-8517-c462ad342fb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3204c40d9334972c787abe78df55af0e41fb8f835238275057f9a751a932a37\"" Dec 13 14:07:35.065219 kubelet[1416]: E1213 14:07:35.061662 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:35.066511 env[1209]: time="2024-12-13T14:07:35.066472612Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:07:35.248685 kubelet[1416]: I1213 14:07:35.248576 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/35f4c847-8f97-4856-923b-bc2a866f7456-hubble-tls\") pod \"35f4c847-8f97-4856-923b-bc2a866f7456\" (UID: 
\"35f4c847-8f97-4856-923b-bc2a866f7456\") " Dec 13 14:07:35.248685 kubelet[1416]: I1213 14:07:35.248617 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-etc-cni-netd\") pod \"35f4c847-8f97-4856-923b-bc2a866f7456\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " Dec 13 14:07:35.248685 kubelet[1416]: I1213 14:07:35.248644 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-cni-path\") pod \"35f4c847-8f97-4856-923b-bc2a866f7456\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " Dec 13 14:07:35.248685 kubelet[1416]: I1213 14:07:35.248664 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-host-proc-sys-net\") pod \"35f4c847-8f97-4856-923b-bc2a866f7456\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " Dec 13 14:07:35.248685 kubelet[1416]: I1213 14:07:35.248684 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-cilium-run\") pod \"35f4c847-8f97-4856-923b-bc2a866f7456\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " Dec 13 14:07:35.248895 kubelet[1416]: I1213 14:07:35.248701 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-bpf-maps\") pod \"35f4c847-8f97-4856-923b-bc2a866f7456\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " Dec 13 14:07:35.248895 kubelet[1416]: I1213 14:07:35.248724 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/35f4c847-8f97-4856-923b-bc2a866f7456-clustermesh-secrets\") pod \"35f4c847-8f97-4856-923b-bc2a866f7456\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " Dec 13 14:07:35.248895 kubelet[1416]: I1213 14:07:35.248748 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35f4c847-8f97-4856-923b-bc2a866f7456-cilium-config-path\") pod \"35f4c847-8f97-4856-923b-bc2a866f7456\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " Dec 13 14:07:35.248895 kubelet[1416]: I1213 14:07:35.248734 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "35f4c847-8f97-4856-923b-bc2a866f7456" (UID: "35f4c847-8f97-4856-923b-bc2a866f7456"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:35.248895 kubelet[1416]: I1213 14:07:35.248765 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-cilium-cgroup\") pod \"35f4c847-8f97-4856-923b-bc2a866f7456\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " Dec 13 14:07:35.248895 kubelet[1416]: I1213 14:07:35.248783 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-host-proc-sys-kernel\") pod \"35f4c847-8f97-4856-923b-bc2a866f7456\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " Dec 13 14:07:35.249044 kubelet[1416]: I1213 14:07:35.248804 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-lib-modules\") pod \"35f4c847-8f97-4856-923b-bc2a866f7456\" (UID: 
\"35f4c847-8f97-4856-923b-bc2a866f7456\") " Dec 13 14:07:35.249044 kubelet[1416]: I1213 14:07:35.248839 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-hostproc\") pod \"35f4c847-8f97-4856-923b-bc2a866f7456\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " Dec 13 14:07:35.249044 kubelet[1416]: I1213 14:07:35.248880 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/35f4c847-8f97-4856-923b-bc2a866f7456-cilium-ipsec-secrets\") pod \"35f4c847-8f97-4856-923b-bc2a866f7456\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " Dec 13 14:07:35.249044 kubelet[1416]: I1213 14:07:35.248900 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7r72t\" (UniqueName: \"kubernetes.io/projected/35f4c847-8f97-4856-923b-bc2a866f7456-kube-api-access-7r72t\") pod \"35f4c847-8f97-4856-923b-bc2a866f7456\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " Dec 13 14:07:35.249044 kubelet[1416]: I1213 14:07:35.248917 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-xtables-lock\") pod \"35f4c847-8f97-4856-923b-bc2a866f7456\" (UID: \"35f4c847-8f97-4856-923b-bc2a866f7456\") " Dec 13 14:07:35.249044 kubelet[1416]: I1213 14:07:35.248943 1416 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-etc-cni-netd\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:35.249179 kubelet[1416]: I1213 14:07:35.248973 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod 
"35f4c847-8f97-4856-923b-bc2a866f7456" (UID: "35f4c847-8f97-4856-923b-bc2a866f7456"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:35.249179 kubelet[1416]: I1213 14:07:35.248997 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-cni-path" (OuterVolumeSpecName: "cni-path") pod "35f4c847-8f97-4856-923b-bc2a866f7456" (UID: "35f4c847-8f97-4856-923b-bc2a866f7456"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:35.249179 kubelet[1416]: I1213 14:07:35.249014 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "35f4c847-8f97-4856-923b-bc2a866f7456" (UID: "35f4c847-8f97-4856-923b-bc2a866f7456"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:35.249179 kubelet[1416]: I1213 14:07:35.249028 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "35f4c847-8f97-4856-923b-bc2a866f7456" (UID: "35f4c847-8f97-4856-923b-bc2a866f7456"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:35.249179 kubelet[1416]: I1213 14:07:35.249043 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "35f4c847-8f97-4856-923b-bc2a866f7456" (UID: "35f4c847-8f97-4856-923b-bc2a866f7456"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:35.251226 kubelet[1416]: I1213 14:07:35.249376 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "35f4c847-8f97-4856-923b-bc2a866f7456" (UID: "35f4c847-8f97-4856-923b-bc2a866f7456"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:35.251226 kubelet[1416]: I1213 14:07:35.249675 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-hostproc" (OuterVolumeSpecName: "hostproc") pod "35f4c847-8f97-4856-923b-bc2a866f7456" (UID: "35f4c847-8f97-4856-923b-bc2a866f7456"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:35.251226 kubelet[1416]: I1213 14:07:35.249711 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "35f4c847-8f97-4856-923b-bc2a866f7456" (UID: "35f4c847-8f97-4856-923b-bc2a866f7456"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:35.251226 kubelet[1416]: I1213 14:07:35.249730 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "35f4c847-8f97-4856-923b-bc2a866f7456" (UID: "35f4c847-8f97-4856-923b-bc2a866f7456"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:35.251750 kubelet[1416]: I1213 14:07:35.251711 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35f4c847-8f97-4856-923b-bc2a866f7456-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "35f4c847-8f97-4856-923b-bc2a866f7456" (UID: "35f4c847-8f97-4856-923b-bc2a866f7456"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:07:35.251825 kubelet[1416]: I1213 14:07:35.251806 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f4c847-8f97-4856-923b-bc2a866f7456-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "35f4c847-8f97-4856-923b-bc2a866f7456" (UID: "35f4c847-8f97-4856-923b-bc2a866f7456"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:07:35.252015 kubelet[1416]: I1213 14:07:35.251989 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35f4c847-8f97-4856-923b-bc2a866f7456-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "35f4c847-8f97-4856-923b-bc2a866f7456" (UID: "35f4c847-8f97-4856-923b-bc2a866f7456"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:07:35.252246 kubelet[1416]: I1213 14:07:35.252225 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f4c847-8f97-4856-923b-bc2a866f7456-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "35f4c847-8f97-4856-923b-bc2a866f7456" (UID: "35f4c847-8f97-4856-923b-bc2a866f7456"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:07:35.253780 kubelet[1416]: I1213 14:07:35.253746 1416 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35f4c847-8f97-4856-923b-bc2a866f7456-kube-api-access-7r72t" (OuterVolumeSpecName: "kube-api-access-7r72t") pod "35f4c847-8f97-4856-923b-bc2a866f7456" (UID: "35f4c847-8f97-4856-923b-bc2a866f7456"). InnerVolumeSpecName "kube-api-access-7r72t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:07:35.349285 kubelet[1416]: I1213 14:07:35.349242 1416 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-hostproc\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:35.349285 kubelet[1416]: I1213 14:07:35.349277 1416 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-cilium-cgroup\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:35.349285 kubelet[1416]: I1213 14:07:35.349293 1416 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-host-proc-sys-kernel\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:35.349449 kubelet[1416]: I1213 14:07:35.349304 1416 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-lib-modules\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:35.349449 kubelet[1416]: I1213 14:07:35.349315 1416 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/35f4c847-8f97-4856-923b-bc2a866f7456-cilium-ipsec-secrets\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:35.349449 kubelet[1416]: I1213 14:07:35.349326 1416 reconciler_common.go:300] "Volume detached for volume 
\"kube-api-access-7r72t\" (UniqueName: \"kubernetes.io/projected/35f4c847-8f97-4856-923b-bc2a866f7456-kube-api-access-7r72t\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:35.349449 kubelet[1416]: I1213 14:07:35.349342 1416 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-xtables-lock\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:35.349449 kubelet[1416]: I1213 14:07:35.349355 1416 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/35f4c847-8f97-4856-923b-bc2a866f7456-hubble-tls\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:35.349449 kubelet[1416]: I1213 14:07:35.349364 1416 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-cni-path\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:35.349449 kubelet[1416]: I1213 14:07:35.349374 1416 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-host-proc-sys-net\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:35.349449 kubelet[1416]: I1213 14:07:35.349383 1416 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-bpf-maps\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:35.349628 kubelet[1416]: I1213 14:07:35.349393 1416 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/35f4c847-8f97-4856-923b-bc2a866f7456-cilium-run\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:35.349628 kubelet[1416]: I1213 14:07:35.349403 1416 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/35f4c847-8f97-4856-923b-bc2a866f7456-clustermesh-secrets\") on 
node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:35.349628 kubelet[1416]: I1213 14:07:35.349415 1416 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35f4c847-8f97-4856-923b-bc2a866f7456-cilium-config-path\") on node \"10.0.0.69\" DevicePath \"\"" Dec 13 14:07:35.849937 systemd[1]: var-lib-kubelet-pods-35f4c847\x2d8f97\x2d4856\x2d923b\x2dbc2a866f7456-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7r72t.mount: Deactivated successfully. Dec 13 14:07:35.850023 systemd[1]: var-lib-kubelet-pods-35f4c847\x2d8f97\x2d4856\x2d923b\x2dbc2a866f7456-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:07:35.850088 systemd[1]: var-lib-kubelet-pods-35f4c847\x2d8f97\x2d4856\x2d923b\x2dbc2a866f7456-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:07:35.850142 systemd[1]: var-lib-kubelet-pods-35f4c847\x2d8f97\x2d4856\x2d923b\x2dbc2a866f7456-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:07:35.894014 kubelet[1416]: E1213 14:07:35.893986 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:36.043046 systemd[1]: Removed slice kubepods-burstable-pod35f4c847_8f97_4856_923b_bc2a866f7456.slice. Dec 13 14:07:36.176514 kubelet[1416]: I1213 14:07:36.176415 1416 topology_manager.go:215] "Topology Admit Handler" podUID="ff2f87d5-a84d-4f39-9994-711c0eb0b723" podNamespace="kube-system" podName="cilium-qqstb" Dec 13 14:07:36.181946 systemd[1]: Created slice kubepods-burstable-podff2f87d5_a84d_4f39_9994_711c0eb0b723.slice. 
Dec 13 14:07:36.254192 kubelet[1416]: I1213 14:07:36.254154 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ff2f87d5-a84d-4f39-9994-711c0eb0b723-hostproc\") pod \"cilium-qqstb\" (UID: \"ff2f87d5-a84d-4f39-9994-711c0eb0b723\") " pod="kube-system/cilium-qqstb" Dec 13 14:07:36.254192 kubelet[1416]: I1213 14:07:36.254193 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff2f87d5-a84d-4f39-9994-711c0eb0b723-etc-cni-netd\") pod \"cilium-qqstb\" (UID: \"ff2f87d5-a84d-4f39-9994-711c0eb0b723\") " pod="kube-system/cilium-qqstb" Dec 13 14:07:36.254357 kubelet[1416]: I1213 14:07:36.254223 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff2f87d5-a84d-4f39-9994-711c0eb0b723-lib-modules\") pod \"cilium-qqstb\" (UID: \"ff2f87d5-a84d-4f39-9994-711c0eb0b723\") " pod="kube-system/cilium-qqstb" Dec 13 14:07:36.254357 kubelet[1416]: I1213 14:07:36.254250 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff2f87d5-a84d-4f39-9994-711c0eb0b723-cilium-config-path\") pod \"cilium-qqstb\" (UID: \"ff2f87d5-a84d-4f39-9994-711c0eb0b723\") " pod="kube-system/cilium-qqstb" Dec 13 14:07:36.254434 kubelet[1416]: I1213 14:07:36.254344 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4ls6\" (UniqueName: \"kubernetes.io/projected/ff2f87d5-a84d-4f39-9994-711c0eb0b723-kube-api-access-d4ls6\") pod \"cilium-qqstb\" (UID: \"ff2f87d5-a84d-4f39-9994-711c0eb0b723\") " pod="kube-system/cilium-qqstb" Dec 13 14:07:36.254460 kubelet[1416]: I1213 14:07:36.254439 1416 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ff2f87d5-a84d-4f39-9994-711c0eb0b723-bpf-maps\") pod \"cilium-qqstb\" (UID: \"ff2f87d5-a84d-4f39-9994-711c0eb0b723\") " pod="kube-system/cilium-qqstb" Dec 13 14:07:36.254485 kubelet[1416]: I1213 14:07:36.254475 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ff2f87d5-a84d-4f39-9994-711c0eb0b723-host-proc-sys-kernel\") pod \"cilium-qqstb\" (UID: \"ff2f87d5-a84d-4f39-9994-711c0eb0b723\") " pod="kube-system/cilium-qqstb" Dec 13 14:07:36.254521 kubelet[1416]: I1213 14:07:36.254497 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ff2f87d5-a84d-4f39-9994-711c0eb0b723-cilium-run\") pod \"cilium-qqstb\" (UID: \"ff2f87d5-a84d-4f39-9994-711c0eb0b723\") " pod="kube-system/cilium-qqstb" Dec 13 14:07:36.254557 kubelet[1416]: I1213 14:07:36.254544 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ff2f87d5-a84d-4f39-9994-711c0eb0b723-hubble-tls\") pod \"cilium-qqstb\" (UID: \"ff2f87d5-a84d-4f39-9994-711c0eb0b723\") " pod="kube-system/cilium-qqstb" Dec 13 14:07:36.254616 kubelet[1416]: I1213 14:07:36.254596 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ff2f87d5-a84d-4f39-9994-711c0eb0b723-cilium-cgroup\") pod \"cilium-qqstb\" (UID: \"ff2f87d5-a84d-4f39-9994-711c0eb0b723\") " pod="kube-system/cilium-qqstb" Dec 13 14:07:36.254651 kubelet[1416]: I1213 14:07:36.254624 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/ff2f87d5-a84d-4f39-9994-711c0eb0b723-cilium-ipsec-secrets\") pod \"cilium-qqstb\" (UID: \"ff2f87d5-a84d-4f39-9994-711c0eb0b723\") " pod="kube-system/cilium-qqstb" Dec 13 14:07:36.254651 kubelet[1416]: I1213 14:07:36.254645 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff2f87d5-a84d-4f39-9994-711c0eb0b723-xtables-lock\") pod \"cilium-qqstb\" (UID: \"ff2f87d5-a84d-4f39-9994-711c0eb0b723\") " pod="kube-system/cilium-qqstb" Dec 13 14:07:36.254701 kubelet[1416]: I1213 14:07:36.254689 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ff2f87d5-a84d-4f39-9994-711c0eb0b723-host-proc-sys-net\") pod \"cilium-qqstb\" (UID: \"ff2f87d5-a84d-4f39-9994-711c0eb0b723\") " pod="kube-system/cilium-qqstb" Dec 13 14:07:36.254727 kubelet[1416]: I1213 14:07:36.254720 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ff2f87d5-a84d-4f39-9994-711c0eb0b723-cni-path\") pod \"cilium-qqstb\" (UID: \"ff2f87d5-a84d-4f39-9994-711c0eb0b723\") " pod="kube-system/cilium-qqstb" Dec 13 14:07:36.254763 kubelet[1416]: I1213 14:07:36.254753 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ff2f87d5-a84d-4f39-9994-711c0eb0b723-clustermesh-secrets\") pod \"cilium-qqstb\" (UID: \"ff2f87d5-a84d-4f39-9994-711c0eb0b723\") " pod="kube-system/cilium-qqstb" Dec 13 14:07:36.495476 kubelet[1416]: E1213 14:07:36.495374 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:36.496380 env[1209]: time="2024-12-13T14:07:36.496332842Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qqstb,Uid:ff2f87d5-a84d-4f39-9994-711c0eb0b723,Namespace:kube-system,Attempt:0,}" Dec 13 14:07:36.509442 env[1209]: time="2024-12-13T14:07:36.509375201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:07:36.509442 env[1209]: time="2024-12-13T14:07:36.509414882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:07:36.509442 env[1209]: time="2024-12-13T14:07:36.509425242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:07:36.509637 env[1209]: time="2024-12-13T14:07:36.509537885Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/03421035dce605fa64f8900dc574f36555106a9a5f1d8d2487e900e3dddb88f4 pid=3028 runtime=io.containerd.runc.v2 Dec 13 14:07:36.518797 systemd[1]: Started cri-containerd-03421035dce605fa64f8900dc574f36555106a9a5f1d8d2487e900e3dddb88f4.scope. 
Dec 13 14:07:36.546720 env[1209]: time="2024-12-13T14:07:36.546664266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qqstb,Uid:ff2f87d5-a84d-4f39-9994-711c0eb0b723,Namespace:kube-system,Attempt:0,} returns sandbox id \"03421035dce605fa64f8900dc574f36555106a9a5f1d8d2487e900e3dddb88f4\"" Dec 13 14:07:36.547299 kubelet[1416]: E1213 14:07:36.547269 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:36.549966 env[1209]: time="2024-12-13T14:07:36.549928236Z" level=info msg="CreateContainer within sandbox \"03421035dce605fa64f8900dc574f36555106a9a5f1d8d2487e900e3dddb88f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:07:36.559138 env[1209]: time="2024-12-13T14:07:36.559097888Z" level=info msg="CreateContainer within sandbox \"03421035dce605fa64f8900dc574f36555106a9a5f1d8d2487e900e3dddb88f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e264c6cdec3ca1301778dc826bffc843ac9e6d0d0429b14fddf69e7df7811e56\"" Dec 13 14:07:36.559670 env[1209]: time="2024-12-13T14:07:36.559639263Z" level=info msg="StartContainer for \"e264c6cdec3ca1301778dc826bffc843ac9e6d0d0429b14fddf69e7df7811e56\"" Dec 13 14:07:36.572538 systemd[1]: Started cri-containerd-e264c6cdec3ca1301778dc826bffc843ac9e6d0d0429b14fddf69e7df7811e56.scope. Dec 13 14:07:36.610383 env[1209]: time="2024-12-13T14:07:36.610332298Z" level=info msg="StartContainer for \"e264c6cdec3ca1301778dc826bffc843ac9e6d0d0429b14fddf69e7df7811e56\" returns successfully" Dec 13 14:07:36.614127 systemd[1]: cri-containerd-e264c6cdec3ca1301778dc826bffc843ac9e6d0d0429b14fddf69e7df7811e56.scope: Deactivated successfully. 
Dec 13 14:07:36.637233 env[1209]: time="2024-12-13T14:07:36.637163476Z" level=info msg="shim disconnected" id=e264c6cdec3ca1301778dc826bffc843ac9e6d0d0429b14fddf69e7df7811e56 Dec 13 14:07:36.637473 env[1209]: time="2024-12-13T14:07:36.637247518Z" level=warning msg="cleaning up after shim disconnected" id=e264c6cdec3ca1301778dc826bffc843ac9e6d0d0429b14fddf69e7df7811e56 namespace=k8s.io Dec 13 14:07:36.637473 env[1209]: time="2024-12-13T14:07:36.637261278Z" level=info msg="cleaning up dead shim" Dec 13 14:07:36.643668 env[1209]: time="2024-12-13T14:07:36.643609413Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:07:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3112 runtime=io.containerd.runc.v2\n" Dec 13 14:07:36.894407 kubelet[1416]: E1213 14:07:36.894366 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:37.150994 kubelet[1416]: E1213 14:07:37.150911 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:37.153696 env[1209]: time="2024-12-13T14:07:37.153230858Z" level=info msg="CreateContainer within sandbox \"03421035dce605fa64f8900dc574f36555106a9a5f1d8d2487e900e3dddb88f4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:07:37.162136 env[1209]: time="2024-12-13T14:07:37.162102774Z" level=info msg="CreateContainer within sandbox \"03421035dce605fa64f8900dc574f36555106a9a5f1d8d2487e900e3dddb88f4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0d600b734e7e8fb8a4776e51a1a5f9b7160028bc0d2c4b588f0f1e171309f503\"" Dec 13 14:07:37.163824 env[1209]: time="2024-12-13T14:07:37.163772378Z" level=info msg="StartContainer for \"0d600b734e7e8fb8a4776e51a1a5f9b7160028bc0d2c4b588f0f1e171309f503\"" Dec 13 14:07:37.180109 systemd[1]: Started 
cri-containerd-0d600b734e7e8fb8a4776e51a1a5f9b7160028bc0d2c4b588f0f1e171309f503.scope. Dec 13 14:07:37.213149 env[1209]: time="2024-12-13T14:07:37.213107251Z" level=info msg="StartContainer for \"0d600b734e7e8fb8a4776e51a1a5f9b7160028bc0d2c4b588f0f1e171309f503\" returns successfully" Dec 13 14:07:37.217172 systemd[1]: cri-containerd-0d600b734e7e8fb8a4776e51a1a5f9b7160028bc0d2c4b588f0f1e171309f503.scope: Deactivated successfully. Dec 13 14:07:37.233369 env[1209]: time="2024-12-13T14:07:37.233321669Z" level=info msg="shim disconnected" id=0d600b734e7e8fb8a4776e51a1a5f9b7160028bc0d2c4b588f0f1e171309f503 Dec 13 14:07:37.233524 env[1209]: time="2024-12-13T14:07:37.233370271Z" level=warning msg="cleaning up after shim disconnected" id=0d600b734e7e8fb8a4776e51a1a5f9b7160028bc0d2c4b588f0f1e171309f503 namespace=k8s.io Dec 13 14:07:37.233524 env[1209]: time="2024-12-13T14:07:37.233380751Z" level=info msg="cleaning up dead shim" Dec 13 14:07:37.239447 env[1209]: time="2024-12-13T14:07:37.239416311Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:07:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3173 runtime=io.containerd.runc.v2\n" Dec 13 14:07:37.848914 systemd[1]: run-containerd-runc-k8s.io-0d600b734e7e8fb8a4776e51a1a5f9b7160028bc0d2c4b588f0f1e171309f503-runc.ngHOOU.mount: Deactivated successfully. Dec 13 14:07:37.849005 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d600b734e7e8fb8a4776e51a1a5f9b7160028bc0d2c4b588f0f1e171309f503-rootfs.mount: Deactivated successfully. 
Dec 13 14:07:37.894792 kubelet[1416]: E1213 14:07:37.894750 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:38.039873 kubelet[1416]: I1213 14:07:38.039836 1416 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="35f4c847-8f97-4856-923b-bc2a866f7456" path="/var/lib/kubelet/pods/35f4c847-8f97-4856-923b-bc2a866f7456/volumes" Dec 13 14:07:38.154426 kubelet[1416]: E1213 14:07:38.154304 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:38.156274 env[1209]: time="2024-12-13T14:07:38.156233780Z" level=info msg="CreateContainer within sandbox \"03421035dce605fa64f8900dc574f36555106a9a5f1d8d2487e900e3dddb88f4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:07:38.170945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount560742485.mount: Deactivated successfully. Dec 13 14:07:38.174340 env[1209]: time="2024-12-13T14:07:38.174291245Z" level=info msg="CreateContainer within sandbox \"03421035dce605fa64f8900dc574f36555106a9a5f1d8d2487e900e3dddb88f4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"433fd9d8b8662822e6c9e5d339bf8febff973a77eb159b334ddb2e0c479030b7\"" Dec 13 14:07:38.175038 env[1209]: time="2024-12-13T14:07:38.175012864Z" level=info msg="StartContainer for \"433fd9d8b8662822e6c9e5d339bf8febff973a77eb159b334ddb2e0c479030b7\"" Dec 13 14:07:38.191752 systemd[1]: Started cri-containerd-433fd9d8b8662822e6c9e5d339bf8febff973a77eb159b334ddb2e0c479030b7.scope. Dec 13 14:07:38.225983 systemd[1]: cri-containerd-433fd9d8b8662822e6c9e5d339bf8febff973a77eb159b334ddb2e0c479030b7.scope: Deactivated successfully. 
Dec 13 14:07:38.226731 env[1209]: time="2024-12-13T14:07:38.226694676Z" level=info msg="StartContainer for \"433fd9d8b8662822e6c9e5d339bf8febff973a77eb159b334ddb2e0c479030b7\" returns successfully" Dec 13 14:07:38.247077 env[1209]: time="2024-12-13T14:07:38.247033440Z" level=info msg="shim disconnected" id=433fd9d8b8662822e6c9e5d339bf8febff973a77eb159b334ddb2e0c479030b7 Dec 13 14:07:38.247077 env[1209]: time="2024-12-13T14:07:38.247078641Z" level=warning msg="cleaning up after shim disconnected" id=433fd9d8b8662822e6c9e5d339bf8febff973a77eb159b334ddb2e0c479030b7 namespace=k8s.io Dec 13 14:07:38.247334 env[1209]: time="2024-12-13T14:07:38.247089561Z" level=info msg="cleaning up dead shim" Dec 13 14:07:38.253579 env[1209]: time="2024-12-13T14:07:38.253539408Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:07:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3232 runtime=io.containerd.runc.v2\n" Dec 13 14:07:38.848956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-433fd9d8b8662822e6c9e5d339bf8febff973a77eb159b334ddb2e0c479030b7-rootfs.mount: Deactivated successfully. 
Dec 13 14:07:38.895551 kubelet[1416]: E1213 14:07:38.895505 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:39.158579 kubelet[1416]: E1213 14:07:39.158362 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:39.160122 env[1209]: time="2024-12-13T14:07:39.160060698Z" level=info msg="CreateContainer within sandbox \"03421035dce605fa64f8900dc574f36555106a9a5f1d8d2487e900e3dddb88f4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:07:39.173571 env[1209]: time="2024-12-13T14:07:39.173520915Z" level=info msg="CreateContainer within sandbox \"03421035dce605fa64f8900dc574f36555106a9a5f1d8d2487e900e3dddb88f4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ed1afd6facdf7690587e9180ebea62c7b028bdb8f739d99054f5a3733a198c19\"" Dec 13 14:07:39.174170 env[1209]: time="2024-12-13T14:07:39.174130690Z" level=info msg="StartContainer for \"ed1afd6facdf7690587e9180ebea62c7b028bdb8f739d99054f5a3733a198c19\"" Dec 13 14:07:39.189818 systemd[1]: Started cri-containerd-ed1afd6facdf7690587e9180ebea62c7b028bdb8f739d99054f5a3733a198c19.scope. Dec 13 14:07:39.217288 systemd[1]: cri-containerd-ed1afd6facdf7690587e9180ebea62c7b028bdb8f739d99054f5a3733a198c19.scope: Deactivated successfully. 
Dec 13 14:07:39.218057 env[1209]: time="2024-12-13T14:07:39.218018306Z" level=info msg="StartContainer for \"ed1afd6facdf7690587e9180ebea62c7b028bdb8f739d99054f5a3733a198c19\" returns successfully" Dec 13 14:07:39.236316 env[1209]: time="2024-12-13T14:07:39.236269763Z" level=info msg="shim disconnected" id=ed1afd6facdf7690587e9180ebea62c7b028bdb8f739d99054f5a3733a198c19 Dec 13 14:07:39.236316 env[1209]: time="2024-12-13T14:07:39.236317524Z" level=warning msg="cleaning up after shim disconnected" id=ed1afd6facdf7690587e9180ebea62c7b028bdb8f739d99054f5a3733a198c19 namespace=k8s.io Dec 13 14:07:39.236558 env[1209]: time="2024-12-13T14:07:39.236327324Z" level=info msg="cleaning up dead shim" Dec 13 14:07:39.242728 env[1209]: time="2024-12-13T14:07:39.242694443Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:07:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3288 runtime=io.containerd.runc.v2\n" Dec 13 14:07:39.849016 systemd[1]: run-containerd-runc-k8s.io-ed1afd6facdf7690587e9180ebea62c7b028bdb8f739d99054f5a3733a198c19-runc.DbFUeH.mount: Deactivated successfully. Dec 13 14:07:39.849109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed1afd6facdf7690587e9180ebea62c7b028bdb8f739d99054f5a3733a198c19-rootfs.mount: Deactivated successfully. 
Dec 13 14:07:39.857239 kubelet[1416]: E1213 14:07:39.857167 1416 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:39.869835 env[1209]: time="2024-12-13T14:07:39.869788674Z" level=info msg="StopPodSandbox for \"d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858\"" Dec 13 14:07:39.869935 env[1209]: time="2024-12-13T14:07:39.869873796Z" level=info msg="TearDown network for sandbox \"d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858\" successfully" Dec 13 14:07:39.869935 env[1209]: time="2024-12-13T14:07:39.869907077Z" level=info msg="StopPodSandbox for \"d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858\" returns successfully" Dec 13 14:07:39.870272 env[1209]: time="2024-12-13T14:07:39.870244725Z" level=info msg="RemovePodSandbox for \"d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858\"" Dec 13 14:07:39.870313 env[1209]: time="2024-12-13T14:07:39.870276206Z" level=info msg="Forcibly stopping sandbox \"d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858\"" Dec 13 14:07:39.870346 env[1209]: time="2024-12-13T14:07:39.870331327Z" level=info msg="TearDown network for sandbox \"d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858\" successfully" Dec 13 14:07:39.872866 env[1209]: time="2024-12-13T14:07:39.872827430Z" level=info msg="RemovePodSandbox \"d6c6bfe627a138ab23e1d103338e41d9330c22e41bc5173b8e6f2f614f324858\" returns successfully" Dec 13 14:07:39.896503 kubelet[1416]: E1213 14:07:39.896457 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:39.995045 kubelet[1416]: E1213 14:07:39.995009 1416 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:07:40.147025 env[1209]: 
time="2024-12-13T14:07:40.146637204Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:40.147958 env[1209]: time="2024-12-13T14:07:40.147927995Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:40.149546 env[1209]: time="2024-12-13T14:07:40.149522434Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:40.150162 env[1209]: time="2024-12-13T14:07:40.150135369Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 14:07:40.152178 env[1209]: time="2024-12-13T14:07:40.152143218Z" level=info msg="CreateContainer within sandbox \"c3204c40d9334972c787abe78df55af0e41fb8f835238275057f9a751a932a37\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:07:40.163740 env[1209]: time="2024-12-13T14:07:40.163696298Z" level=info msg="CreateContainer within sandbox \"c3204c40d9334972c787abe78df55af0e41fb8f835238275057f9a751a932a37\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"879fb065ab7417ecb4993f22839084ccf6f0571e4c608ee17dbbf3cb0566aaf4\"" Dec 13 14:07:40.164176 env[1209]: time="2024-12-13T14:07:40.164141029Z" level=info msg="StartContainer for \"879fb065ab7417ecb4993f22839084ccf6f0571e4c608ee17dbbf3cb0566aaf4\"" Dec 13 14:07:40.164983 kubelet[1416]: E1213 
14:07:40.164821 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:40.166670 env[1209]: time="2024-12-13T14:07:40.166623689Z" level=info msg="CreateContainer within sandbox \"03421035dce605fa64f8900dc574f36555106a9a5f1d8d2487e900e3dddb88f4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:07:40.180825 env[1209]: time="2024-12-13T14:07:40.180777192Z" level=info msg="CreateContainer within sandbox \"03421035dce605fa64f8900dc574f36555106a9a5f1d8d2487e900e3dddb88f4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f16156d79b77e10d812679cc1d588d74dea5e26334c9f4113e11964d9e8e19ad\"" Dec 13 14:07:40.181279 env[1209]: time="2024-12-13T14:07:40.181241203Z" level=info msg="StartContainer for \"f16156d79b77e10d812679cc1d588d74dea5e26334c9f4113e11964d9e8e19ad\"" Dec 13 14:07:40.186662 systemd[1]: Started cri-containerd-879fb065ab7417ecb4993f22839084ccf6f0571e4c608ee17dbbf3cb0566aaf4.scope. Dec 13 14:07:40.204984 systemd[1]: Started cri-containerd-f16156d79b77e10d812679cc1d588d74dea5e26334c9f4113e11964d9e8e19ad.scope. 
Dec 13 14:07:40.270580 env[1209]: time="2024-12-13T14:07:40.270510528Z" level=info msg="StartContainer for \"879fb065ab7417ecb4993f22839084ccf6f0571e4c608ee17dbbf3cb0566aaf4\" returns successfully" Dec 13 14:07:40.271834 env[1209]: time="2024-12-13T14:07:40.271787119Z" level=info msg="StartContainer for \"f16156d79b77e10d812679cc1d588d74dea5e26334c9f4113e11964d9e8e19ad\" returns successfully" Dec 13 14:07:40.494221 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Dec 13 14:07:40.897637 kubelet[1416]: E1213 14:07:40.897584 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:41.103813 kubelet[1416]: I1213 14:07:41.103780 1416 setters.go:568] "Node became not ready" node="10.0.0.69" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:07:41Z","lastTransitionTime":"2024-12-13T14:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:07:41.167878 kubelet[1416]: E1213 14:07:41.167773 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:41.171439 kubelet[1416]: E1213 14:07:41.171416 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:41.189742 kubelet[1416]: I1213 14:07:41.189705 1416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-qqstb" podStartSLOduration=5.189668051 podStartE2EDuration="5.189668051s" podCreationTimestamp="2024-12-13 14:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-12-13 14:07:41.189005155 +0000 UTC m=+62.262198284" watchObservedRunningTime="2024-12-13 14:07:41.189668051 +0000 UTC m=+62.262861180" Dec 13 14:07:41.190014 kubelet[1416]: I1213 14:07:41.189986 1416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-vzwdn" podStartSLOduration=2.105158071 podStartE2EDuration="7.189964538s" podCreationTimestamp="2024-12-13 14:07:34 +0000 UTC" firstStartedPulling="2024-12-13 14:07:35.066028119 +0000 UTC m=+56.139221248" lastFinishedPulling="2024-12-13 14:07:40.150834586 +0000 UTC m=+61.224027715" observedRunningTime="2024-12-13 14:07:41.175441076 +0000 UTC m=+62.248634165" watchObservedRunningTime="2024-12-13 14:07:41.189964538 +0000 UTC m=+62.263157667" Dec 13 14:07:41.897997 kubelet[1416]: E1213 14:07:41.897943 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:42.176416 kubelet[1416]: E1213 14:07:42.176308 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:42.496996 kubelet[1416]: E1213 14:07:42.496859 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:42.898848 kubelet[1416]: E1213 14:07:42.898811 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:43.140341 systemd[1]: run-containerd-runc-k8s.io-f16156d79b77e10d812679cc1d588d74dea5e26334c9f4113e11964d9e8e19ad-runc.ZsUyDN.mount: Deactivated successfully. 
Dec 13 14:07:43.246751 systemd-networkd[1040]: lxc_health: Link UP Dec 13 14:07:43.263222 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:07:43.263398 systemd-networkd[1040]: lxc_health: Gained carrier Dec 13 14:07:43.899343 kubelet[1416]: E1213 14:07:43.899291 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:44.497385 kubelet[1416]: E1213 14:07:44.497351 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:44.899389 kubelet[1416]: E1213 14:07:44.899360 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:45.094350 systemd-networkd[1040]: lxc_health: Gained IPv6LL Dec 13 14:07:45.181672 kubelet[1416]: E1213 14:07:45.181587 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:45.276913 systemd[1]: run-containerd-runc-k8s.io-f16156d79b77e10d812679cc1d588d74dea5e26334c9f4113e11964d9e8e19ad-runc.oWIa5O.mount: Deactivated successfully. 
Dec 13 14:07:45.900011 kubelet[1416]: E1213 14:07:45.899951 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:46.182989 kubelet[1416]: E1213 14:07:46.182695 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:46.900262 kubelet[1416]: E1213 14:07:46.900215 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:47.401480 systemd[1]: run-containerd-runc-k8s.io-f16156d79b77e10d812679cc1d588d74dea5e26334c9f4113e11964d9e8e19ad-runc.2EGee2.mount: Deactivated successfully. Dec 13 14:07:47.901097 kubelet[1416]: E1213 14:07:47.901056 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:48.901727 kubelet[1416]: E1213 14:07:48.901669 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:49.523069 systemd[1]: run-containerd-runc-k8s.io-f16156d79b77e10d812679cc1d588d74dea5e26334c9f4113e11964d9e8e19ad-runc.itWJiD.mount: Deactivated successfully. Dec 13 14:07:49.902159 kubelet[1416]: E1213 14:07:49.902108 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:07:50.902846 kubelet[1416]: E1213 14:07:50.902795 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"