Dec 13 14:05:35.728870 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 13 14:05:35.728890 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Dec 13 12:58:58 -00 2024 Dec 13 14:05:35.728898 kernel: efi: EFI v2.70 by EDK II Dec 13 14:05:35.728903 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Dec 13 14:05:35.728909 kernel: random: crng init done Dec 13 14:05:35.728914 kernel: ACPI: Early table checksum verification disabled Dec 13 14:05:35.728921 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Dec 13 14:05:35.728927 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Dec 13 14:05:35.728933 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:05:35.728939 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:05:35.728944 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:05:35.728950 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:05:35.728956 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:05:35.728961 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:05:35.728969 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:05:35.728975 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:05:35.728981 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:05:35.728987 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Dec 13 14:05:35.728999 kernel: NUMA: Failed to initialise from firmware Dec 13 14:05:35.729008 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Dec 13 14:05:35.729014 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff] Dec 13 14:05:35.729020 kernel: Zone ranges: Dec 13 14:05:35.729026 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Dec 13 14:05:35.729032 kernel: DMA32 empty Dec 13 14:05:35.729038 kernel: Normal empty Dec 13 14:05:35.729043 kernel: Movable zone start for each node Dec 13 14:05:35.729049 kernel: Early memory node ranges Dec 13 14:05:35.729070 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Dec 13 14:05:35.729077 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Dec 13 14:05:35.729090 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Dec 13 14:05:35.729096 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Dec 13 14:05:35.729102 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Dec 13 14:05:35.729107 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Dec 13 14:05:35.729113 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Dec 13 14:05:35.729119 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Dec 13 14:05:35.729127 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Dec 13 14:05:35.729132 kernel: psci: probing for conduit method from ACPI. Dec 13 14:05:35.729138 kernel: psci: PSCIv1.1 detected in firmware. 
Dec 13 14:05:35.729143 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 14:05:35.729149 kernel: psci: Trusted OS migration not required Dec 13 14:05:35.729160 kernel: psci: SMC Calling Convention v1.1 Dec 13 14:05:35.729166 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Dec 13 14:05:35.729173 kernel: ACPI: SRAT not present Dec 13 14:05:35.729180 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 Dec 13 14:05:35.729186 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 Dec 13 14:05:35.729192 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Dec 13 14:05:35.729198 kernel: Detected PIPT I-cache on CPU0 Dec 13 14:05:35.729207 kernel: CPU features: detected: GIC system register CPU interface Dec 13 14:05:35.729213 kernel: CPU features: detected: Hardware dirty bit management Dec 13 14:05:35.729219 kernel: CPU features: detected: Spectre-v4 Dec 13 14:05:35.729225 kernel: CPU features: detected: Spectre-BHB Dec 13 14:05:35.729233 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 13 14:05:35.729239 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 13 14:05:35.729247 kernel: CPU features: detected: ARM erratum 1418040 Dec 13 14:05:35.729253 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 13 14:05:35.729260 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Dec 13 14:05:35.729266 kernel: Policy zone: DMA Dec 13 14:05:35.729273 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601 Dec 13 14:05:35.729282 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 14:05:35.729288 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 14:05:35.729294 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 14:05:35.729301 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 14:05:35.729310 kernel: Memory: 2457404K/2572288K available (9792K kernel code, 2092K rwdata, 7576K rodata, 36416K init, 777K bss, 114884K reserved, 0K cma-reserved) Dec 13 14:05:35.729318 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 14:05:35.729324 kernel: trace event string verifier disabled Dec 13 14:05:35.729331 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 14:05:35.729337 kernel: rcu: RCU event tracing is enabled. Dec 13 14:05:35.729343 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 14:05:35.729349 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 14:05:35.729356 kernel: Tracing variant of Tasks RCU enabled. Dec 13 14:05:35.729362 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 14:05:35.729368 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 14:05:35.729374 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 14:05:35.729382 kernel: GICv3: 256 SPIs implemented Dec 13 14:05:35.729390 kernel: GICv3: 0 Extended SPIs implemented Dec 13 14:05:35.729397 kernel: GICv3: Distributor has no Range Selector support Dec 13 14:05:35.729402 kernel: Root IRQ handler: gic_handle_irq Dec 13 14:05:35.729414 kernel: GICv3: 16 PPIs implemented Dec 13 14:05:35.729420 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Dec 13 14:05:35.729426 kernel: ACPI: SRAT not present Dec 13 14:05:35.729432 kernel: ITS [mem 0x08080000-0x0809ffff] Dec 13 14:05:35.729438 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 14:05:35.729444 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Dec 13 14:05:35.729451 kernel: GICv3: using LPI property table @0x00000000400d0000 Dec 13 14:05:35.729457 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Dec 13 14:05:35.729465 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:05:35.729471 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 13 14:05:35.729479 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 14:05:35.729486 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 14:05:35.729492 kernel: arm-pv: using stolen time PV Dec 13 14:05:35.729498 kernel: Console: colour dummy device 80x25 Dec 13 14:05:35.729504 kernel: ACPI: Core revision 20210730 Dec 13 14:05:35.729511 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 14:05:35.729517 kernel: pid_max: default: 32768 minimum: 301 Dec 13 14:05:35.729523 kernel: LSM: Security Framework initializing Dec 13 14:05:35.729532 kernel: SELinux: Initializing. Dec 13 14:05:35.729540 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 14:05:35.729562 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 14:05:35.729569 kernel: rcu: Hierarchical SRCU implementation. Dec 13 14:05:35.729575 kernel: Platform MSI: ITS@0x8080000 domain created Dec 13 14:05:35.729582 kernel: PCI/MSI: ITS@0x8080000 domain created Dec 13 14:05:35.729588 kernel: Remapping and enabling EFI services. Dec 13 14:05:35.729594 kernel: smp: Bringing up secondary CPUs ... 
Dec 13 14:05:35.729603 kernel: Detected PIPT I-cache on CPU1 Dec 13 14:05:35.729611 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Dec 13 14:05:35.729617 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Dec 13 14:05:35.729624 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:05:35.729630 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 13 14:05:35.729636 kernel: Detected PIPT I-cache on CPU2 Dec 13 14:05:35.729642 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Dec 13 14:05:35.729649 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Dec 13 14:05:35.729655 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:05:35.729661 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Dec 13 14:05:35.729667 kernel: Detected PIPT I-cache on CPU3 Dec 13 14:05:35.729674 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Dec 13 14:05:35.729681 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Dec 13 14:05:35.729687 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:05:35.729694 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Dec 13 14:05:35.729704 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 14:05:35.729712 kernel: SMP: Total of 4 processors activated. Dec 13 14:05:35.729718 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 14:05:35.729725 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 14:05:35.729731 kernel: CPU features: detected: Common not Private translations Dec 13 14:05:35.729738 kernel: CPU features: detected: CRC32 instructions Dec 13 14:05:35.729744 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 14:05:35.729750 kernel: CPU features: detected: LSE atomic instructions Dec 13 14:05:35.729758 kernel: CPU features: detected: Privileged Access Never Dec 13 14:05:35.729765 kernel: CPU features: detected: RAS Extension Support Dec 13 14:05:35.729772 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 13 14:05:35.729778 kernel: CPU: All CPU(s) started at EL1 Dec 13 14:05:35.729785 kernel: alternatives: patching kernel code Dec 13 14:05:35.729792 kernel: devtmpfs: initialized Dec 13 14:05:35.729799 kernel: KASLR enabled Dec 13 14:05:35.729805 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 14:05:35.729812 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 14:05:35.729818 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 14:05:35.729825 kernel: SMBIOS 3.0.0 present. 
Dec 13 14:05:35.729831 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Dec 13 14:05:35.729838 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 14:05:35.729845 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 14:05:35.729852 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 14:05:35.729859 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 14:05:35.729866 kernel: audit: initializing netlink subsys (disabled) Dec 13 14:05:35.729872 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1 Dec 13 14:05:35.729879 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 14:05:35.729885 kernel: cpuidle: using governor menu Dec 13 14:05:35.729892 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 13 14:05:35.729898 kernel: ASID allocator initialised with 32768 entries Dec 13 14:05:35.729905 kernel: ACPI: bus type PCI registered Dec 13 14:05:35.729912 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 14:05:35.729919 kernel: Serial: AMBA PL011 UART driver Dec 13 14:05:35.729925 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 14:05:35.729932 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 14:05:35.729938 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 14:05:35.729945 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 14:05:35.729951 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:05:35.729958 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 14:05:35.729964 kernel: ACPI: Added _OSI(Module Device) Dec 13 14:05:35.729972 kernel: ACPI: Added _OSI(Processor Device) Dec 13 14:05:35.729979 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 14:05:35.729985 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 14:05:35.729992 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 14:05:35.729998 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 14:05:35.730005 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 14:05:35.730011 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 14:05:35.730018 kernel: ACPI: Interpreter enabled Dec 13 14:05:35.730024 kernel: ACPI: Using GIC for interrupt routing Dec 13 14:05:35.730031 kernel: ACPI: MCFG table detected, 1 entries Dec 13 14:05:35.730038 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Dec 13 14:05:35.730045 kernel: printk: console [ttyAMA0] enabled Dec 13 14:05:35.730051 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 14:05:35.730172 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 14:05:35.730234 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 13 14:05:35.730296 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 13 14:05:35.730360 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Dec 13 14:05:35.730422 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Dec 13 14:05:35.730432 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Dec 13 14:05:35.730438 kernel: PCI host bridge to bus 0000:00 Dec 13 14:05:35.730502 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Dec 13 14:05:35.730555 kernel: pci_bus 
0000:00: root bus resource [io 0x0000-0xffff window] Dec 13 14:05:35.730607 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Dec 13 14:05:35.730657 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 14:05:35.730733 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Dec 13 14:05:35.730800 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 14:05:35.730860 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Dec 13 14:05:35.730918 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Dec 13 14:05:35.730976 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Dec 13 14:05:35.731038 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Dec 13 14:05:35.731136 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Dec 13 14:05:35.733597 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Dec 13 14:05:35.733657 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 13 14:05:35.733708 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 13 14:05:35.733760 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 13 14:05:35.733769 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 13 14:05:35.733776 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 13 14:05:35.733783 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 13 14:05:35.733794 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 13 14:05:35.733801 kernel: iommu: Default domain type: Translated Dec 13 14:05:35.733808 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 14:05:35.733814 kernel: vgaarb: loaded Dec 13 14:05:35.733821 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 14:05:35.733828 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 14:05:35.733834 kernel: PTP clock support registered Dec 13 14:05:35.733841 kernel: Registered efivars operations Dec 13 14:05:35.733847 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 14:05:35.733855 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 14:05:35.733862 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 14:05:35.733869 kernel: pnp: PnP ACPI init Dec 13 14:05:35.733936 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 13 14:05:35.733946 kernel: pnp: PnP ACPI: found 1 devices Dec 13 14:05:35.733953 kernel: NET: Registered PF_INET protocol family Dec 13 14:05:35.733959 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 14:05:35.733966 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 14:05:35.733974 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 14:05:35.733981 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 14:05:35.733987 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Dec 13 14:05:35.733994 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 14:05:35.734001 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 14:05:35.734007 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 14:05:35.734014 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 14:05:35.734020 kernel: PCI: CLS 0 bytes, default 64 Dec 13 14:05:35.734027 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Dec 13 14:05:35.734035 kernel: kvm [1]: HYP mode not available Dec 13 14:05:35.734041 kernel: Initialise system trusted keyrings Dec 13 14:05:35.734048 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 14:05:35.734065 kernel: Key type asymmetric registered Dec 13 14:05:35.734072 kernel: Asymmetric key parser 'x509' registered Dec 13 14:05:35.734079 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 14:05:35.734086 kernel: io scheduler mq-deadline registered Dec 13 14:05:35.734092 kernel: io scheduler kyber registered Dec 13 14:05:35.734099 kernel: io scheduler bfq registered Dec 13 14:05:35.734107 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 14:05:35.734114 kernel: ACPI: button: Power Button [PWRB] Dec 13 14:05:35.734121 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 13 14:05:35.734182 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Dec 13 14:05:35.734192 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:05:35.734198 kernel: thunder_xcv, ver 1.0 Dec 13 14:05:35.734205 kernel: thunder_bgx, ver 1.0 Dec 13 14:05:35.734211 kernel: nicpf, ver 1.0 Dec 13 14:05:35.734217 kernel: nicvf, ver 1.0 Dec 13 14:05:35.734285 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 14:05:35.734340 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T14:05:35 UTC (1734098735) Dec 13 14:05:35.734349 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 14:05:35.734355 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:05:35.734362 kernel: Segment Routing with IPv6 Dec 13 14:05:35.734369 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:05:35.734375 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:05:35.734382 kernel: Key type 
dns_resolver registered Dec 13 14:05:35.734390 kernel: registered taskstats version 1 Dec 13 14:05:35.734397 kernel: Loading compiled-in X.509 certificates Dec 13 14:05:35.734403 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e011ba9949ade5a6d03f7a5e28171f7f59e70f8a' Dec 13 14:05:35.734419 kernel: Key type .fscrypt registered Dec 13 14:05:35.734425 kernel: Key type fscrypt-provisioning registered Dec 13 14:05:35.734432 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 14:05:35.734438 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:05:35.734445 kernel: ima: No architecture policies found Dec 13 14:05:35.734451 kernel: clk: Disabling unused clocks Dec 13 14:05:35.734460 kernel: Freeing unused kernel memory: 36416K Dec 13 14:05:35.734466 kernel: Run /init as init process Dec 13 14:05:35.734473 kernel: with arguments: Dec 13 14:05:35.734479 kernel: /init Dec 13 14:05:35.734485 kernel: with environment: Dec 13 14:05:35.734492 kernel: HOME=/ Dec 13 14:05:35.734498 kernel: TERM=linux Dec 13 14:05:35.734505 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:05:35.734513 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:05:35.734523 systemd[1]: Detected virtualization kvm. Dec 13 14:05:35.734530 systemd[1]: Detected architecture arm64. Dec 13 14:05:35.734537 systemd[1]: Running in initrd. Dec 13 14:05:35.734544 systemd[1]: No hostname configured, using default hostname. Dec 13 14:05:35.734551 systemd[1]: Hostname set to <localhost>. Dec 13 14:05:35.734558 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:05:35.734565 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:05:35.734573 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:05:35.734580 systemd[1]: Reached target cryptsetup.target. Dec 13 14:05:35.734587 systemd[1]: Reached target paths.target. Dec 13 14:05:35.734594 systemd[1]: Reached target slices.target. Dec 13 14:05:35.734601 systemd[1]: Reached target swap.target. Dec 13 14:05:35.734608 systemd[1]: Reached target timers.target. Dec 13 14:05:35.734615 systemd[1]: Listening on iscsid.socket. Dec 13 14:05:35.734624 systemd[1]: Listening on iscsiuio.socket. Dec 13 14:05:35.734631 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:05:35.734638 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:05:35.734645 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:05:35.734652 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:05:35.734659 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:05:35.734666 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:05:35.734673 systemd[1]: Reached target sockets.target. Dec 13 14:05:35.734680 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:05:35.734688 systemd[1]: Finished network-cleanup.service. Dec 13 14:05:35.734695 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:05:35.734702 systemd[1]: Starting systemd-journald.service... Dec 13 14:05:35.734709 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:05:35.734716 systemd[1]: Starting systemd-resolved.service... Dec 13 14:05:35.734723 systemd[1]: Starting systemd-vconsole-setup.service... 
Dec 13 14:05:35.734730 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:05:35.734737 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 14:05:35.734744 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:05:35.734753 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 14:05:35.734760 kernel: audit: type=1130 audit(1734098735.731:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:35.734767 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 14:05:35.734777 systemd-journald[289]: Journal started Dec 13 14:05:35.734819 systemd-journald[289]: Runtime Journal (/run/log/journal/aa15876634f447f29713f494f92485fe) is 6.0M, max 48.7M, 42.6M free. Dec 13 14:05:35.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:35.724552 systemd-modules-load[290]: Inserted module 'overlay' Dec 13 14:05:35.736418 systemd[1]: Started systemd-journald.service. Dec 13 14:05:35.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:35.739249 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:05:35.740277 kernel: audit: type=1130 audit(1734098735.737:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:35.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:35.744228 kernel: audit: type=1130 audit(1734098735.740:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:35.751090 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 14:05:35.754217 systemd-resolved[291]: Positive Trust Anchors: Dec 13 14:05:35.755208 kernel: Bridge firewalling registered Dec 13 14:05:35.754231 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:05:35.754260 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:05:35.765193 kernel: audit: type=1130 audit(1734098735.757:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:05:35.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:35.754534 systemd-modules-load[290]: Inserted module 'br_netfilter' Dec 13 14:05:35.755346 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 14:05:35.757869 systemd[1]: Starting dracut-cmdline.service... Dec 13 14:05:35.768728 kernel: SCSI subsystem initialized Dec 13 14:05:35.758370 systemd-resolved[291]: Defaulting to hostname 'linux'. Dec 13 14:05:35.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:35.768227 systemd[1]: Started systemd-resolved.service. Dec 13 14:05:35.773364 kernel: audit: type=1130 audit(1734098735.769:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:35.772372 systemd[1]: Reached target nss-lookup.target. Dec 13 14:05:35.776879 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:05:35.776895 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:05:35.776909 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:05:35.776918 dracut-cmdline[307]: dracut-dracut-053 Dec 13 14:05:35.778756 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601 Dec 13 14:05:35.783095 systemd-modules-load[290]: Inserted module 'dm_multipath' Dec 13 14:05:35.783828 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:05:35.785351 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:05:35.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:35.789071 kernel: audit: type=1130 audit(1734098735.784:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:35.791852 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:05:35.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:35.796092 kernel: audit: type=1130 audit(1734098735.792:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:35.843082 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:05:35.855089 kernel: iscsi: registered transport (tcp) Dec 13 14:05:35.872070 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:05:35.872089 kernel: QLogic iSCSI HBA Driver Dec 13 14:05:35.906825 systemd[1]: Finished dracut-cmdline.service. 
Dec 13 14:05:35.908454 systemd[1]: Starting dracut-pre-udev.service... Dec 13 14:05:35.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:35.912122 kernel: audit: type=1130 audit(1734098735.907:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:35.954083 kernel: raid6: neonx8 gen() 13809 MB/s Dec 13 14:05:35.971073 kernel: raid6: neonx8 xor() 10825 MB/s Dec 13 14:05:35.988074 kernel: raid6: neonx4 gen() 13532 MB/s Dec 13 14:05:36.005078 kernel: raid6: neonx4 xor() 11143 MB/s Dec 13 14:05:36.022073 kernel: raid6: neonx2 gen() 12953 MB/s Dec 13 14:05:36.039072 kernel: raid6: neonx2 xor() 10464 MB/s Dec 13 14:05:36.056078 kernel: raid6: neonx1 gen() 10513 MB/s Dec 13 14:05:36.073084 kernel: raid6: neonx1 xor() 8762 MB/s Dec 13 14:05:36.090069 kernel: raid6: int64x8 gen() 6260 MB/s Dec 13 14:05:36.107086 kernel: raid6: int64x8 xor() 3540 MB/s Dec 13 14:05:36.124089 kernel: raid6: int64x4 gen() 7226 MB/s Dec 13 14:05:36.141088 kernel: raid6: int64x4 xor() 3853 MB/s Dec 13 14:05:36.158080 kernel: raid6: int64x2 gen() 6147 MB/s Dec 13 14:05:36.175088 kernel: raid6: int64x2 xor() 3315 MB/s Dec 13 14:05:36.192083 kernel: raid6: int64x1 gen() 5041 MB/s Dec 13 14:05:36.209144 kernel: raid6: int64x1 xor() 2644 MB/s Dec 13 14:05:36.209155 kernel: raid6: using algorithm neonx8 gen() 13809 MB/s Dec 13 14:05:36.209163 kernel: raid6: .... xor() 10825 MB/s, rmw enabled Dec 13 14:05:36.210211 kernel: raid6: using neon recovery algorithm Dec 13 14:05:36.221427 kernel: xor: measuring software checksum speed Dec 13 14:05:36.221441 kernel: 8regs : 17209 MB/sec Dec 13 14:05:36.221449 kernel: 32regs : 20697 MB/sec Dec 13 14:05:36.222644 kernel: arm64_neon : 27710 MB/sec Dec 13 14:05:36.222658 kernel: xor: using function: arm64_neon (27710 MB/sec) Dec 13 14:05:36.277081 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Dec 13 14:05:36.286992 systemd[1]: Finished dracut-pre-udev.service. Dec 13 14:05:36.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:36.290000 audit: BPF prog-id=7 op=LOAD Dec 13 14:05:36.290000 audit: BPF prog-id=8 op=LOAD Dec 13 14:05:36.291081 kernel: audit: type=1130 audit(1734098736.287:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:36.291214 systemd[1]: Starting systemd-udevd.service... Dec 13 14:05:36.303431 systemd-udevd[489]: Using default interface naming scheme 'v252'. Dec 13 14:05:36.306908 systemd[1]: Started systemd-udevd.service. Dec 13 14:05:36.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:36.310919 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:05:36.321870 dracut-pre-trigger[503]: rd.md=0: removing MD RAID activation Dec 13 14:05:36.347462 systemd[1]: Finished dracut-pre-trigger.service. 
Dec 13 14:05:36.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:36.349026 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:05:36.382902 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:05:36.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:36.412469 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 14:05:36.418012 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:05:36.418026 kernel: GPT:9289727 != 19775487 Dec 13 14:05:36.418035 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:05:36.418044 kernel: GPT:9289727 != 19775487 Dec 13 14:05:36.418052 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 14:05:36.418077 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:05:36.432090 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (545) Dec 13 14:05:36.432799 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:05:36.441244 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:05:36.442219 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:05:36.446416 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:05:36.449865 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:05:36.451580 systemd[1]: Starting disk-uuid.service... Dec 13 14:05:36.457312 disk-uuid[559]: Primary Header is updated. Dec 13 14:05:36.457312 disk-uuid[559]: Secondary Entries is updated. Dec 13 14:05:36.457312 disk-uuid[559]: Secondary Header is updated. Dec 13 14:05:36.460440 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:05:37.472911 disk-uuid[560]: The operation has completed successfully. Dec 13 14:05:37.474687 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:05:37.497240 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:05:37.498291 systemd[1]: Finished disk-uuid.service. Dec 13 14:05:37.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.501819 systemd[1]: Starting verity-setup.service... Dec 13 14:05:37.516079 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 14:05:37.537046 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:05:37.539168 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:05:37.541142 systemd[1]: Finished verity-setup.service. Dec 13 14:05:37.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.587079 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:05:37.587441 systemd[1]: Mounted sysusr-usr.mount. 
Dec 13 14:05:37.588226 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:05:37.588885 systemd[1]: Starting ignition-setup.service... Dec 13 14:05:37.591140 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:05:37.597215 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:05:37.597245 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:05:37.597255 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:05:37.605166 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:05:37.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.612132 systemd[1]: Finished ignition-setup.service. Dec 13 14:05:37.613638 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:05:37.665892 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:05:37.668019 systemd[1]: Starting systemd-networkd.service... Dec 13 14:05:37.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.667000 audit: BPF prog-id=9 op=LOAD Dec 13 14:05:37.696655 systemd-networkd[737]: lo: Link UP Dec 13 14:05:37.696662 systemd-networkd[737]: lo: Gained carrier Dec 13 14:05:37.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.697039 systemd-networkd[737]: Enumeration completed Dec 13 14:05:37.697147 systemd[1]: Started systemd-networkd.service. Dec 13 14:05:37.699268 ignition[652]: Ignition 2.14.0 Dec 13 14:05:37.697235 systemd-networkd[737]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:05:37.699275 ignition[652]: Stage: fetch-offline Dec 13 14:05:37.698231 systemd-networkd[737]: eth0: Link UP Dec 13 14:05:37.699312 ignition[652]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:05:37.698234 systemd-networkd[737]: eth0: Gained carrier Dec 13 14:05:37.699321 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:05:37.698427 systemd[1]: Reached target network.target. Dec 13 14:05:37.699460 ignition[652]: parsed url from cmdline: "" Dec 13 14:05:37.700441 systemd[1]: Starting iscsiuio.service... Dec 13 14:05:37.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.699463 ignition[652]: no config URL provided Dec 13 14:05:37.709276 systemd[1]: Started iscsiuio.service. Dec 13 14:05:37.699468 ignition[652]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:05:37.710963 systemd[1]: Starting iscsid.service... Dec 13 14:05:37.714751 iscsid[743]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:05:37.714751 iscsid[743]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 14:05:37.714751 iscsid[743]: into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:05:37.714751 iscsid[743]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:05:37.714751 iscsid[743]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:05:37.714751 iscsid[743]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:05:37.714751 iscsid[743]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:05:37.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.699475 ignition[652]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:05:37.714139 systemd-networkd[737]: eth0: DHCPv4 address 10.0.0.68/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 14:05:37.699492 ignition[652]: op(1): [started] loading QEMU firmware config module Dec 13 14:05:37.716913 systemd[1]: Started iscsid.service. Dec 13 14:05:37.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.699497 ignition[652]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 14:05:37.721226 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:05:37.706549 ignition[652]: op(1): [finished] loading QEMU firmware config module Dec 13 14:05:37.730893 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:05:37.732887 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:05:37.734209 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:05:37.735817 systemd[1]: Reached target remote-fs.target. Dec 13 14:05:37.738231 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:05:37.745525 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:05:37.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.770831 ignition[652]: parsing config with SHA512: 8bf6a294b2c4b230ca632055ca3a05c22eabbe55d1dafd81a6a139b024ff116239da1e536ea934520a8fb6e28d23fcb1665909ca6d44fc228e9eb380171152b8 Dec 13 14:05:37.781015 unknown[652]: fetched base config from "system" Dec 13 14:05:37.781539 ignition[652]: fetch-offline: fetch-offline passed Dec 13 14:05:37.781027 unknown[652]: fetched user config from "qemu" Dec 13 14:05:37.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.781594 ignition[652]: Ignition finished successfully Dec 13 14:05:37.782629 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:05:37.784201 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 14:05:37.784894 systemd[1]: Starting ignition-kargs.service... Dec 13 14:05:37.793699 ignition[758]: Ignition 2.14.0 Dec 13 14:05:37.793715 ignition[758]: Stage: kargs Dec 13 14:05:37.793804 ignition[758]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:05:37.795899 systemd[1]: Finished ignition-kargs.service. 
Dec 13 14:05:37.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.793814 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:05:37.794640 ignition[758]: kargs: kargs passed Dec 13 14:05:37.798031 systemd[1]: Starting ignition-disks.service... Dec 13 14:05:37.794679 ignition[758]: Ignition finished successfully Dec 13 14:05:37.804595 ignition[764]: Ignition 2.14.0 Dec 13 14:05:37.804605 ignition[764]: Stage: disks Dec 13 14:05:37.804694 ignition[764]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:05:37.804704 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:05:37.805591 ignition[764]: disks: disks passed Dec 13 14:05:37.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.807469 systemd[1]: Finished ignition-disks.service. Dec 13 14:05:37.805631 ignition[764]: Ignition finished successfully Dec 13 14:05:37.809102 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:05:37.810319 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:05:37.811607 systemd[1]: Reached target local-fs.target. Dec 13 14:05:37.812843 systemd[1]: Reached target sysinit.target. Dec 13 14:05:37.814238 systemd[1]: Reached target basic.target. Dec 13 14:05:37.816243 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:05:37.826465 systemd-fsck[772]: ROOT: clean, 621/553520 files, 56020/553472 blocks Dec 13 14:05:37.829708 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:05:37.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.831311 systemd[1]: Mounting sysroot.mount... Dec 13 14:05:37.836926 systemd[1]: Mounted sysroot.mount. Dec 13 14:05:37.838141 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:05:37.837689 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:05:37.840239 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:05:37.841113 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:05:37.841151 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:05:37.841174 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:05:37.842915 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:05:37.844760 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:05:37.849007 initrd-setup-root[782]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:05:37.853353 initrd-setup-root[790]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:05:37.857554 initrd-setup-root[798]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:05:37.861694 initrd-setup-root[806]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:05:37.887368 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:05:37.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:05:37.888927 systemd[1]: Starting ignition-mount.service... Dec 13 14:05:37.890253 systemd[1]: Starting sysroot-boot.service... Dec 13 14:05:37.894315 bash[823]: umount: /sysroot/usr/share/oem: not mounted. Dec 13 14:05:37.901631 ignition[825]: INFO : Ignition 2.14.0 Dec 13 14:05:37.901631 ignition[825]: INFO : Stage: mount Dec 13 14:05:37.903810 ignition[825]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:05:37.903810 ignition[825]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:05:37.903810 ignition[825]: INFO : mount: mount passed Dec 13 14:05:37.903810 ignition[825]: INFO : Ignition finished successfully Dec 13 14:05:37.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.905249 systemd[1]: Finished ignition-mount.service. Dec 13 14:05:37.909109 systemd[1]: Finished sysroot-boot.service. Dec 13 14:05:37.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.548933 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:05:38.555492 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (833) Dec 13 14:05:38.555523 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:05:38.555532 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:05:38.556108 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:05:38.559440 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:05:38.560978 systemd[1]: Starting ignition-files.service... 
Dec 13 14:05:38.573690 ignition[853]: INFO : Ignition 2.14.0 Dec 13 14:05:38.573690 ignition[853]: INFO : Stage: files Dec 13 14:05:38.575229 ignition[853]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:05:38.575229 ignition[853]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:05:38.575229 ignition[853]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:05:38.578819 ignition[853]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:05:38.578819 ignition[853]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:05:38.582137 ignition[853]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:05:38.583533 ignition[853]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:05:38.583533 ignition[853]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:05:38.583533 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 14:05:38.583533 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 14:05:38.582941 unknown[853]: wrote ssh authorized keys file for user: core Dec 13 14:05:38.634635 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 14:05:38.789560 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 14:05:38.791568 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:05:38.791568 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Dec 13 14:05:39.108200 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 14:05:39.165610 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:05:39.167426 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:05:39.167426 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:05:39.167426 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:05:39.167426 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:05:39.167426 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:05:39.167426 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:05:39.167426 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:05:39.167426 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:05:39.167426 ignition[853]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:05:39.167426 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:05:39.167426 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:05:39.167426 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:05:39.167426 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:05:39.167426 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 14:05:39.409158 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 14:05:39.546282 systemd-networkd[737]: eth0: Gained IPv6LL Dec 13 14:05:39.619531 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:05:39.619531 ignition[853]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 14:05:39.623514 ignition[853]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:05:39.623514 ignition[853]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:05:39.623514 ignition[853]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 14:05:39.623514 ignition[853]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Dec 13 14:05:39.623514 ignition[853]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 14:05:39.623514 ignition[853]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 14:05:39.623514 ignition[853]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Dec 13 14:05:39.623514 ignition[853]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:05:39.623514 ignition[853]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:05:39.623514 ignition[853]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 14:05:39.623514 ignition[853]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 14:05:39.652645 ignition[853]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 14:05:39.654191 ignition[853]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 14:05:39.654191 ignition[853]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:05:39.654191 
ignition[853]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:05:39.654191 ignition[853]: INFO : files: files passed Dec 13 14:05:39.654191 ignition[853]: INFO : Ignition finished successfully Dec 13 14:05:39.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.655070 systemd[1]: Finished ignition-files.service. Dec 13 14:05:39.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.657340 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:05:39.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.658893 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:05:39.668690 initrd-setup-root-after-ignition[877]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Dec 13 14:05:39.659563 systemd[1]: Starting ignition-quench.service... Dec 13 14:05:39.671531 initrd-setup-root-after-ignition[880]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:05:39.663050 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:05:39.663145 systemd[1]: Finished ignition-quench.service. Dec 13 14:05:39.664153 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:05:39.665508 systemd[1]: Reached target ignition-complete.target. Dec 13 14:05:39.667656 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:05:39.679359 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:05:39.679457 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:05:39.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.681094 systemd[1]: Reached target initrd-fs.target. Dec 13 14:05:39.682388 systemd[1]: Reached target initrd.target. Dec 13 14:05:39.683705 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:05:39.684429 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:05:39.694270 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:05:39.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.695802 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:05:39.703591 systemd[1]: Stopped target nss-lookup.target. 
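The files-stage operations logged above (fetching the helm and cilium archives, writing the home-directory manifests and update.conf, linking and downloading the kubernetes sysext image, and enabling prepare-helm.service while disabling coreos-metadata.service) are all driven by the Ignition config supplied to this VM. As an illustration only — the actual config and its schema version are not shown in the log — a Python sketch assembling a config of that shape, using Ignition v3-style field names (storage.files, storage.links, systemd.units), could look like this; the paths and URLs are copied from the log, the structure itself is an assumption:

```python
import json

# Illustrative only: an Ignition-v3-style config whose shape would produce
# file/link/unit operations like the ones logged above. The spec version and
# exact field layout are assumptions, not read from this boot's config.
config = {
    "ignition": {"version": "3.3.0"},  # assumed spec version
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"}},
            {"path": "/opt/bin/cilium.tar.gz",
             "contents": {"source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw",
             "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw"}},
            # install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml and
            # /etc/flatcar/update.conf would be declared the same way.
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"},
        ],
    },
    "systemd": {
        "units": [
            # "enabled" maps to the preset-enabled/disabled operations above.
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\n# unit body omitted in this sketch\n"},
            {"name": "coreos-metadata.service", "enabled": False},
        ],
    },
    # The ssh keys added to user "core" would sit under a passwd.users entry,
    # omitted here.
}

print(json.dumps(config, indent=2))
```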
Dec 13 14:05:39.704511 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:05:39.705451 systemd[1]: Stopped target timers.target. Dec 13 14:05:39.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.706736 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:05:39.706848 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:05:39.708241 systemd[1]: Stopped target initrd.target. Dec 13 14:05:39.709356 systemd[1]: Stopped target basic.target. Dec 13 14:05:39.710854 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:05:39.712268 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:05:39.713579 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:05:39.714937 systemd[1]: Stopped target remote-fs.target. Dec 13 14:05:39.716296 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:05:39.717716 systemd[1]: Stopped target sysinit.target. Dec 13 14:05:39.719074 systemd[1]: Stopped target local-fs.target. Dec 13 14:05:39.720322 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:05:39.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.721674 systemd[1]: Stopped target swap.target. Dec 13 14:05:39.722851 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:05:39.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.722968 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:05:39.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.724232 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:05:39.725439 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:05:39.725546 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:05:39.726785 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:05:39.726885 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:05:39.728339 systemd[1]: Stopped target paths.target. Dec 13 14:05:39.729442 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:05:39.734083 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:05:39.735099 systemd[1]: Stopped target slices.target. Dec 13 14:05:39.736628 systemd[1]: Stopped target sockets.target. Dec 13 14:05:39.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.738038 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:05:39.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.738198 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:05:39.743293 iscsid[743]: iscsid shutting down. 
Dec 13 14:05:39.739612 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:05:39.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.739709 systemd[1]: Stopped ignition-files.service. Dec 13 14:05:39.741710 systemd[1]: Stopping ignition-mount.service... Dec 13 14:05:39.742599 systemd[1]: Stopping iscsid.service... Dec 13 14:05:39.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.743698 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:05:39.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.743831 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:05:39.751844 ignition[893]: INFO : Ignition 2.14.0 Dec 13 14:05:39.751844 ignition[893]: INFO : Stage: umount Dec 13 14:05:39.751844 ignition[893]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:05:39.751844 ignition[893]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:05:39.751844 ignition[893]: INFO : umount: umount passed Dec 13 14:05:39.751844 ignition[893]: INFO : Ignition finished successfully Dec 13 14:05:39.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.745900 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:05:39.747278 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:05:39.747420 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:05:39.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.748875 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:05:39.748974 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:05:39.751537 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:05:39.751636 systemd[1]: Stopped iscsid.service. Dec 13 14:05:39.752985 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:05:39.753049 systemd[1]: Closed iscsid.socket. Dec 13 14:05:39.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.753990 systemd[1]: Stopping iscsiuio.service... 
Dec 13 14:05:39.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.756950 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:05:39.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.757406 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:05:39.757498 systemd[1]: Stopped iscsiuio.service. Dec 13 14:05:39.759301 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:05:39.759393 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:05:39.761325 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:05:39.761424 systemd[1]: Stopped ignition-mount.service. Dec 13 14:05:39.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.765797 systemd[1]: Stopped target network.target. Dec 13 14:05:39.767219 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:05:39.767256 systemd[1]: Closed iscsiuio.socket. Dec 13 14:05:39.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.769920 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:05:39.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.769966 systemd[1]: Stopped ignition-disks.service. Dec 13 14:05:39.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.772581 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:05:39.772623 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:05:39.773870 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:05:39.773908 systemd[1]: Stopped ignition-setup.service. Dec 13 14:05:39.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.776662 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:05:39.777766 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:05:39.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.805000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:05:39.781106 systemd-networkd[737]: eth0: DHCPv6 lease lost Dec 13 14:05:39.805000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:05:39.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.782510 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Dec 13 14:05:39.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.782608 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:05:39.785263 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:05:39.785293 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:05:39.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.787919 systemd[1]: Stopping network-cleanup.service... Dec 13 14:05:39.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.789377 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:05:39.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.789452 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:05:39.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.790410 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:05:39.790457 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:05:39.792597 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:05:39.792643 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:05:39.796433 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:05:39.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.798583 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:05:39.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:39.799175 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:05:39.799267 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:05:39.802236 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:05:39.802324 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:05:39.804849 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:05:39.804960 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:05:39.806306 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:05:39.806387 systemd[1]: Stopped network-cleanup.service. Dec 13 14:05:39.807575 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:05:39.807607 systemd[1]: Closed systemd-udevd-control.socket. 
Dec 13 14:05:39.808935 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:05:39.808964 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:05:39.810256 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:05:39.810300 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:05:39.811735 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:05:39.811773 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:05:39.813047 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:05:39.813102 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:05:39.814417 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:05:39.814456 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:05:39.816571 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:05:39.817427 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:05:39.817478 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:05:39.821622 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:05:39.821702 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:05:39.823028 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:05:39.825434 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:05:39.831364 systemd[1]: Switching root. Dec 13 14:05:39.850546 systemd-journald[289]: Journal stopped Dec 13 14:05:41.857479 systemd-journald[289]: Received SIGTERM from PID 1 (n/a). Dec 13 14:05:41.857534 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:05:41.857546 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 14:05:41.857558 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:05:41.857568 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:05:41.857578 kernel: SELinux: policy capability open_perms=1 Dec 13 14:05:41.857589 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:05:41.857602 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:05:41.857612 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:05:41.857625 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:05:41.857634 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:05:41.857643 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:05:41.857653 systemd[1]: Successfully loaded SELinux policy in 33.324ms. Dec 13 14:05:41.857669 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.929ms. Dec 13 14:05:41.857681 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:05:41.857696 systemd[1]: Detected virtualization kvm. Dec 13 14:05:41.857706 systemd[1]: Detected architecture arm64. Dec 13 14:05:41.857716 systemd[1]: Detected first boot. Dec 13 14:05:41.857727 systemd[1]: Initializing machine ID from VM UUID. 
Dec 13 14:05:41.857739 kernel: kauditd_printk_skb: 65 callbacks suppressed Dec 13 14:05:41.857750 kernel: audit: type=1400 audit(1734098740.016:76): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:05:41.857762 kernel: audit: type=1400 audit(1734098740.016:77): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:05:41.857772 kernel: audit: type=1334 audit(1734098740.017:78): prog-id=10 op=LOAD Dec 13 14:05:41.857781 kernel: audit: type=1334 audit(1734098740.017:79): prog-id=10 op=UNLOAD Dec 13 14:05:41.857792 kernel: audit: type=1334 audit(1734098740.019:80): prog-id=11 op=LOAD Dec 13 14:05:41.857801 kernel: audit: type=1334 audit(1734098740.019:81): prog-id=11 op=UNLOAD Dec 13 14:05:41.857811 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:05:41.857822 kernel: audit: type=1400 audit(1734098740.058:82): avc: denied { associate } for pid=926 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:05:41.857833 kernel: audit: type=1300 audit(1734098740.058:82): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58ac a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=909 pid=926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:05:41.857844 kernel: audit: type=1327 audit(1734098740.058:82): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:05:41.857854 kernel: audit: type=1400 audit(1734098740.059:83): avc: denied { associate } for pid=926 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:05:41.857866 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:05:41.857878 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:05:41.857889 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:05:41.857900 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:05:41.857916 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 14:05:41.857926 systemd[1]: Stopped initrd-switch-root.service. Dec 13 14:05:41.857936 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 14:05:41.857946 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:05:41.857957 systemd[1]: Created slice system-addon\x2drun.slice. 
Dec 13 14:05:41.857967 systemd[1]: Created slice system-getty.slice. Dec 13 14:05:41.857980 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:05:41.857990 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:05:41.858000 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:05:41.858011 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:05:41.858023 systemd[1]: Created slice user.slice. Dec 13 14:05:41.858033 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:05:41.858043 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:05:41.858063 systemd[1]: Set up automount boot.automount. Dec 13 14:05:41.858077 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:05:41.858090 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 14:05:41.858100 systemd[1]: Stopped target initrd-fs.target. Dec 13 14:05:41.858110 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 14:05:41.858121 systemd[1]: Reached target integritysetup.target. Dec 13 14:05:41.858131 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:05:41.858141 systemd[1]: Reached target remote-fs.target. Dec 13 14:05:41.858151 systemd[1]: Reached target slices.target. Dec 13 14:05:41.858162 systemd[1]: Reached target swap.target. Dec 13 14:05:41.858179 systemd[1]: Reached target torcx.target. Dec 13 14:05:41.858191 systemd[1]: Reached target veritysetup.target. Dec 13 14:05:41.858202 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:05:41.858212 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:05:41.858224 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:05:41.858234 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:05:41.858245 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:05:41.858256 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:05:41.858268 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:05:41.858279 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:05:41.858289 systemd[1]: Mounting media.mount... Dec 13 14:05:41.858300 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:05:41.858310 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:05:41.858320 systemd[1]: Mounting tmp.mount... Dec 13 14:05:41.858331 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:05:41.858342 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:05:41.858352 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:05:41.858363 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:05:41.858373 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:05:41.858383 systemd[1]: Starting modprobe@drm.service... Dec 13 14:05:41.858405 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:05:41.858416 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:05:41.858427 systemd[1]: Starting modprobe@loop.service... Dec 13 14:05:41.858437 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:05:41.858448 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 14:05:41.858458 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 14:05:41.858469 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 14:05:41.858479 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 14:05:41.858489 systemd[1]: Stopped systemd-journald.service. 
Dec 13 14:05:41.858501 kernel: loop: module loaded Dec 13 14:05:41.858512 systemd[1]: Starting systemd-journald.service... Dec 13 14:05:41.858522 kernel: fuse: init (API version 7.34) Dec 13 14:05:41.858532 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:05:41.858542 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:05:41.858552 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:05:41.858563 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:05:41.858573 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 14:05:41.858583 systemd[1]: Stopped verity-setup.service. Dec 13 14:05:41.858595 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:05:41.858605 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:05:41.858616 systemd[1]: Mounted media.mount. Dec 13 14:05:41.858626 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:05:41.858636 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:05:41.858648 systemd-journald[995]: Journal started Dec 13 14:05:41.858687 systemd-journald[995]: Runtime Journal (/run/log/journal/aa15876634f447f29713f494f92485fe) is 6.0M, max 48.7M, 42.6M free. Dec 13 14:05:39.912000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:05:40.016000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:05:40.016000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:05:40.017000 audit: BPF prog-id=10 op=LOAD Dec 13 14:05:40.017000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:05:40.019000 audit: BPF prog-id=11 op=LOAD Dec 13 14:05:40.019000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:05:40.058000 audit[926]: AVC avc: denied { associate } for pid=926 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:05:40.058000 audit[926]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58ac a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=909 pid=926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:05:40.058000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:05:40.059000 audit[926]: AVC avc: denied { associate } for pid=926 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:05:40.059000 audit[926]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5985 a2=1ed a3=0 items=2 ppid=909 pid=926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:05:40.059000 audit: CWD cwd="/" Dec 13 14:05:40.059000 
audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:05:40.059000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:05:40.059000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:05:41.729000 audit: BPF prog-id=12 op=LOAD Dec 13 14:05:41.729000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:05:41.729000 audit: BPF prog-id=13 op=LOAD Dec 13 14:05:41.729000 audit: BPF prog-id=14 op=LOAD Dec 13 14:05:41.729000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:05:41.729000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:05:41.730000 audit: BPF prog-id=15 op=LOAD Dec 13 14:05:41.730000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:05:41.730000 audit: BPF prog-id=16 op=LOAD Dec 13 14:05:41.730000 audit: BPF prog-id=17 op=LOAD Dec 13 14:05:41.730000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:05:41.730000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:05:41.730000 audit: BPF prog-id=18 op=LOAD Dec 13 14:05:41.731000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:05:41.731000 audit: BPF prog-id=19 op=LOAD Dec 13 14:05:41.731000 audit: BPF prog-id=20 op=LOAD Dec 13 14:05:41.731000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:05:41.731000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:05:41.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.742000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:05:41.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:05:41.833000 audit: BPF prog-id=21 op=LOAD Dec 13 14:05:41.833000 audit: BPF prog-id=22 op=LOAD Dec 13 14:05:41.833000 audit: BPF prog-id=23 op=LOAD Dec 13 14:05:41.833000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:05:41.833000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:05:41.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.856000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:05:41.856000 audit[995]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffe887b5d0 a2=4000 a3=1 items=0 ppid=1 pid=995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:05:41.856000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:05:41.727989 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:05:40.057021 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-12-13T14:05:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:05:41.728000 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 14:05:40.057325 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-12-13T14:05:40Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:05:41.731477 systemd[1]: systemd-journald.service: Deactivated successfully. 
Dec 13 14:05:40.057345 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-12-13T14:05:40Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:05:40.057374 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-12-13T14:05:40Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 14:05:40.057384 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-12-13T14:05:40Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 14:05:40.057420 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-12-13T14:05:40Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 14:05:40.057432 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-12-13T14:05:40Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 14:05:40.057621 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-12-13T14:05:40Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 14:05:40.057654 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-12-13T14:05:40Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:05:40.057665 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-12-13T14:05:40Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:05:40.058125 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-12-13T14:05:40Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 14:05:40.058161 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-12-13T14:05:40Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 14:05:40.058178 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-12-13T14:05:40Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 14:05:40.058192 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-12-13T14:05:40Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 14:05:40.058208 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-12-13T14:05:40Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 14:05:40.058221 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-12-13T14:05:40Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 14:05:41.861125 systemd[1]: Started systemd-journald.service. 
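The torcx-generator debug lines above show the vendor profile being resolved from /usr/share/torcx/profiles and the docker archives in /usr/share/torcx/store being added to its cache before the com.coreos.cl reference is unpacked. As a sketch only — the profile-manifest field names below follow the documented torcx format and are an assumption, not read from this system — a vendor-style profile selecting that docker reference could be generated like this:

```python
import json

# Hypothetical torcx profile manifest: selects the "docker" image at the
# "com.coreos.cl" reference, i.e. the archive the generator reports finding
# in /usr/share/torcx/store. Field names are an assumption.
profile = {
    "kind": "profile-manifest-v0",
    "value": {
        "images": [
            {"name": "docker", "reference": "com.coreos.cl"},
        ],
    },
}

# e.g. written to /usr/share/torcx/profiles/vendor.json on the image
with open("vendor.json", "w") as f:
    json.dump(profile, f, indent=2)
```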
Dec 13 14:05:41.484297 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-12-13T14:05:41Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:05:41.484571 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-12-13T14:05:41Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:05:41.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.484678 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-12-13T14:05:41Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:05:41.484840 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-12-13T14:05:41Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:05:41.484888 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-12-13T14:05:41Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 14:05:41.484952 /usr/lib/systemd/system-generators/torcx-generator[926]: time="2024-12-13T14:05:41Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 14:05:41.861689 systemd[1]: Mounted tmp.mount. Dec 13 14:05:41.862655 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:05:41.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.863732 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:05:41.863877 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:05:41.864944 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:05:41.865106 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:05:41.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:05:41.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.866194 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:05:41.866348 systemd[1]: Finished modprobe@drm.service. Dec 13 14:05:41.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.867355 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:05:41.867519 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:05:41.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.868690 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:05:41.868854 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:05:41.869988 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:05:41.870256 systemd[1]: Finished modprobe@loop.service. Dec 13 14:05:41.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.871371 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:05:41.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.872562 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:05:41.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.873729 systemd[1]: Finished systemd-network-generator.service. 
Dec 13 14:05:41.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.874913 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:05:41.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.876340 systemd[1]: Reached target network-pre.target. Dec 13 14:05:41.878223 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:05:41.880003 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:05:41.880891 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:05:41.882276 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:05:41.884356 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:05:41.885375 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:05:41.886480 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:05:41.887335 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:05:41.888373 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:05:41.891483 systemd-journald[995]: Time spent on flushing to /var/log/journal/aa15876634f447f29713f494f92485fe is 13.427ms for 1003 entries. Dec 13 14:05:41.891483 systemd-journald[995]: System Journal (/var/log/journal/aa15876634f447f29713f494f92485fe) is 8.0M, max 195.6M, 187.6M free. Dec 13 14:05:41.923986 systemd-journald[995]: Received client request to flush runtime journal. Dec 13 14:05:41.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:41.892479 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:05:41.896231 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:05:41.925311 udevadm[1027]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 14:05:41.897231 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:05:41.898363 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:05:41.899368 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:05:41.900522 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:05:41.902548 systemd[1]: Starting systemd-udev-settle.service... 
Dec 13 14:05:41.903627 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:05:41.919940 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:05:41.924961 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:05:41.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:42.239635 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:05:42.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:42.241000 audit: BPF prog-id=24 op=LOAD Dec 13 14:05:42.241000 audit: BPF prog-id=25 op=LOAD Dec 13 14:05:42.241000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:05:42.241000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:05:42.241933 systemd[1]: Starting systemd-udevd.service... Dec 13 14:05:42.259034 systemd-udevd[1029]: Using default interface naming scheme 'v252'. Dec 13 14:05:42.275740 systemd[1]: Started systemd-udevd.service. Dec 13 14:05:42.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:42.279000 audit: BPF prog-id=26 op=LOAD Dec 13 14:05:42.280025 systemd[1]: Starting systemd-networkd.service... Dec 13 14:05:42.294000 audit: BPF prog-id=27 op=LOAD Dec 13 14:05:42.295000 audit: BPF prog-id=28 op=LOAD Dec 13 14:05:42.295000 audit: BPF prog-id=29 op=LOAD Dec 13 14:05:42.297841 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:05:42.306615 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Dec 13 14:05:42.333467 systemd[1]: Started systemd-userdbd.service. Dec 13 14:05:42.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:42.346641 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:05:42.384235 systemd-networkd[1032]: lo: Link UP Dec 13 14:05:42.384248 systemd-networkd[1032]: lo: Gained carrier Dec 13 14:05:42.384607 systemd-networkd[1032]: Enumeration completed Dec 13 14:05:42.384692 systemd[1]: Started systemd-networkd.service. Dec 13 14:05:42.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:42.385765 systemd-networkd[1032]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:05:42.386861 systemd-networkd[1032]: eth0: Link UP Dec 13 14:05:42.386872 systemd-networkd[1032]: eth0: Gained carrier Dec 13 14:05:42.390431 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:05:42.392462 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:05:42.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:42.407260 lvm[1062]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
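systemd-networkd enumerates the links above and matches eth0 against the shipped /usr/lib/systemd/network/zz-default.network, which is what requests the DHCPv4 lease logged next. The exact contents of Flatcar's file are not visible in this log; the sketch below writes out an assumed minimal catch-all .network file of that kind, with just the two settings needed to reproduce the observed behaviour:

```python
from pathlib import Path

# Assumed contents of a catch-all systemd-networkd .network file like the
# zz-default.network referenced above: match any interface not configured by
# an earlier file and run DHCP on it. The real Flatcar file may set more.
network_unit = """\
[Match]
Name=*

[Network]
DHCP=yes
"""

Path("zz-default.network").write_text(network_unit)  # illustrative path only
print(network_unit)
```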
Dec 13 14:05:42.413312 systemd-networkd[1032]: eth0: DHCPv4 address 10.0.0.68/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 14:05:42.431807 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:05:42.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:42.432840 systemd[1]: Reached target cryptsetup.target. Dec 13 14:05:42.434780 systemd[1]: Starting lvm2-activation.service... Dec 13 14:05:42.437913 lvm[1063]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:05:42.471781 systemd[1]: Finished lvm2-activation.service. Dec 13 14:05:42.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:42.472754 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:05:42.473645 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:05:42.473674 systemd[1]: Reached target local-fs.target. Dec 13 14:05:42.474497 systemd[1]: Reached target machines.target. Dec 13 14:05:42.476559 systemd[1]: Starting ldconfig.service... Dec 13 14:05:42.477582 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:05:42.477646 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:05:42.479372 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:05:42.481210 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:05:42.483959 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:05:42.486086 systemd[1]: Starting systemd-sysext.service... Dec 13 14:05:42.487149 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1065 (bootctl) Dec 13 14:05:42.488201 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:05:42.497132 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:05:42.498651 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:05:42.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:42.501047 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:05:42.501242 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:05:42.515098 kernel: loop0: detected capacity change from 0 to 194512 Dec 13 14:05:42.554738 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:05:42.555403 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:05:42.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:05:42.560479 systemd-fsck[1075]: fsck.fat 4.2 (2021-01-31) Dec 13 14:05:42.560479 systemd-fsck[1075]: /dev/vda1: 236 files, 117175/258078 clusters Dec 13 14:05:42.561295 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:05:42.562304 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:05:42.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:42.565970 systemd[1]: Mounting boot.mount... Dec 13 14:05:42.574019 systemd[1]: Mounted boot.mount. Dec 13 14:05:42.576164 kernel: loop1: detected capacity change from 0 to 194512 Dec 13 14:05:42.581731 (sd-sysext)[1080]: Using extensions 'kubernetes'. Dec 13 14:05:42.582085 (sd-sysext)[1080]: Merged extensions into '/usr'. Dec 13 14:05:42.585115 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:05:42.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:42.603706 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:05:42.605526 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:05:42.607664 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:05:42.610225 systemd[1]: Starting modprobe@loop.service... Dec 13 14:05:42.611032 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:05:42.611199 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:05:42.612137 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:05:42.612267 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:05:42.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:42.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:42.613595 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:05:42.613714 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:05:42.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:42.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:42.615486 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:05:42.615693 systemd[1]: Finished modprobe@loop.service. 
Dec 13 14:05:42.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:42.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:42.617326 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:05:42.617475 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:05:42.652981 ldconfig[1064]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:05:42.658123 systemd[1]: Finished ldconfig.service. Dec 13 14:05:42.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:42.852981 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:05:42.857964 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:05:42.859795 systemd[1]: Finished systemd-sysext.service. Dec 13 14:05:42.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:42.861724 systemd[1]: Starting ensure-sysext.service... Dec 13 14:05:42.863403 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:05:42.867321 systemd[1]: Reloading. Dec 13 14:05:42.874952 systemd-tmpfiles[1087]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:05:42.876988 systemd-tmpfiles[1087]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:05:42.879773 systemd-tmpfiles[1087]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:05:42.905599 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2024-12-13T14:05:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:05:42.905625 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2024-12-13T14:05:42Z" level=info msg="torcx already run" Dec 13 14:05:42.960260 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:05:42.960278 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:05:42.975617 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 14:05:43.016000 audit: BPF prog-id=30 op=LOAD Dec 13 14:05:43.016000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:05:43.016000 audit: BPF prog-id=31 op=LOAD Dec 13 14:05:43.016000 audit: BPF prog-id=32 op=LOAD Dec 13 14:05:43.016000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:05:43.016000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:05:43.017000 audit: BPF prog-id=33 op=LOAD Dec 13 14:05:43.017000 audit: BPF prog-id=27 op=UNLOAD Dec 13 14:05:43.017000 audit: BPF prog-id=34 op=LOAD Dec 13 14:05:43.017000 audit: BPF prog-id=35 op=LOAD Dec 13 14:05:43.017000 audit: BPF prog-id=28 op=UNLOAD Dec 13 14:05:43.017000 audit: BPF prog-id=29 op=UNLOAD Dec 13 14:05:43.018000 audit: BPF prog-id=36 op=LOAD Dec 13 14:05:43.018000 audit: BPF prog-id=37 op=LOAD Dec 13 14:05:43.018000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:05:43.018000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:05:43.019000 audit: BPF prog-id=38 op=LOAD Dec 13 14:05:43.019000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:05:43.022033 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:05:43.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:43.026247 systemd[1]: Starting audit-rules.service... Dec 13 14:05:43.028031 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:05:43.034000 audit: BPF prog-id=39 op=LOAD Dec 13 14:05:43.030297 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:05:43.035275 systemd[1]: Starting systemd-resolved.service... Dec 13 14:05:43.036000 audit: BPF prog-id=40 op=LOAD Dec 13 14:05:43.037899 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:05:43.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:43.039998 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:05:43.041499 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:05:43.044281 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:05:43.045731 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:05:43.045000 audit[1153]: SYSTEM_BOOT pid=1153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:05:43.047271 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:05:43.049190 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:05:43.051222 systemd[1]: Starting modprobe@loop.service... Dec 13 14:05:43.051983 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:05:43.052113 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:05:43.052208 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:05:43.053033 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Dec 13 14:05:43.053161 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:05:43.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:43.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:43.054428 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:05:43.054540 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:05:43.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:43.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:43.055797 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:05:43.055907 systemd[1]: Finished modprobe@loop.service. Dec 13 14:05:43.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:43.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:43.059755 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:05:43.060934 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:05:43.062960 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:05:43.064885 systemd[1]: Starting modprobe@loop.service... Dec 13 14:05:43.065672 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:05:43.065829 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:05:43.065971 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:05:43.067302 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:05:43.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:43.068849 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:05:43.068973 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:05:43.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:05:43.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:43.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:43.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:43.070279 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:05:43.070386 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:05:43.071677 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:05:43.071777 systemd[1]: Finished modprobe@loop.service. Dec 13 14:05:43.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:43.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:43.073130 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:05:43.073256 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:05:43.074448 systemd[1]: Starting systemd-update-done.service... Dec 13 14:05:43.076029 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:05:43.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:43.079981 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:05:43.081213 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:05:43.082925 systemd[1]: Starting modprobe@drm.service... Dec 13 14:05:43.084809 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:05:43.087120 systemd[1]: Starting modprobe@loop.service... Dec 13 14:05:43.088094 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:05:43.088225 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:05:43.089545 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:05:43.090503 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:05:43.091968 systemd[1]: Finished systemd-update-done.service. Dec 13 14:05:43.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:05:43.093000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:05:43.093000 audit[1177]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffec42a0d0 a2=420 a3=0 items=0 ppid=1146 pid=1177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:05:43.093000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:05:43.093419 augenrules[1177]: No rules Dec 13 14:05:43.093428 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:05:43.093537 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:05:43.094791 systemd[1]: Finished audit-rules.service. Dec 13 14:05:43.095847 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:05:43.095950 systemd[1]: Finished modprobe@drm.service. Dec 13 14:05:43.097170 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:05:43.097270 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:05:43.098554 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:05:43.098736 systemd[1]: Finished modprobe@loop.service. Dec 13 14:05:43.100182 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:05:43.100252 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:05:43.101173 systemd[1]: Finished ensure-sysext.service. Dec 13 14:05:43.103908 systemd-resolved[1150]: Positive Trust Anchors: Dec 13 14:05:43.103918 systemd-resolved[1150]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:05:43.103945 systemd-resolved[1150]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:05:43.108394 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:05:43.535254 systemd-timesyncd[1151]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 14:05:43.535529 systemd-timesyncd[1151]: Initial clock synchronization to Fri 2024-12-13 14:05:43.535182 UTC. Dec 13 14:05:43.535872 systemd[1]: Reached target time-set.target. Dec 13 14:05:43.538557 systemd-resolved[1150]: Defaulting to hostname 'linux'. Dec 13 14:05:43.539894 systemd[1]: Started systemd-resolved.service. Dec 13 14:05:43.540716 systemd[1]: Reached target network.target. Dec 13 14:05:43.541490 systemd[1]: Reached target nss-lookup.target. Dec 13 14:05:43.542256 systemd[1]: Reached target sysinit.target. Dec 13 14:05:43.543046 systemd[1]: Started motdgen.path. Dec 13 14:05:43.543760 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:05:43.545006 systemd[1]: Started logrotate.timer. Dec 13 14:05:43.545852 systemd[1]: Started mdadm.timer. Dec 13 14:05:43.546520 systemd[1]: Started systemd-tmpfiles-clean.timer. 
Dec 13 14:05:43.547310 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:05:43.547342 systemd[1]: Reached target paths.target. Dec 13 14:05:43.548041 systemd[1]: Reached target timers.target. Dec 13 14:05:43.549062 systemd[1]: Listening on dbus.socket. Dec 13 14:05:43.550754 systemd[1]: Starting docker.socket... Dec 13 14:05:43.553669 systemd[1]: Listening on sshd.socket. Dec 13 14:05:43.554474 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:05:43.554875 systemd[1]: Listening on docker.socket. Dec 13 14:05:43.555778 systemd[1]: Reached target sockets.target. Dec 13 14:05:43.556580 systemd[1]: Reached target basic.target. Dec 13 14:05:43.557360 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:05:43.557391 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:05:43.558333 systemd[1]: Starting containerd.service... Dec 13 14:05:43.560067 systemd[1]: Starting dbus.service... Dec 13 14:05:43.561861 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:05:43.564064 systemd[1]: Starting extend-filesystems.service... Dec 13 14:05:43.564999 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:05:43.567251 systemd[1]: Starting motdgen.service... Dec 13 14:05:43.569036 systemd[1]: Starting prepare-helm.service... Dec 13 14:05:43.572436 jq[1189]: false Dec 13 14:05:43.570932 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:05:43.572953 systemd[1]: Starting sshd-keygen.service... Dec 13 14:05:43.576168 systemd[1]: Starting systemd-logind.service... Dec 13 14:05:43.576953 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:05:43.577028 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:05:43.577423 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 14:05:43.578327 systemd[1]: Starting update-engine.service... Dec 13 14:05:43.580988 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:05:43.583424 extend-filesystems[1190]: Found loop1 Dec 13 14:05:43.584337 jq[1205]: true Dec 13 14:05:43.584601 extend-filesystems[1190]: Found vda Dec 13 14:05:43.584919 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:05:43.585082 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:05:43.585607 extend-filesystems[1190]: Found vda1 Dec 13 14:05:43.586078 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:05:43.586273 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Dec 13 14:05:43.586685 extend-filesystems[1190]: Found vda2 Dec 13 14:05:43.589498 extend-filesystems[1190]: Found vda3 Dec 13 14:05:43.590319 extend-filesystems[1190]: Found usr Dec 13 14:05:43.591131 extend-filesystems[1190]: Found vda4 Dec 13 14:05:43.591956 extend-filesystems[1190]: Found vda6 Dec 13 14:05:43.593395 extend-filesystems[1190]: Found vda7 Dec 13 14:05:43.593395 extend-filesystems[1190]: Found vda9 Dec 13 14:05:43.593395 extend-filesystems[1190]: Checking size of /dev/vda9 Dec 13 14:05:43.599861 jq[1211]: true Dec 13 14:05:43.599044 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:05:43.599221 systemd[1]: Finished motdgen.service. Dec 13 14:05:43.603337 tar[1210]: linux-arm64/helm Dec 13 14:05:43.609653 dbus-daemon[1188]: [system] SELinux support is enabled Dec 13 14:05:43.609870 systemd[1]: Started dbus.service. Dec 13 14:05:43.612363 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:05:43.612389 systemd[1]: Reached target system-config.target. Dec 13 14:05:43.613470 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:05:43.613490 systemd[1]: Reached target user-config.target. Dec 13 14:05:43.631966 extend-filesystems[1190]: Resized partition /dev/vda9 Dec 13 14:05:43.637214 extend-filesystems[1240]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:05:43.645980 systemd-logind[1199]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 14:05:43.647224 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 14:05:43.660532 systemd-logind[1199]: New seat seat0. Dec 13 14:05:43.665659 systemd[1]: Started systemd-logind.service. Dec 13 14:05:43.678503 update_engine[1202]: I1213 14:05:43.676497 1202 main.cc:92] Flatcar Update Engine starting Dec 13 14:05:43.681595 systemd[1]: Started update-engine.service. Dec 13 14:05:43.685787 systemd[1]: Started locksmithd.service. Dec 13 14:05:43.687169 update_engine[1202]: I1213 14:05:43.687099 1202 update_check_scheduler.cc:74] Next update check in 4m27s Dec 13 14:05:43.688975 bash[1229]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:05:43.689148 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:05:43.691201 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 14:05:43.707081 extend-filesystems[1240]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 14:05:43.707081 extend-filesystems[1240]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 14:05:43.707081 extend-filesystems[1240]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 14:05:43.711258 extend-filesystems[1190]: Resized filesystem in /dev/vda9 Dec 13 14:05:43.712860 env[1212]: time="2024-12-13T14:05:43.712803618Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:05:43.713580 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:05:43.713732 systemd[1]: Finished extend-filesystems.service. Dec 13 14:05:43.736056 env[1212]: time="2024-12-13T14:05:43.736011538Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:05:43.736245 env[1212]: time="2024-12-13T14:05:43.736181538Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 14:05:43.737325 env[1212]: time="2024-12-13T14:05:43.737291978Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:05:43.737325 env[1212]: time="2024-12-13T14:05:43.737322258Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:05:43.737563 env[1212]: time="2024-12-13T14:05:43.737526058Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:05:43.737563 env[1212]: time="2024-12-13T14:05:43.737548538Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:05:43.737563 env[1212]: time="2024-12-13T14:05:43.737561858Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:05:43.737641 env[1212]: time="2024-12-13T14:05:43.737571898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:05:43.737664 env[1212]: time="2024-12-13T14:05:43.737642218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:05:43.737869 env[1212]: time="2024-12-13T14:05:43.737840258Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:05:43.737978 env[1212]: time="2024-12-13T14:05:43.737959978Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:05:43.738007 env[1212]: time="2024-12-13T14:05:43.737978298Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:05:43.738043 env[1212]: time="2024-12-13T14:05:43.738028378Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:05:43.738080 env[1212]: time="2024-12-13T14:05:43.738043698Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:05:43.742328 env[1212]: time="2024-12-13T14:05:43.741570498Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:05:43.742328 env[1212]: time="2024-12-13T14:05:43.741598818Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:05:43.742328 env[1212]: time="2024-12-13T14:05:43.741611698Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:05:43.742328 env[1212]: time="2024-12-13T14:05:43.741639738Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:05:43.742328 env[1212]: time="2024-12-13T14:05:43.741653098Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Dec 13 14:05:43.742328 env[1212]: time="2024-12-13T14:05:43.741666258Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:05:43.742328 env[1212]: time="2024-12-13T14:05:43.741677978Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:05:43.742328 env[1212]: time="2024-12-13T14:05:43.741986058Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:05:43.742328 env[1212]: time="2024-12-13T14:05:43.742001698Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:05:43.742328 env[1212]: time="2024-12-13T14:05:43.742014258Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:05:43.742328 env[1212]: time="2024-12-13T14:05:43.742025698Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:05:43.742328 env[1212]: time="2024-12-13T14:05:43.742037218Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:05:43.742328 env[1212]: time="2024-12-13T14:05:43.742142618Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:05:43.742328 env[1212]: time="2024-12-13T14:05:43.742229898Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:05:43.742691 env[1212]: time="2024-12-13T14:05:43.742419818Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:05:43.742691 env[1212]: time="2024-12-13T14:05:43.742447458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:05:43.742691 env[1212]: time="2024-12-13T14:05:43.742460058Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:05:43.742691 env[1212]: time="2024-12-13T14:05:43.742564938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:05:43.742691 env[1212]: time="2024-12-13T14:05:43.742577938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:05:43.742691 env[1212]: time="2024-12-13T14:05:43.742589658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:05:43.742691 env[1212]: time="2024-12-13T14:05:43.742600858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:05:43.742691 env[1212]: time="2024-12-13T14:05:43.742613618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:05:43.742691 env[1212]: time="2024-12-13T14:05:43.742625258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:05:43.742691 env[1212]: time="2024-12-13T14:05:43.742636578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:05:43.742691 env[1212]: time="2024-12-13T14:05:43.742647178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Dec 13 14:05:43.742691 env[1212]: time="2024-12-13T14:05:43.742659018Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:05:43.742908 env[1212]: time="2024-12-13T14:05:43.742765298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:05:43.742908 env[1212]: time="2024-12-13T14:05:43.742780498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:05:43.742908 env[1212]: time="2024-12-13T14:05:43.742791578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:05:43.742908 env[1212]: time="2024-12-13T14:05:43.742802218Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:05:43.742908 env[1212]: time="2024-12-13T14:05:43.742815258Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:05:43.742908 env[1212]: time="2024-12-13T14:05:43.742825458Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:05:43.742908 env[1212]: time="2024-12-13T14:05:43.742840338Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:05:43.742908 env[1212]: time="2024-12-13T14:05:43.742870418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 14:05:43.743134 env[1212]: time="2024-12-13T14:05:43.743052298Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false 
EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:05:43.743134 env[1212]: time="2024-12-13T14:05:43.743120378Z" level=info msg="Connect containerd service" Dec 13 14:05:43.743793 env[1212]: time="2024-12-13T14:05:43.743151378Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:05:43.744003 env[1212]: time="2024-12-13T14:05:43.743977858Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:05:43.747196 env[1212]: time="2024-12-13T14:05:43.744297898Z" level=info msg="Start subscribing containerd event" Dec 13 14:05:43.747196 env[1212]: time="2024-12-13T14:05:43.744331098Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:05:43.747196 env[1212]: time="2024-12-13T14:05:43.744354818Z" level=info msg="Start recovering state" Dec 13 14:05:43.747196 env[1212]: time="2024-12-13T14:05:43.744380538Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:05:43.747196 env[1212]: time="2024-12-13T14:05:43.744413618Z" level=info msg="Start event monitor" Dec 13 14:05:43.747196 env[1212]: time="2024-12-13T14:05:43.744430138Z" level=info msg="Start snapshots syncer" Dec 13 14:05:43.747196 env[1212]: time="2024-12-13T14:05:43.744439938Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:05:43.747196 env[1212]: time="2024-12-13T14:05:43.744447658Z" level=info msg="Start streaming server" Dec 13 14:05:43.747196 env[1212]: time="2024-12-13T14:05:43.745552698Z" level=info msg="containerd successfully booted in 0.052631s" Dec 13 14:05:43.744601 systemd[1]: Started containerd.service. Dec 13 14:05:43.794103 locksmithd[1242]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:05:44.038589 tar[1210]: linux-arm64/LICENSE Dec 13 14:05:44.038706 tar[1210]: linux-arm64/README.md Dec 13 14:05:44.043063 systemd[1]: Finished prepare-helm.service. Dec 13 14:05:44.195303 systemd-networkd[1032]: eth0: Gained IPv6LL Dec 13 14:05:44.197626 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:05:44.198917 systemd[1]: Reached target network-online.target. Dec 13 14:05:44.201229 systemd[1]: Starting kubelet.service... Dec 13 14:05:44.706321 systemd[1]: Started kubelet.service. Dec 13 14:05:45.194680 kubelet[1256]: E1213 14:05:45.194613 1256 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:05:45.196747 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:05:45.196876 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:05:45.596639 sshd_keygen[1208]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:05:45.614362 systemd[1]: Finished sshd-keygen.service. Dec 13 14:05:45.616587 systemd[1]: Starting issuegen.service... Dec 13 14:05:45.621068 systemd[1]: issuegen.service: Deactivated successfully. 
Dec 13 14:05:45.621222 systemd[1]: Finished issuegen.service. Dec 13 14:05:45.623270 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:05:45.631398 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:05:45.633404 systemd[1]: Started getty@tty1.service. Dec 13 14:05:45.635242 systemd[1]: Started serial-getty@ttyAMA0.service. Dec 13 14:05:45.636248 systemd[1]: Reached target getty.target. Dec 13 14:05:45.637070 systemd[1]: Reached target multi-user.target. Dec 13 14:05:45.638990 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:05:45.645044 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:05:45.645198 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:05:45.646292 systemd[1]: Startup finished in 601ms (kernel) + 4.299s (initrd) + 5.349s (userspace) = 10.250s. Dec 13 14:05:49.013154 systemd[1]: Created slice system-sshd.slice. Dec 13 14:05:49.014244 systemd[1]: Started sshd@0-10.0.0.68:22-10.0.0.1:47982.service. Dec 13 14:05:49.098054 sshd[1279]: Accepted publickey for core from 10.0.0.1 port 47982 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:05:49.100010 sshd[1279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:05:49.107614 systemd[1]: Created slice user-500.slice. Dec 13 14:05:49.108660 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:05:49.110392 systemd-logind[1199]: New session 1 of user core. Dec 13 14:05:49.116451 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:05:49.117640 systemd[1]: Starting user@500.service... Dec 13 14:05:49.120451 (systemd)[1282]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:05:49.177272 systemd[1282]: Queued start job for default target default.target. Dec 13 14:05:49.177720 systemd[1282]: Reached target paths.target. Dec 13 14:05:49.177740 systemd[1282]: Reached target sockets.target. Dec 13 14:05:49.177751 systemd[1282]: Reached target timers.target. Dec 13 14:05:49.177761 systemd[1282]: Reached target basic.target. Dec 13 14:05:49.177812 systemd[1282]: Reached target default.target. Dec 13 14:05:49.177838 systemd[1282]: Startup finished in 52ms. Dec 13 14:05:49.177899 systemd[1]: Started user@500.service. Dec 13 14:05:49.178783 systemd[1]: Started session-1.scope. Dec 13 14:05:49.230047 systemd[1]: Started sshd@1-10.0.0.68:22-10.0.0.1:47988.service. Dec 13 14:05:49.267197 sshd[1291]: Accepted publickey for core from 10.0.0.1 port 47988 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:05:49.268690 sshd[1291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:05:49.275049 systemd[1]: Started session-2.scope. Dec 13 14:05:49.275328 systemd-logind[1199]: New session 2 of user core. Dec 13 14:05:49.336856 sshd[1291]: pam_unix(sshd:session): session closed for user core Dec 13 14:05:49.339713 systemd[1]: Started sshd@2-10.0.0.68:22-10.0.0.1:47998.service. Dec 13 14:05:49.340118 systemd[1]: sshd@1-10.0.0.68:22-10.0.0.1:47988.service: Deactivated successfully. Dec 13 14:05:49.341110 systemd-logind[1199]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:05:49.341161 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:05:49.341828 systemd-logind[1199]: Removed session 2. 
Dec 13 14:05:49.376358 sshd[1296]: Accepted publickey for core from 10.0.0.1 port 47998 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:05:49.377428 sshd[1296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:05:49.381196 systemd[1]: Started session-3.scope. Dec 13 14:05:49.381734 systemd-logind[1199]: New session 3 of user core. Dec 13 14:05:49.431140 sshd[1296]: pam_unix(sshd:session): session closed for user core Dec 13 14:05:49.434940 systemd[1]: sshd@2-10.0.0.68:22-10.0.0.1:47998.service: Deactivated successfully. Dec 13 14:05:49.435531 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:05:49.436108 systemd-logind[1199]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:05:49.437145 systemd[1]: Started sshd@3-10.0.0.68:22-10.0.0.1:48004.service. Dec 13 14:05:49.437813 systemd-logind[1199]: Removed session 3. Dec 13 14:05:49.473641 sshd[1303]: Accepted publickey for core from 10.0.0.1 port 48004 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:05:49.475026 sshd[1303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:05:49.478062 systemd-logind[1199]: New session 4 of user core. Dec 13 14:05:49.478864 systemd[1]: Started session-4.scope. Dec 13 14:05:49.531320 sshd[1303]: pam_unix(sshd:session): session closed for user core Dec 13 14:05:49.534986 systemd[1]: Started sshd@4-10.0.0.68:22-10.0.0.1:48016.service. Dec 13 14:05:49.535514 systemd[1]: sshd@3-10.0.0.68:22-10.0.0.1:48004.service: Deactivated successfully. Dec 13 14:05:49.536108 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:05:49.536668 systemd-logind[1199]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:05:49.537723 systemd-logind[1199]: Removed session 4. Dec 13 14:05:49.570656 sshd[1309]: Accepted publickey for core from 10.0.0.1 port 48016 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:05:49.572059 sshd[1309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:05:49.575031 systemd-logind[1199]: New session 5 of user core. Dec 13 14:05:49.575805 systemd[1]: Started session-5.scope. Dec 13 14:05:49.635657 sudo[1313]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:05:49.637700 sudo[1313]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:05:49.699637 systemd[1]: Starting docker.service... 
Dec 13 14:05:49.786349 env[1325]: time="2024-12-13T14:05:49.786232098Z" level=info msg="Starting up" Dec 13 14:05:49.792604 env[1325]: time="2024-12-13T14:05:49.792564258Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:05:49.792604 env[1325]: time="2024-12-13T14:05:49.792588618Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:05:49.792698 env[1325]: time="2024-12-13T14:05:49.792608698Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:05:49.792698 env[1325]: time="2024-12-13T14:05:49.792619178Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:05:49.794844 env[1325]: time="2024-12-13T14:05:49.794812698Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:05:49.794844 env[1325]: time="2024-12-13T14:05:49.794841418Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:05:49.794948 env[1325]: time="2024-12-13T14:05:49.794864378Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:05:49.794948 env[1325]: time="2024-12-13T14:05:49.794876018Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:05:49.799671 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3322018286-merged.mount: Deactivated successfully. Dec 13 14:05:49.941066 env[1325]: time="2024-12-13T14:05:49.941012658Z" level=info msg="Loading containers: start." Dec 13 14:05:50.048207 kernel: Initializing XFRM netlink socket Dec 13 14:05:50.070521 env[1325]: time="2024-12-13T14:05:50.070473218Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 14:05:50.123756 systemd-networkd[1032]: docker0: Link UP Dec 13 14:05:50.139342 env[1325]: time="2024-12-13T14:05:50.139306338Z" level=info msg="Loading containers: done." Dec 13 14:05:50.156646 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1150057320-merged.mount: Deactivated successfully. Dec 13 14:05:50.160777 env[1325]: time="2024-12-13T14:05:50.160735058Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:05:50.160946 env[1325]: time="2024-12-13T14:05:50.160917778Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:05:50.161036 env[1325]: time="2024-12-13T14:05:50.161012338Z" level=info msg="Daemon has completed initialization" Dec 13 14:05:50.174059 systemd[1]: Started docker.service. Dec 13 14:05:50.181734 env[1325]: time="2024-12-13T14:05:50.181627778Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:05:50.832089 env[1212]: time="2024-12-13T14:05:50.832035338Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 14:05:51.539961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1890278111.mount: Deactivated successfully. 
Dec 13 14:05:53.055065 env[1212]: time="2024-12-13T14:05:53.055005658Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:53.056542 env[1212]: time="2024-12-13T14:05:53.056502738Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:53.058094 env[1212]: time="2024-12-13T14:05:53.058056658Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:53.060422 env[1212]: time="2024-12-13T14:05:53.060389458Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:53.061157 env[1212]: time="2024-12-13T14:05:53.061120778Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 14:05:53.070785 env[1212]: time="2024-12-13T14:05:53.070748138Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 14:05:54.962314 env[1212]: time="2024-12-13T14:05:54.962253778Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:54.964256 env[1212]: time="2024-12-13T14:05:54.964219338Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:54.966473 env[1212]: time="2024-12-13T14:05:54.966439538Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:54.968574 env[1212]: time="2024-12-13T14:05:54.968543818Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:54.969282 env[1212]: time="2024-12-13T14:05:54.969243498Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 14:05:54.979990 env[1212]: time="2024-12-13T14:05:54.979951498Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 14:05:55.305826 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:05:55.305994 systemd[1]: Stopped kubelet.service. Dec 13 14:05:55.307466 systemd[1]: Starting kubelet.service... Dec 13 14:05:55.390363 systemd[1]: Started kubelet.service. 
Dec 13 14:05:55.430691 kubelet[1481]: E1213 14:05:55.430635 1481 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:05:55.433876 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:05:55.433997 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:05:56.204707 env[1212]: time="2024-12-13T14:05:56.204653338Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:56.206430 env[1212]: time="2024-12-13T14:05:56.206389418Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:56.208370 env[1212]: time="2024-12-13T14:05:56.208338938Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:56.210357 env[1212]: time="2024-12-13T14:05:56.210326178Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:56.211197 env[1212]: time="2024-12-13T14:05:56.211157698Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 14:05:56.220717 env[1212]: time="2024-12-13T14:05:56.220687058Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:05:57.350312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3656239173.mount: Deactivated successfully. 
Dec 13 14:05:57.772372 env[1212]: time="2024-12-13T14:05:57.772260738Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:57.773807 env[1212]: time="2024-12-13T14:05:57.773772218Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:57.774938 env[1212]: time="2024-12-13T14:05:57.774914498Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:57.776016 env[1212]: time="2024-12-13T14:05:57.775989458Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:57.776388 env[1212]: time="2024-12-13T14:05:57.776360898Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 14:05:57.785874 env[1212]: time="2024-12-13T14:05:57.785832538Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:05:58.366504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3864825090.mount: Deactivated successfully. Dec 13 14:05:59.124963 env[1212]: time="2024-12-13T14:05:59.124902298Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:59.126531 env[1212]: time="2024-12-13T14:05:59.126504738Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:59.128430 env[1212]: time="2024-12-13T14:05:59.128399098Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:59.130242 env[1212]: time="2024-12-13T14:05:59.130204618Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:59.131100 env[1212]: time="2024-12-13T14:05:59.131067978Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 14:05:59.139286 env[1212]: time="2024-12-13T14:05:59.139262178Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 14:05:59.550526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4046817052.mount: Deactivated successfully. 
Dec 13 14:05:59.553803 env[1212]: time="2024-12-13T14:05:59.553761818Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:59.555935 env[1212]: time="2024-12-13T14:05:59.555906378Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:59.557334 env[1212]: time="2024-12-13T14:05:59.557299098Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:59.558818 env[1212]: time="2024-12-13T14:05:59.558788578Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:05:59.559560 env[1212]: time="2024-12-13T14:05:59.559530698Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 14:05:59.569031 env[1212]: time="2024-12-13T14:05:59.568978138Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 14:06:00.132100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3670636916.mount: Deactivated successfully. Dec 13 14:06:02.135020 env[1212]: time="2024-12-13T14:06:02.134969538Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:06:02.136779 env[1212]: time="2024-12-13T14:06:02.136740738Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:06:02.138622 env[1212]: time="2024-12-13T14:06:02.138593938Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:06:02.140321 env[1212]: time="2024-12-13T14:06:02.140295218Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:06:02.142035 env[1212]: time="2024-12-13T14:06:02.142002978Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 14:06:05.555838 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:06:05.556012 systemd[1]: Stopped kubelet.service. Dec 13 14:06:05.557466 systemd[1]: Starting kubelet.service... Dec 13 14:06:05.641197 systemd[1]: Started kubelet.service. 
Dec 13 14:06:05.693754 kubelet[1592]: E1213 14:06:05.693671 1592 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:06:05.697387 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:06:05.697522 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:06:07.466624 systemd[1]: Stopped kubelet.service. Dec 13 14:06:07.468699 systemd[1]: Starting kubelet.service... Dec 13 14:06:07.488624 systemd[1]: Reloading. Dec 13 14:06:07.553611 /usr/lib/systemd/system-generators/torcx-generator[1625]: time="2024-12-13T14:06:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:06:07.553644 /usr/lib/systemd/system-generators/torcx-generator[1625]: time="2024-12-13T14:06:07Z" level=info msg="torcx already run" Dec 13 14:06:07.802481 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:06:07.802697 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:06:07.822758 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:06:07.891834 systemd[1]: Started kubelet.service. Dec 13 14:06:07.895317 systemd[1]: Stopping kubelet.service... Dec 13 14:06:07.895767 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:06:07.896040 systemd[1]: Stopped kubelet.service. Dec 13 14:06:07.897790 systemd[1]: Starting kubelet.service... Dec 13 14:06:07.973563 systemd[1]: Started kubelet.service. Dec 13 14:06:08.016702 kubelet[1673]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:06:08.017079 kubelet[1673]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:06:08.017134 kubelet[1673]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 14:06:08.017707 kubelet[1673]: I1213 14:06:08.017633 1673 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:06:08.651708 kubelet[1673]: I1213 14:06:08.651666 1673 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:06:08.651708 kubelet[1673]: I1213 14:06:08.651700 1673 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:06:08.652033 kubelet[1673]: I1213 14:06:08.652009 1673 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:06:08.674821 kubelet[1673]: I1213 14:06:08.674775 1673 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:06:08.675052 kubelet[1673]: E1213 14:06:08.675028 1673 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.68:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.68:6443: connect: connection refused Dec 13 14:06:08.682859 kubelet[1673]: I1213 14:06:08.682835 1673 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:06:08.683050 kubelet[1673]: I1213 14:06:08.683039 1673 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:06:08.683246 kubelet[1673]: I1213 14:06:08.683225 1673 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:06:08.683320 kubelet[1673]: I1213 14:06:08.683257 1673 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:06:08.683320 kubelet[1673]: I1213 14:06:08.683267 1673 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:06:08.684895 kubelet[1673]: I1213 14:06:08.684860 1673 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:06:08.688918 kubelet[1673]: I1213 14:06:08.688892 1673 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:06:08.688918 kubelet[1673]: 
I1213 14:06:08.688923 1673 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:06:08.688995 kubelet[1673]: I1213 14:06:08.688944 1673 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:06:08.689071 kubelet[1673]: I1213 14:06:08.689049 1673 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:06:08.691712 kubelet[1673]: W1213 14:06:08.691646 1673 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.68:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Dec 13 14:06:08.691712 kubelet[1673]: E1213 14:06:08.691705 1673 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.68:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Dec 13 14:06:08.692087 kubelet[1673]: W1213 14:06:08.692013 1673 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Dec 13 14:06:08.692087 kubelet[1673]: E1213 14:06:08.692059 1673 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Dec 13 14:06:08.692331 kubelet[1673]: I1213 14:06:08.692240 1673 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:06:08.692712 kubelet[1673]: I1213 14:06:08.692688 1673 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:06:08.695777 kubelet[1673]: W1213 14:06:08.695747 1673 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:06:08.696554 kubelet[1673]: I1213 14:06:08.696536 1673 server.go:1256] "Started kubelet" Dec 13 14:06:08.696598 kubelet[1673]: I1213 14:06:08.696592 1673 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:06:08.697464 kubelet[1673]: I1213 14:06:08.697364 1673 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:06:08.701682 kubelet[1673]: I1213 14:06:08.700543 1673 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:06:08.701682 kubelet[1673]: I1213 14:06:08.700760 1673 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:06:08.703794 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Dec 13 14:06:08.708894 kubelet[1673]: I1213 14:06:08.708864 1673 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:06:08.713074 kubelet[1673]: E1213 14:06:08.711624 1673 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 14:06:08.713074 kubelet[1673]: I1213 14:06:08.711651 1673 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:06:08.713074 kubelet[1673]: I1213 14:06:08.711769 1673 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:06:08.713074 kubelet[1673]: I1213 14:06:08.711833 1673 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:06:08.713074 kubelet[1673]: W1213 14:06:08.712207 1673 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Dec 13 14:06:08.713074 kubelet[1673]: E1213 14:06:08.712256 1673 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Dec 13 14:06:08.713074 kubelet[1673]: E1213 14:06:08.712710 1673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="200ms" Dec 13 14:06:08.713301 kubelet[1673]: I1213 14:06:08.713165 1673 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:06:08.713301 kubelet[1673]: I1213 14:06:08.713264 1673 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:06:08.713647 kubelet[1673]: E1213 14:06:08.713622 1673 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.68:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.68:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810c1a47998edba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 14:06:08.696503738 +0000 UTC m=+0.718984001,LastTimestamp:2024-12-13 14:06:08.696503738 +0000 UTC m=+0.718984001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 14:06:08.713839 kubelet[1673]: E1213 14:06:08.713823 1673 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:06:08.715591 kubelet[1673]: I1213 14:06:08.715571 1673 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:06:08.729719 kubelet[1673]: I1213 14:06:08.729695 1673 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:06:08.729833 kubelet[1673]: I1213 14:06:08.729823 1673 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:06:08.729910 kubelet[1673]: I1213 14:06:08.729900 1673 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:06:08.730074 kubelet[1673]: I1213 14:06:08.730045 1673 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:06:08.731014 kubelet[1673]: I1213 14:06:08.730990 1673 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 14:06:08.731014 kubelet[1673]: I1213 14:06:08.731019 1673 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:06:08.731101 kubelet[1673]: I1213 14:06:08.731035 1673 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:06:08.731245 kubelet[1673]: E1213 14:06:08.731231 1673 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:06:08.731901 kubelet[1673]: W1213 14:06:08.731854 1673 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Dec 13 14:06:08.731970 kubelet[1673]: E1213 14:06:08.731908 1673 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Dec 13 14:06:08.795521 kubelet[1673]: I1213 14:06:08.795484 1673 policy_none.go:49] "None policy: Start" Dec 13 14:06:08.796466 kubelet[1673]: I1213 14:06:08.796427 1673 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:06:08.796599 kubelet[1673]: I1213 14:06:08.796586 1673 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:06:08.806458 systemd[1]: Created slice kubepods.slice. Dec 13 14:06:08.810691 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:06:08.813001 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 14:06:08.813684 kubelet[1673]: I1213 14:06:08.813664 1673 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:06:08.815941 kubelet[1673]: E1213 14:06:08.815914 1673 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost" Dec 13 14:06:08.830089 kubelet[1673]: I1213 14:06:08.830056 1673 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:06:08.830373 kubelet[1673]: I1213 14:06:08.830348 1673 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:06:08.831402 kubelet[1673]: I1213 14:06:08.831381 1673 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 14:06:08.832755 kubelet[1673]: I1213 14:06:08.832737 1673 topology_manager.go:215] "Topology Admit Handler" podUID="6c1742dc21c6ef261b877581898fd23b" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 14:06:08.834811 kubelet[1673]: E1213 14:06:08.832927 1673 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 14:06:08.835943 kubelet[1673]: I1213 14:06:08.835927 1673 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 14:06:08.840132 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. Dec 13 14:06:08.853743 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. Dec 13 14:06:08.867278 systemd[1]: Created slice kubepods-burstable-pod6c1742dc21c6ef261b877581898fd23b.slice. 
Dec 13 14:06:08.913879 kubelet[1673]: I1213 14:06:08.912297 1673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c1742dc21c6ef261b877581898fd23b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6c1742dc21c6ef261b877581898fd23b\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:06:08.914185 kubelet[1673]: I1213 14:06:08.914112 1673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:06:08.914254 kubelet[1673]: I1213 14:06:08.914195 1673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:06:08.914254 kubelet[1673]: I1213 14:06:08.914225 1673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:06:08.914254 kubelet[1673]: I1213 14:06:08.914246 1673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 14:06:08.914331 kubelet[1673]: I1213 14:06:08.914265 1673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c1742dc21c6ef261b877581898fd23b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c1742dc21c6ef261b877581898fd23b\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:06:08.914331 kubelet[1673]: I1213 14:06:08.914316 1673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c1742dc21c6ef261b877581898fd23b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c1742dc21c6ef261b877581898fd23b\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:06:08.914375 kubelet[1673]: I1213 14:06:08.914336 1673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:06:08.914397 kubelet[1673]: I1213 14:06:08.914356 1673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " 
pod="kube-system/kube-controller-manager-localhost" Dec 13 14:06:08.914805 kubelet[1673]: E1213 14:06:08.914781 1673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="400ms" Dec 13 14:06:09.017593 kubelet[1673]: I1213 14:06:09.017567 1673 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:06:09.017986 kubelet[1673]: E1213 14:06:09.017973 1673 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost" Dec 13 14:06:09.153721 kubelet[1673]: E1213 14:06:09.153691 1673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:09.156062 env[1212]: time="2024-12-13T14:06:09.156014938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 14:06:09.165425 kubelet[1673]: E1213 14:06:09.165234 1673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:09.166647 env[1212]: time="2024-12-13T14:06:09.166606098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 14:06:09.169775 kubelet[1673]: E1213 14:06:09.169754 1673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:09.170200 env[1212]: time="2024-12-13T14:06:09.170160618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6c1742dc21c6ef261b877581898fd23b,Namespace:kube-system,Attempt:0,}" Dec 13 14:06:09.315724 kubelet[1673]: E1213 14:06:09.315689 1673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="800ms" Dec 13 14:06:09.419308 kubelet[1673]: I1213 14:06:09.419225 1673 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:06:09.419689 kubelet[1673]: E1213 14:06:09.419660 1673 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost" Dec 13 14:06:09.511251 kubelet[1673]: W1213 14:06:09.511184 1673 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Dec 13 14:06:09.511251 kubelet[1673]: E1213 14:06:09.511251 1673 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection 
refused Dec 13 14:06:09.536880 kubelet[1673]: E1213 14:06:09.536848 1673 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.68:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.68:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810c1a47998edba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 14:06:08.696503738 +0000 UTC m=+0.718984001,LastTimestamp:2024-12-13 14:06:08.696503738 +0000 UTC m=+0.718984001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 14:06:09.559187 kubelet[1673]: W1213 14:06:09.559154 1673 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Dec 13 14:06:09.559238 kubelet[1673]: E1213 14:06:09.559193 1673 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Dec 13 14:06:09.574529 kubelet[1673]: W1213 14:06:09.574483 1673 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.68:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Dec 13 14:06:09.574529 kubelet[1673]: E1213 14:06:09.574526 1673 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.68:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Dec 13 14:06:09.659208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3472339231.mount: Deactivated successfully. 
Dec 13 14:06:09.662615 env[1212]: time="2024-12-13T14:06:09.662478618Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:06:09.664720 env[1212]: time="2024-12-13T14:06:09.664657138Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:06:09.666374 env[1212]: time="2024-12-13T14:06:09.666344178Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:06:09.667414 env[1212]: time="2024-12-13T14:06:09.667388538Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:06:09.673843 env[1212]: time="2024-12-13T14:06:09.673760938Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:06:09.677034 env[1212]: time="2024-12-13T14:06:09.677007618Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:06:09.678521 env[1212]: time="2024-12-13T14:06:09.678492018Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:06:09.680040 env[1212]: time="2024-12-13T14:06:09.680012098Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:06:09.681467 env[1212]: time="2024-12-13T14:06:09.681433138Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:06:09.682074 env[1212]: time="2024-12-13T14:06:09.682050978Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:06:09.682711 env[1212]: time="2024-12-13T14:06:09.682689178Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:06:09.683541 env[1212]: time="2024-12-13T14:06:09.683514618Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:06:09.701946 env[1212]: time="2024-12-13T14:06:09.701864698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:06:09.701946 env[1212]: time="2024-12-13T14:06:09.701906058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:06:09.701946 env[1212]: time="2024-12-13T14:06:09.701916778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:06:09.702275 env[1212]: time="2024-12-13T14:06:09.702080818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:06:09.702275 env[1212]: time="2024-12-13T14:06:09.702116058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:06:09.702275 env[1212]: time="2024-12-13T14:06:09.702126378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:06:09.702452 env[1212]: time="2024-12-13T14:06:09.702360978Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/87a00cc35de8db6e2010733bc492c76a1b098637d5592da9a924bc81a141742c pid=1719 runtime=io.containerd.runc.v2 Dec 13 14:06:09.702553 env[1212]: time="2024-12-13T14:06:09.702508258Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc41453f1eb511edf5e9c9824f14fd18b963ebd8be1e8a0bf864fc5bc64560d4 pid=1721 runtime=io.containerd.runc.v2 Dec 13 14:06:09.705126 env[1212]: time="2024-12-13T14:06:09.705043458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:06:09.705126 env[1212]: time="2024-12-13T14:06:09.705080698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:06:09.705126 env[1212]: time="2024-12-13T14:06:09.705091258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:06:09.705406 env[1212]: time="2024-12-13T14:06:09.705359898Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2202c26c50e64dea9a2c77abfc458158eb171667ae902a0bc9d8fd1f185cef61 pid=1748 runtime=io.containerd.runc.v2 Dec 13 14:06:09.714233 systemd[1]: Started cri-containerd-dc41453f1eb511edf5e9c9824f14fd18b963ebd8be1e8a0bf864fc5bc64560d4.scope. Dec 13 14:06:09.719025 systemd[1]: Started cri-containerd-2202c26c50e64dea9a2c77abfc458158eb171667ae902a0bc9d8fd1f185cef61.scope. Dec 13 14:06:09.739250 systemd[1]: Started cri-containerd-87a00cc35de8db6e2010733bc492c76a1b098637d5592da9a924bc81a141742c.scope. 
Dec 13 14:06:09.756256 kubelet[1673]: W1213 14:06:09.755159 1673 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Dec 13 14:06:09.756256 kubelet[1673]: E1213 14:06:09.755217 1673 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Dec 13 14:06:09.784723 env[1212]: time="2024-12-13T14:06:09.784606218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6c1742dc21c6ef261b877581898fd23b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2202c26c50e64dea9a2c77abfc458158eb171667ae902a0bc9d8fd1f185cef61\"" Dec 13 14:06:09.786179 kubelet[1673]: E1213 14:06:09.786130 1673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:09.787159 env[1212]: time="2024-12-13T14:06:09.787115618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc41453f1eb511edf5e9c9824f14fd18b963ebd8be1e8a0bf864fc5bc64560d4\"" Dec 13 14:06:09.793080 env[1212]: time="2024-12-13T14:06:09.792634858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"87a00cc35de8db6e2010733bc492c76a1b098637d5592da9a924bc81a141742c\"" Dec 13 14:06:09.793146 kubelet[1673]: E1213 14:06:09.792920 1673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:09.793650 env[1212]: time="2024-12-13T14:06:09.793383458Z" level=info msg="CreateContainer within sandbox \"2202c26c50e64dea9a2c77abfc458158eb171667ae902a0bc9d8fd1f185cef61\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:06:09.793711 kubelet[1673]: E1213 14:06:09.793404 1673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:09.794592 env[1212]: time="2024-12-13T14:06:09.794561898Z" level=info msg="CreateContainer within sandbox \"dc41453f1eb511edf5e9c9824f14fd18b963ebd8be1e8a0bf864fc5bc64560d4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:06:09.795215 env[1212]: time="2024-12-13T14:06:09.795153818Z" level=info msg="CreateContainer within sandbox \"87a00cc35de8db6e2010733bc492c76a1b098637d5592da9a924bc81a141742c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:06:09.807611 env[1212]: time="2024-12-13T14:06:09.807558058Z" level=info msg="CreateContainer within sandbox \"2202c26c50e64dea9a2c77abfc458158eb171667ae902a0bc9d8fd1f185cef61\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"00cd763ac7652e6ef3a128b341b8f82af8305cac627280705d206129b95c2df1\"" Dec 13 14:06:09.808347 env[1212]: time="2024-12-13T14:06:09.808311738Z" level=info msg="StartContainer for 
\"00cd763ac7652e6ef3a128b341b8f82af8305cac627280705d206129b95c2df1\"" Dec 13 14:06:09.814600 env[1212]: time="2024-12-13T14:06:09.814550858Z" level=info msg="CreateContainer within sandbox \"dc41453f1eb511edf5e9c9824f14fd18b963ebd8be1e8a0bf864fc5bc64560d4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bd9ab980dca42c7c969d8ad683ab9bd5ed809d4fa1cf6f08e945b498efcec98e\"" Dec 13 14:06:09.815074 env[1212]: time="2024-12-13T14:06:09.815047338Z" level=info msg="StartContainer for \"bd9ab980dca42c7c969d8ad683ab9bd5ed809d4fa1cf6f08e945b498efcec98e\"" Dec 13 14:06:09.815282 env[1212]: time="2024-12-13T14:06:09.815254098Z" level=info msg="CreateContainer within sandbox \"87a00cc35de8db6e2010733bc492c76a1b098637d5592da9a924bc81a141742c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ad8c683b7b206b1ec7bb0b8e1f2f6f3b3049ad85553c7d8710b338e68f40322a\"" Dec 13 14:06:09.815665 env[1212]: time="2024-12-13T14:06:09.815635058Z" level=info msg="StartContainer for \"ad8c683b7b206b1ec7bb0b8e1f2f6f3b3049ad85553c7d8710b338e68f40322a\"" Dec 13 14:06:09.825673 systemd[1]: Started cri-containerd-00cd763ac7652e6ef3a128b341b8f82af8305cac627280705d206129b95c2df1.scope. Dec 13 14:06:09.837099 systemd[1]: Started cri-containerd-bd9ab980dca42c7c969d8ad683ab9bd5ed809d4fa1cf6f08e945b498efcec98e.scope. Dec 13 14:06:09.839911 systemd[1]: Started cri-containerd-ad8c683b7b206b1ec7bb0b8e1f2f6f3b3049ad85553c7d8710b338e68f40322a.scope. Dec 13 14:06:09.895921 env[1212]: time="2024-12-13T14:06:09.895882058Z" level=info msg="StartContainer for \"00cd763ac7652e6ef3a128b341b8f82af8305cac627280705d206129b95c2df1\" returns successfully" Dec 13 14:06:09.897429 env[1212]: time="2024-12-13T14:06:09.897378858Z" level=info msg="StartContainer for \"bd9ab980dca42c7c969d8ad683ab9bd5ed809d4fa1cf6f08e945b498efcec98e\" returns successfully" Dec 13 14:06:09.927865 env[1212]: time="2024-12-13T14:06:09.927785098Z" level=info msg="StartContainer for \"ad8c683b7b206b1ec7bb0b8e1f2f6f3b3049ad85553c7d8710b338e68f40322a\" returns successfully" Dec 13 14:06:10.221758 kubelet[1673]: I1213 14:06:10.221649 1673 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:06:10.738319 kubelet[1673]: E1213 14:06:10.738245 1673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:10.739113 kubelet[1673]: E1213 14:06:10.739096 1673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:10.740517 kubelet[1673]: E1213 14:06:10.740502 1673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:11.742496 kubelet[1673]: E1213 14:06:11.742469 1673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:11.804557 kubelet[1673]: E1213 14:06:11.804495 1673 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 14:06:11.908471 kubelet[1673]: I1213 14:06:11.908439 1673 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 14:06:11.917413 kubelet[1673]: 
E1213 14:06:11.917390 1673 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 14:06:12.017766 kubelet[1673]: E1213 14:06:12.017662 1673 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 14:06:12.118327 kubelet[1673]: E1213 14:06:12.118290 1673 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 14:06:12.691977 kubelet[1673]: I1213 14:06:12.691937 1673 apiserver.go:52] "Watching apiserver" Dec 13 14:06:12.712548 kubelet[1673]: I1213 14:06:12.712518 1673 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:06:13.523020 kubelet[1673]: E1213 14:06:13.522990 1673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:13.540333 kubelet[1673]: E1213 14:06:13.540299 1673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:13.746561 kubelet[1673]: E1213 14:06:13.746527 1673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:13.747617 kubelet[1673]: E1213 14:06:13.747012 1673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:14.630512 systemd[1]: Reloading. Dec 13 14:06:14.679000 /usr/lib/systemd/system-generators/torcx-generator[1971]: time="2024-12-13T14:06:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:06:14.680011 /usr/lib/systemd/system-generators/torcx-generator[1971]: time="2024-12-13T14:06:14Z" level=info msg="torcx already run" Dec 13 14:06:14.737191 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:06:14.737211 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:06:14.754686 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:06:14.835499 kubelet[1673]: I1213 14:06:14.835448 1673 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:06:14.835608 systemd[1]: Stopping kubelet.service... Dec 13 14:06:14.855540 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:06:14.855710 systemd[1]: Stopped kubelet.service. Dec 13 14:06:14.855751 systemd[1]: kubelet.service: Consumed 1.076s CPU time. Dec 13 14:06:14.857149 systemd[1]: Starting kubelet.service... Dec 13 14:06:14.934540 systemd[1]: Started kubelet.service. 
Dec 13 14:06:15.015846 kubelet[2012]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:06:15.015846 kubelet[2012]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:06:15.015846 kubelet[2012]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:06:15.015846 kubelet[2012]: I1213 14:06:15.015302 2012 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:06:15.019520 kubelet[2012]: I1213 14:06:15.019495 2012 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:06:15.019520 kubelet[2012]: I1213 14:06:15.019516 2012 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:06:15.020546 kubelet[2012]: I1213 14:06:15.019710 2012 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:06:15.021723 kubelet[2012]: I1213 14:06:15.021260 2012 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 14:06:15.024212 kubelet[2012]: I1213 14:06:15.024149 2012 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:06:15.033888 kubelet[2012]: I1213 14:06:15.033868 2012 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:06:15.034085 kubelet[2012]: I1213 14:06:15.034071 2012 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:06:15.034531 kubelet[2012]: I1213 14:06:15.034300 2012 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:06:15.034531 kubelet[2012]: I1213 14:06:15.034327 2012 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:06:15.034531 kubelet[2012]: I1213 14:06:15.034337 2012 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:06:15.034531 kubelet[2012]: I1213 14:06:15.034367 2012 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:06:15.034531 kubelet[2012]: I1213 14:06:15.034464 2012 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:06:15.034531 kubelet[2012]: I1213 14:06:15.034478 2012 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:06:15.034531 kubelet[2012]: I1213 14:06:15.034498 2012 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:06:15.034789 kubelet[2012]: I1213 14:06:15.034512 2012 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:06:15.039960 kubelet[2012]: I1213 14:06:15.039939 2012 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:06:15.039997 sudo[2027]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 14:06:15.040249 sudo[2027]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 14:06:15.040529 kubelet[2012]: I1213 14:06:15.040511 2012 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:06:15.040982 kubelet[2012]: I1213 14:06:15.040964 2012 server.go:1256] "Started kubelet" Dec 13 14:06:15.041946 kubelet[2012]: I1213 14:06:15.041923 2012 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:06:15.043033 kubelet[2012]: I1213 14:06:15.043009 2012 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:06:15.043728 
kubelet[2012]: I1213 14:06:15.043707 2012 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:06:15.050823 kubelet[2012]: I1213 14:06:15.050790 2012 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:06:15.050973 kubelet[2012]: I1213 14:06:15.050947 2012 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:06:15.051532 kubelet[2012]: I1213 14:06:15.051508 2012 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:06:15.051642 kubelet[2012]: I1213 14:06:15.051624 2012 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:06:15.051863 kubelet[2012]: I1213 14:06:15.051848 2012 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:06:15.068897 kubelet[2012]: I1213 14:06:15.068766 2012 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:06:15.068897 kubelet[2012]: I1213 14:06:15.068857 2012 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:06:15.071012 kubelet[2012]: I1213 14:06:15.070752 2012 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:06:15.092304 kubelet[2012]: I1213 14:06:15.092279 2012 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:06:15.096892 kubelet[2012]: I1213 14:06:15.095455 2012 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 14:06:15.096892 kubelet[2012]: I1213 14:06:15.095483 2012 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:06:15.096892 kubelet[2012]: I1213 14:06:15.095501 2012 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:06:15.096892 kubelet[2012]: E1213 14:06:15.095544 2012 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:06:15.115209 kubelet[2012]: I1213 14:06:15.115189 2012 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:06:15.115209 kubelet[2012]: I1213 14:06:15.115207 2012 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:06:15.115322 kubelet[2012]: I1213 14:06:15.115225 2012 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:06:15.115382 kubelet[2012]: I1213 14:06:15.115369 2012 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 14:06:15.115414 kubelet[2012]: I1213 14:06:15.115394 2012 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 14:06:15.115414 kubelet[2012]: I1213 14:06:15.115401 2012 policy_none.go:49] "None policy: Start" Dec 13 14:06:15.116040 kubelet[2012]: I1213 14:06:15.116023 2012 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:06:15.116159 kubelet[2012]: I1213 14:06:15.116150 2012 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:06:15.116376 kubelet[2012]: I1213 14:06:15.116364 2012 state_mem.go:75] "Updated machine memory state" Dec 13 14:06:15.121104 kubelet[2012]: I1213 14:06:15.121081 2012 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:06:15.122507 kubelet[2012]: I1213 14:06:15.122493 2012 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:06:15.155589 
kubelet[2012]: I1213 14:06:15.155567 2012 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:06:15.162468 kubelet[2012]: I1213 14:06:15.162446 2012 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 14:06:15.162551 kubelet[2012]: I1213 14:06:15.162519 2012 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 14:06:15.196204 kubelet[2012]: I1213 14:06:15.196113 2012 topology_manager.go:215] "Topology Admit Handler" podUID="6c1742dc21c6ef261b877581898fd23b" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 14:06:15.196510 kubelet[2012]: I1213 14:06:15.196494 2012 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 14:06:15.196764 kubelet[2012]: I1213 14:06:15.196750 2012 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 14:06:15.203624 kubelet[2012]: E1213 14:06:15.203600 2012 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 13 14:06:15.203714 kubelet[2012]: E1213 14:06:15.203626 2012 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 13 14:06:15.252971 kubelet[2012]: I1213 14:06:15.252950 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c1742dc21c6ef261b877581898fd23b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c1742dc21c6ef261b877581898fd23b\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:06:15.253031 kubelet[2012]: I1213 14:06:15.253021 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c1742dc21c6ef261b877581898fd23b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c1742dc21c6ef261b877581898fd23b\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:06:15.253075 kubelet[2012]: I1213 14:06:15.253050 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c1742dc21c6ef261b877581898fd23b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6c1742dc21c6ef261b877581898fd23b\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:06:15.253133 kubelet[2012]: I1213 14:06:15.253122 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:06:15.253169 kubelet[2012]: I1213 14:06:15.253150 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:06:15.253223 kubelet[2012]: I1213 14:06:15.253210 2012 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 14:06:15.253262 kubelet[2012]: I1213 14:06:15.253235 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:06:15.253301 kubelet[2012]: I1213 14:06:15.253281 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:06:15.253327 kubelet[2012]: I1213 14:06:15.253312 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:06:15.488661 sudo[2027]: pam_unix(sudo:session): session closed for user root Dec 13 14:06:15.502930 kubelet[2012]: E1213 14:06:15.502896 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:15.504580 kubelet[2012]: E1213 14:06:15.504562 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:15.504786 kubelet[2012]: E1213 14:06:15.504763 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:16.035697 kubelet[2012]: I1213 14:06:16.035654 2012 apiserver.go:52] "Watching apiserver" Dec 13 14:06:16.052709 kubelet[2012]: I1213 14:06:16.052669 2012 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:06:16.104590 kubelet[2012]: E1213 14:06:16.104549 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:16.119812 kubelet[2012]: E1213 14:06:16.119769 2012 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 13 14:06:16.120313 kubelet[2012]: E1213 14:06:16.120294 2012 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 14:06:16.120613 kubelet[2012]: E1213 14:06:16.120328 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 
14:06:16.120873 kubelet[2012]: E1213 14:06:16.120860 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:16.122948 kubelet[2012]: I1213 14:06:16.122927 2012 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.122891926 podStartE2EDuration="1.122891926s" podCreationTimestamp="2024-12-13 14:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:06:16.121713123 +0000 UTC m=+1.182827746" watchObservedRunningTime="2024-12-13 14:06:16.122891926 +0000 UTC m=+1.184006509" Dec 13 14:06:16.129142 kubelet[2012]: I1213 14:06:16.129108 2012 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.129081422 podStartE2EDuration="3.129081422s" podCreationTimestamp="2024-12-13 14:06:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:06:16.129043061 +0000 UTC m=+1.190157684" watchObservedRunningTime="2024-12-13 14:06:16.129081422 +0000 UTC m=+1.190196045" Dec 13 14:06:16.143359 kubelet[2012]: I1213 14:06:16.143339 2012 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.143300058 podStartE2EDuration="3.143300058s" podCreationTimestamp="2024-12-13 14:06:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:06:16.135791719 +0000 UTC m=+1.196906342" watchObservedRunningTime="2024-12-13 14:06:16.143300058 +0000 UTC m=+1.204414681" Dec 13 14:06:17.105795 kubelet[2012]: E1213 14:06:17.105767 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:17.106122 kubelet[2012]: E1213 14:06:17.105896 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:17.106122 kubelet[2012]: E1213 14:06:17.105948 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:17.351379 sudo[1313]: pam_unix(sudo:session): session closed for user root Dec 13 14:06:17.353008 sshd[1309]: pam_unix(sshd:session): session closed for user core Dec 13 14:06:17.355602 systemd-logind[1199]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:06:17.356646 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:06:17.356823 systemd[1]: session-5.scope: Consumed 7.711s CPU time. Dec 13 14:06:17.357633 systemd-logind[1199]: Removed session 5. Dec 13 14:06:17.358085 systemd[1]: sshd@4-10.0.0.68:22-10.0.0.1:48016.service: Deactivated successfully. 
Dec 13 14:06:18.109369 kubelet[2012]: E1213 14:06:18.107635 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:18.109369 kubelet[2012]: E1213 14:06:18.108455 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:20.214517 kubelet[2012]: E1213 14:06:20.214485 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:21.111864 kubelet[2012]: E1213 14:06:21.111831 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:22.112692 kubelet[2012]: E1213 14:06:22.112661 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:26.662472 kubelet[2012]: E1213 14:06:26.662445 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:27.820013 kubelet[2012]: E1213 14:06:27.818925 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:29.404572 update_engine[1202]: I1213 14:06:29.404528 1202 update_attempter.cc:509] Updating boot flags... Dec 13 14:06:29.474230 kubelet[2012]: I1213 14:06:29.474005 2012 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 14:06:29.474902 env[1212]: time="2024-12-13T14:06:29.474802151Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:06:29.475142 kubelet[2012]: I1213 14:06:29.474948 2012 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 14:06:30.269303 kubelet[2012]: I1213 14:06:30.269263 2012 topology_manager.go:215] "Topology Admit Handler" podUID="3bc95727-d56d-4040-9398-696c9723233a" podNamespace="kube-system" podName="kube-proxy-wqflf" Dec 13 14:06:30.275155 kubelet[2012]: I1213 14:06:30.275111 2012 topology_manager.go:215] "Topology Admit Handler" podUID="6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94" podNamespace="kube-system" podName="cilium-kfcrr" Dec 13 14:06:30.276145 systemd[1]: Created slice kubepods-besteffort-pod3bc95727_d56d_4040_9398_696c9723233a.slice. Dec 13 14:06:30.286879 systemd[1]: Created slice kubepods-burstable-pod6bb3f3b6_584e_4df1_9d4c_aa3f843d9b94.slice. 
Dec 13 14:06:30.353658 kubelet[2012]: I1213 14:06:30.353621 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-xtables-lock\") pod \"cilium-kfcrr\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " pod="kube-system/cilium-kfcrr" Dec 13 14:06:30.353808 kubelet[2012]: I1213 14:06:30.353671 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rkkm\" (UniqueName: \"kubernetes.io/projected/3bc95727-d56d-4040-9398-696c9723233a-kube-api-access-6rkkm\") pod \"kube-proxy-wqflf\" (UID: \"3bc95727-d56d-4040-9398-696c9723233a\") " pod="kube-system/kube-proxy-wqflf" Dec 13 14:06:30.353808 kubelet[2012]: I1213 14:06:30.353692 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-cilium-cgroup\") pod \"cilium-kfcrr\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " pod="kube-system/cilium-kfcrr" Dec 13 14:06:30.353808 kubelet[2012]: I1213 14:06:30.353709 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3bc95727-d56d-4040-9398-696c9723233a-kube-proxy\") pod \"kube-proxy-wqflf\" (UID: \"3bc95727-d56d-4040-9398-696c9723233a\") " pod="kube-system/kube-proxy-wqflf" Dec 13 14:06:30.353808 kubelet[2012]: I1213 14:06:30.353736 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bc95727-d56d-4040-9398-696c9723233a-lib-modules\") pod \"kube-proxy-wqflf\" (UID: \"3bc95727-d56d-4040-9398-696c9723233a\") " pod="kube-system/kube-proxy-wqflf" Dec 13 14:06:30.353808 kubelet[2012]: I1213 14:06:30.353756 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-etc-cni-netd\") pod \"cilium-kfcrr\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " pod="kube-system/cilium-kfcrr" Dec 13 14:06:30.353940 kubelet[2012]: I1213 14:06:30.353775 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhfpv\" (UniqueName: \"kubernetes.io/projected/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-kube-api-access-vhfpv\") pod \"cilium-kfcrr\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " pod="kube-system/cilium-kfcrr" Dec 13 14:06:30.353940 kubelet[2012]: I1213 14:06:30.353806 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-cilium-config-path\") pod \"cilium-kfcrr\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " pod="kube-system/cilium-kfcrr" Dec 13 14:06:30.353940 kubelet[2012]: I1213 14:06:30.353828 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-host-proc-sys-net\") pod \"cilium-kfcrr\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " pod="kube-system/cilium-kfcrr" Dec 13 14:06:30.353940 kubelet[2012]: I1213 14:06:30.353846 2012 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bc95727-d56d-4040-9398-696c9723233a-xtables-lock\") pod \"kube-proxy-wqflf\" (UID: \"3bc95727-d56d-4040-9398-696c9723233a\") " pod="kube-system/kube-proxy-wqflf" Dec 13 14:06:30.353940 kubelet[2012]: I1213 14:06:30.353866 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-host-proc-sys-kernel\") pod \"cilium-kfcrr\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " pod="kube-system/cilium-kfcrr" Dec 13 14:06:30.354049 kubelet[2012]: I1213 14:06:30.353893 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-lib-modules\") pod \"cilium-kfcrr\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " pod="kube-system/cilium-kfcrr" Dec 13 14:06:30.354049 kubelet[2012]: I1213 14:06:30.353915 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-hubble-tls\") pod \"cilium-kfcrr\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " pod="kube-system/cilium-kfcrr" Dec 13 14:06:30.354049 kubelet[2012]: I1213 14:06:30.353935 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-clustermesh-secrets\") pod \"cilium-kfcrr\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " pod="kube-system/cilium-kfcrr" Dec 13 14:06:30.354049 kubelet[2012]: I1213 14:06:30.353963 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-hostproc\") pod \"cilium-kfcrr\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " pod="kube-system/cilium-kfcrr" Dec 13 14:06:30.354049 kubelet[2012]: I1213 14:06:30.353997 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-cni-path\") pod \"cilium-kfcrr\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " pod="kube-system/cilium-kfcrr" Dec 13 14:06:30.354158 kubelet[2012]: I1213 14:06:30.354055 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-cilium-run\") pod \"cilium-kfcrr\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " pod="kube-system/cilium-kfcrr" Dec 13 14:06:30.354158 kubelet[2012]: I1213 14:06:30.354103 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-bpf-maps\") pod \"cilium-kfcrr\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " pod="kube-system/cilium-kfcrr" Dec 13 14:06:30.531940 kubelet[2012]: I1213 14:06:30.531837 2012 topology_manager.go:215] "Topology Admit Handler" podUID="ba562170-4539-4094-8b09-8d65c240a38c" podNamespace="kube-system" podName="cilium-operator-5cc964979-fp4hd" Dec 13 14:06:30.540098 systemd[1]: Created slice 
kubepods-besteffort-podba562170_4539_4094_8b09_8d65c240a38c.slice. Dec 13 14:06:30.556474 kubelet[2012]: I1213 14:06:30.556444 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sspnk\" (UniqueName: \"kubernetes.io/projected/ba562170-4539-4094-8b09-8d65c240a38c-kube-api-access-sspnk\") pod \"cilium-operator-5cc964979-fp4hd\" (UID: \"ba562170-4539-4094-8b09-8d65c240a38c\") " pod="kube-system/cilium-operator-5cc964979-fp4hd" Dec 13 14:06:30.556774 kubelet[2012]: I1213 14:06:30.556759 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ba562170-4539-4094-8b09-8d65c240a38c-cilium-config-path\") pod \"cilium-operator-5cc964979-fp4hd\" (UID: \"ba562170-4539-4094-8b09-8d65c240a38c\") " pod="kube-system/cilium-operator-5cc964979-fp4hd" Dec 13 14:06:30.583317 kubelet[2012]: E1213 14:06:30.583289 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:30.584209 env[1212]: time="2024-12-13T14:06:30.584150646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wqflf,Uid:3bc95727-d56d-4040-9398-696c9723233a,Namespace:kube-system,Attempt:0,}" Dec 13 14:06:30.589212 kubelet[2012]: E1213 14:06:30.589188 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:30.594860 env[1212]: time="2024-12-13T14:06:30.594818097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kfcrr,Uid:6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94,Namespace:kube-system,Attempt:0,}" Dec 13 14:06:30.601389 env[1212]: time="2024-12-13T14:06:30.601320183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:06:30.601389 env[1212]: time="2024-12-13T14:06:30.601361983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:06:30.601389 env[1212]: time="2024-12-13T14:06:30.601372503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:06:30.601596 env[1212]: time="2024-12-13T14:06:30.601503423Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/27a9e84830ddbec4a797c60b98de65092799099aed77fe2a5cc18e2ea00809cd pid=2119 runtime=io.containerd.runc.v2 Dec 13 14:06:30.608481 env[1212]: time="2024-12-13T14:06:30.608395031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:06:30.608607 env[1212]: time="2024-12-13T14:06:30.608478511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:06:30.608607 env[1212]: time="2024-12-13T14:06:30.608489911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:06:30.610772 env[1212]: time="2024-12-13T14:06:30.610644193Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e9912eda912f39e828cf5798921e211176a243f671969f816a68e7328851a6a pid=2142 runtime=io.containerd.runc.v2 Dec 13 14:06:30.612945 systemd[1]: Started cri-containerd-27a9e84830ddbec4a797c60b98de65092799099aed77fe2a5cc18e2ea00809cd.scope. Dec 13 14:06:30.624728 systemd[1]: Started cri-containerd-0e9912eda912f39e828cf5798921e211176a243f671969f816a68e7328851a6a.scope. Dec 13 14:06:30.653850 env[1212]: time="2024-12-13T14:06:30.653807957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wqflf,Uid:3bc95727-d56d-4040-9398-696c9723233a,Namespace:kube-system,Attempt:0,} returns sandbox id \"27a9e84830ddbec4a797c60b98de65092799099aed77fe2a5cc18e2ea00809cd\"" Dec 13 14:06:30.655197 kubelet[2012]: E1213 14:06:30.654633 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:30.661233 env[1212]: time="2024-12-13T14:06:30.659485683Z" level=info msg="CreateContainer within sandbox \"27a9e84830ddbec4a797c60b98de65092799099aed77fe2a5cc18e2ea00809cd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:06:30.666001 env[1212]: time="2024-12-13T14:06:30.665952930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kfcrr,Uid:6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e9912eda912f39e828cf5798921e211176a243f671969f816a68e7328851a6a\"" Dec 13 14:06:30.667200 kubelet[2012]: E1213 14:06:30.666903 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:30.668982 env[1212]: time="2024-12-13T14:06:30.668939093Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:06:30.676350 env[1212]: time="2024-12-13T14:06:30.676305900Z" level=info msg="CreateContainer within sandbox \"27a9e84830ddbec4a797c60b98de65092799099aed77fe2a5cc18e2ea00809cd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fdcb3c3d119f004264be8b6ea20f3f56660c7443a1ccf00c6477960dd550ff5a\"" Dec 13 14:06:30.676954 env[1212]: time="2024-12-13T14:06:30.676928381Z" level=info msg="StartContainer for \"fdcb3c3d119f004264be8b6ea20f3f56660c7443a1ccf00c6477960dd550ff5a\"" Dec 13 14:06:30.694790 systemd[1]: Started cri-containerd-fdcb3c3d119f004264be8b6ea20f3f56660c7443a1ccf00c6477960dd550ff5a.scope. 
Dec 13 14:06:30.744944 env[1212]: time="2024-12-13T14:06:30.744888371Z" level=info msg="StartContainer for \"fdcb3c3d119f004264be8b6ea20f3f56660c7443a1ccf00c6477960dd550ff5a\" returns successfully" Dec 13 14:06:30.844232 kubelet[2012]: E1213 14:06:30.843766 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:30.844479 env[1212]: time="2024-12-13T14:06:30.844420913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-fp4hd,Uid:ba562170-4539-4094-8b09-8d65c240a38c,Namespace:kube-system,Attempt:0,}" Dec 13 14:06:30.858637 env[1212]: time="2024-12-13T14:06:30.858559967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:06:30.858637 env[1212]: time="2024-12-13T14:06:30.858601287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:06:30.858637 env[1212]: time="2024-12-13T14:06:30.858611847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:06:30.858817 env[1212]: time="2024-12-13T14:06:30.858746847Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2562b04a95884604ee2e53d512f0249d3f7a0fb9ef50a02898dbc8c1e64f22c pid=2264 runtime=io.containerd.runc.v2 Dec 13 14:06:30.869038 systemd[1]: Started cri-containerd-c2562b04a95884604ee2e53d512f0249d3f7a0fb9ef50a02898dbc8c1e64f22c.scope. Dec 13 14:06:30.910027 env[1212]: time="2024-12-13T14:06:30.909840180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-fp4hd,Uid:ba562170-4539-4094-8b09-8d65c240a38c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2562b04a95884604ee2e53d512f0249d3f7a0fb9ef50a02898dbc8c1e64f22c\"" Dec 13 14:06:30.911826 kubelet[2012]: E1213 14:06:30.910596 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:31.128500 kubelet[2012]: E1213 14:06:31.128399 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:31.136434 kubelet[2012]: I1213 14:06:31.136389 2012 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wqflf" podStartSLOduration=1.136353964 podStartE2EDuration="1.136353964s" podCreationTimestamp="2024-12-13 14:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:06:31.135940243 +0000 UTC m=+16.197054866" watchObservedRunningTime="2024-12-13 14:06:31.136353964 +0000 UTC m=+16.197468587" Dec 13 14:06:34.923583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount597939570.mount: Deactivated successfully. 
Dec 13 14:06:37.204846 env[1212]: time="2024-12-13T14:06:37.204801549Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:06:37.206200 env[1212]: time="2024-12-13T14:06:37.206147470Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:06:37.207635 env[1212]: time="2024-12-13T14:06:37.207601871Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:06:37.208252 env[1212]: time="2024-12-13T14:06:37.208222472Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 13 14:06:37.210376 env[1212]: time="2024-12-13T14:06:37.210341113Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:06:37.211723 env[1212]: time="2024-12-13T14:06:37.211668634Z" level=info msg="CreateContainer within sandbox \"0e9912eda912f39e828cf5798921e211176a243f671969f816a68e7328851a6a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:06:37.226986 env[1212]: time="2024-12-13T14:06:37.226941964Z" level=info msg="CreateContainer within sandbox \"0e9912eda912f39e828cf5798921e211176a243f671969f816a68e7328851a6a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7e79e8d38ac43556db215c6cfe915ca9be0110f35953c25a1a75dc1f740aca23\"" Dec 13 14:06:37.227627 env[1212]: time="2024-12-13T14:06:37.227591884Z" level=info msg="StartContainer for \"7e79e8d38ac43556db215c6cfe915ca9be0110f35953c25a1a75dc1f740aca23\"" Dec 13 14:06:37.247777 systemd[1]: Started cri-containerd-7e79e8d38ac43556db215c6cfe915ca9be0110f35953c25a1a75dc1f740aca23.scope. Dec 13 14:06:37.320537 systemd[1]: cri-containerd-7e79e8d38ac43556db215c6cfe915ca9be0110f35953c25a1a75dc1f740aca23.scope: Deactivated successfully. 
Dec 13 14:06:37.321245 env[1212]: time="2024-12-13T14:06:37.321095665Z" level=info msg="StartContainer for \"7e79e8d38ac43556db215c6cfe915ca9be0110f35953c25a1a75dc1f740aca23\" returns successfully" Dec 13 14:06:37.367714 env[1212]: time="2024-12-13T14:06:37.367666456Z" level=info msg="shim disconnected" id=7e79e8d38ac43556db215c6cfe915ca9be0110f35953c25a1a75dc1f740aca23 Dec 13 14:06:37.367952 env[1212]: time="2024-12-13T14:06:37.367932456Z" level=warning msg="cleaning up after shim disconnected" id=7e79e8d38ac43556db215c6cfe915ca9be0110f35953c25a1a75dc1f740aca23 namespace=k8s.io Dec 13 14:06:37.368021 env[1212]: time="2024-12-13T14:06:37.368007096Z" level=info msg="cleaning up dead shim" Dec 13 14:06:37.375260 env[1212]: time="2024-12-13T14:06:37.375224821Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:06:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2437 runtime=io.containerd.runc.v2\n" Dec 13 14:06:38.153259 kubelet[2012]: E1213 14:06:38.152641 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:38.154520 env[1212]: time="2024-12-13T14:06:38.154482124Z" level=info msg="CreateContainer within sandbox \"0e9912eda912f39e828cf5798921e211176a243f671969f816a68e7328851a6a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:06:38.186967 env[1212]: time="2024-12-13T14:06:38.186913503Z" level=info msg="CreateContainer within sandbox \"0e9912eda912f39e828cf5798921e211176a243f671969f816a68e7328851a6a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"646fb4c9e9be00f6b7c0e1048b77f696f5dd471c5ee5e0f26566e95d5cd3127d\"" Dec 13 14:06:38.187483 env[1212]: time="2024-12-13T14:06:38.187454264Z" level=info msg="StartContainer for \"646fb4c9e9be00f6b7c0e1048b77f696f5dd471c5ee5e0f26566e95d5cd3127d\"" Dec 13 14:06:38.202115 systemd[1]: Started cri-containerd-646fb4c9e9be00f6b7c0e1048b77f696f5dd471c5ee5e0f26566e95d5cd3127d.scope. Dec 13 14:06:38.225519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e79e8d38ac43556db215c6cfe915ca9be0110f35953c25a1a75dc1f740aca23-rootfs.mount: Deactivated successfully. Dec 13 14:06:38.236576 env[1212]: time="2024-12-13T14:06:38.236515374Z" level=info msg="StartContainer for \"646fb4c9e9be00f6b7c0e1048b77f696f5dd471c5ee5e0f26566e95d5cd3127d\" returns successfully" Dec 13 14:06:38.249378 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:06:38.249595 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:06:38.250151 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:06:38.251618 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:06:38.253525 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:06:38.256413 systemd[1]: cri-containerd-646fb4c9e9be00f6b7c0e1048b77f696f5dd471c5ee5e0f26566e95d5cd3127d.scope: Deactivated successfully. Dec 13 14:06:38.261092 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:06:38.271306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-646fb4c9e9be00f6b7c0e1048b77f696f5dd471c5ee5e0f26566e95d5cd3127d-rootfs.mount: Deactivated successfully. 
Dec 13 14:06:38.275060 env[1212]: time="2024-12-13T14:06:38.275017637Z" level=info msg="shim disconnected" id=646fb4c9e9be00f6b7c0e1048b77f696f5dd471c5ee5e0f26566e95d5cd3127d Dec 13 14:06:38.275060 env[1212]: time="2024-12-13T14:06:38.275059517Z" level=warning msg="cleaning up after shim disconnected" id=646fb4c9e9be00f6b7c0e1048b77f696f5dd471c5ee5e0f26566e95d5cd3127d namespace=k8s.io Dec 13 14:06:38.275256 env[1212]: time="2024-12-13T14:06:38.275068917Z" level=info msg="cleaning up dead shim" Dec 13 14:06:38.281550 env[1212]: time="2024-12-13T14:06:38.281512001Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:06:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2501 runtime=io.containerd.runc.v2\n" Dec 13 14:06:39.153542 kubelet[2012]: E1213 14:06:39.153499 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:39.155631 env[1212]: time="2024-12-13T14:06:39.155590651Z" level=info msg="CreateContainer within sandbox \"0e9912eda912f39e828cf5798921e211176a243f671969f816a68e7328851a6a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:06:39.168195 env[1212]: time="2024-12-13T14:06:39.168143698Z" level=info msg="CreateContainer within sandbox \"0e9912eda912f39e828cf5798921e211176a243f671969f816a68e7328851a6a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"88e467a57f58c335a0ea07c148a93664a6bfda6ecd39687fc9e1b915f96acd9b\"" Dec 13 14:06:39.169372 env[1212]: time="2024-12-13T14:06:39.169340019Z" level=info msg="StartContainer for \"88e467a57f58c335a0ea07c148a93664a6bfda6ecd39687fc9e1b915f96acd9b\"" Dec 13 14:06:39.186570 systemd[1]: Started cri-containerd-88e467a57f58c335a0ea07c148a93664a6bfda6ecd39687fc9e1b915f96acd9b.scope. Dec 13 14:06:39.225289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2052563390.mount: Deactivated successfully. Dec 13 14:06:39.233288 env[1212]: time="2024-12-13T14:06:39.231764854Z" level=info msg="StartContainer for \"88e467a57f58c335a0ea07c148a93664a6bfda6ecd39687fc9e1b915f96acd9b\" returns successfully" Dec 13 14:06:39.244385 systemd[1]: cri-containerd-88e467a57f58c335a0ea07c148a93664a6bfda6ecd39687fc9e1b915f96acd9b.scope: Deactivated successfully. Dec 13 14:06:39.260805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88e467a57f58c335a0ea07c148a93664a6bfda6ecd39687fc9e1b915f96acd9b-rootfs.mount: Deactivated successfully. 
Dec 13 14:06:39.265776 env[1212]: time="2024-12-13T14:06:39.265734754Z" level=info msg="shim disconnected" id=88e467a57f58c335a0ea07c148a93664a6bfda6ecd39687fc9e1b915f96acd9b Dec 13 14:06:39.266240 env[1212]: time="2024-12-13T14:06:39.266216114Z" level=warning msg="cleaning up after shim disconnected" id=88e467a57f58c335a0ea07c148a93664a6bfda6ecd39687fc9e1b915f96acd9b namespace=k8s.io Dec 13 14:06:39.266337 env[1212]: time="2024-12-13T14:06:39.266320354Z" level=info msg="cleaning up dead shim" Dec 13 14:06:39.272832 env[1212]: time="2024-12-13T14:06:39.272801718Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:06:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2558 runtime=io.containerd.runc.v2\n" Dec 13 14:06:40.158209 kubelet[2012]: E1213 14:06:40.158037 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:40.162381 env[1212]: time="2024-12-13T14:06:40.162312543Z" level=info msg="CreateContainer within sandbox \"0e9912eda912f39e828cf5798921e211176a243f671969f816a68e7328851a6a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:06:40.178432 env[1212]: time="2024-12-13T14:06:40.178385992Z" level=info msg="CreateContainer within sandbox \"0e9912eda912f39e828cf5798921e211176a243f671969f816a68e7328851a6a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"86ed22e3e391159e14ed7015a62b3270f4243cc44fb46dd9e520340c4d20259c\"" Dec 13 14:06:40.178974 env[1212]: time="2024-12-13T14:06:40.178939432Z" level=info msg="StartContainer for \"86ed22e3e391159e14ed7015a62b3270f4243cc44fb46dd9e520340c4d20259c\"" Dec 13 14:06:40.198222 systemd[1]: Started cri-containerd-86ed22e3e391159e14ed7015a62b3270f4243cc44fb46dd9e520340c4d20259c.scope. Dec 13 14:06:40.263949 systemd[1]: cri-containerd-86ed22e3e391159e14ed7015a62b3270f4243cc44fb46dd9e520340c4d20259c.scope: Deactivated successfully. Dec 13 14:06:40.264715 env[1212]: time="2024-12-13T14:06:40.264353438Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bb3f3b6_584e_4df1_9d4c_aa3f843d9b94.slice/cri-containerd-86ed22e3e391159e14ed7015a62b3270f4243cc44fb46dd9e520340c4d20259c.scope/memory.events\": no such file or directory" Dec 13 14:06:40.270353 env[1212]: time="2024-12-13T14:06:40.270232801Z" level=info msg="StartContainer for \"86ed22e3e391159e14ed7015a62b3270f4243cc44fb46dd9e520340c4d20259c\" returns successfully" Dec 13 14:06:40.287020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86ed22e3e391159e14ed7015a62b3270f4243cc44fb46dd9e520340c4d20259c-rootfs.mount: Deactivated successfully. 
Dec 13 14:06:40.306015 env[1212]: time="2024-12-13T14:06:40.305961300Z" level=info msg="shim disconnected" id=86ed22e3e391159e14ed7015a62b3270f4243cc44fb46dd9e520340c4d20259c Dec 13 14:06:40.306015 env[1212]: time="2024-12-13T14:06:40.306008940Z" level=warning msg="cleaning up after shim disconnected" id=86ed22e3e391159e14ed7015a62b3270f4243cc44fb46dd9e520340c4d20259c namespace=k8s.io Dec 13 14:06:40.306015 env[1212]: time="2024-12-13T14:06:40.306018860Z" level=info msg="cleaning up dead shim" Dec 13 14:06:40.314818 env[1212]: time="2024-12-13T14:06:40.314769705Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:06:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2612 runtime=io.containerd.runc.v2\n" Dec 13 14:06:40.348702 systemd[1]: Started sshd@5-10.0.0.68:22-10.0.0.1:40286.service. Dec 13 14:06:40.387785 sshd[2626]: Accepted publickey for core from 10.0.0.1 port 40286 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:06:40.389463 sshd[2626]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:06:40.393702 systemd[1]: Started session-6.scope. Dec 13 14:06:40.394296 systemd-logind[1199]: New session 6 of user core. Dec 13 14:06:40.515980 sshd[2626]: pam_unix(sshd:session): session closed for user core Dec 13 14:06:40.518504 systemd[1]: sshd@5-10.0.0.68:22-10.0.0.1:40286.service: Deactivated successfully. Dec 13 14:06:40.519228 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:06:40.520418 systemd-logind[1199]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:06:40.521157 systemd-logind[1199]: Removed session 6. Dec 13 14:06:40.525958 env[1212]: time="2024-12-13T14:06:40.525916499Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:06:40.527403 env[1212]: time="2024-12-13T14:06:40.527359739Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:06:40.528660 env[1212]: time="2024-12-13T14:06:40.528628700Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:06:40.529113 env[1212]: time="2024-12-13T14:06:40.529086340Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 14:06:40.530833 env[1212]: time="2024-12-13T14:06:40.530574661Z" level=info msg="CreateContainer within sandbox \"c2562b04a95884604ee2e53d512f0249d3f7a0fb9ef50a02898dbc8c1e64f22c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:06:40.540196 env[1212]: time="2024-12-13T14:06:40.540151186Z" level=info msg="CreateContainer within sandbox \"c2562b04a95884604ee2e53d512f0249d3f7a0fb9ef50a02898dbc8c1e64f22c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1\"" Dec 13 14:06:40.540628 env[1212]: time="2024-12-13T14:06:40.540585507Z" level=info msg="StartContainer 
for \"3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1\"" Dec 13 14:06:40.557718 systemd[1]: Started cri-containerd-3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1.scope. Dec 13 14:06:40.632001 env[1212]: time="2024-12-13T14:06:40.631913436Z" level=info msg="StartContainer for \"3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1\" returns successfully" Dec 13 14:06:41.164913 kubelet[2012]: E1213 14:06:41.164880 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:41.172268 kubelet[2012]: E1213 14:06:41.172056 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:41.173205 env[1212]: time="2024-12-13T14:06:41.172462161Z" level=info msg="CreateContainer within sandbox \"0e9912eda912f39e828cf5798921e211176a243f671969f816a68e7328851a6a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:06:41.186318 env[1212]: time="2024-12-13T14:06:41.186261568Z" level=info msg="CreateContainer within sandbox \"0e9912eda912f39e828cf5798921e211176a243f671969f816a68e7328851a6a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2\"" Dec 13 14:06:41.186790 env[1212]: time="2024-12-13T14:06:41.186762208Z" level=info msg="StartContainer for \"3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2\"" Dec 13 14:06:41.210042 systemd[1]: Started cri-containerd-3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2.scope. Dec 13 14:06:41.275127 env[1212]: time="2024-12-13T14:06:41.275068933Z" level=info msg="StartContainer for \"3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2\" returns successfully" Dec 13 14:06:41.448538 kubelet[2012]: I1213 14:06:41.447713 2012 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:06:41.465939 kubelet[2012]: I1213 14:06:41.465862 2012 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-fp4hd" podStartSLOduration=1.849612752 podStartE2EDuration="11.465818709s" podCreationTimestamp="2024-12-13 14:06:30 +0000 UTC" firstStartedPulling="2024-12-13 14:06:30.913084543 +0000 UTC m=+15.974199206" lastFinishedPulling="2024-12-13 14:06:40.52929054 +0000 UTC m=+25.590405163" observedRunningTime="2024-12-13 14:06:41.206980698 +0000 UTC m=+26.268095281" watchObservedRunningTime="2024-12-13 14:06:41.465818709 +0000 UTC m=+26.526933332" Dec 13 14:06:41.467123 kubelet[2012]: I1213 14:06:41.467065 2012 topology_manager.go:215] "Topology Admit Handler" podUID="4cfc6ee7-46c5-42cd-9b5f-ce510a89d404" podNamespace="kube-system" podName="coredns-76f75df574-9gcfr" Dec 13 14:06:41.467282 kubelet[2012]: I1213 14:06:41.467268 2012 topology_manager.go:215] "Topology Admit Handler" podUID="fa30363a-a245-4ec2-b56f-1e2e8941a4c0" podNamespace="kube-system" podName="coredns-76f75df574-x4v84" Dec 13 14:06:41.476662 systemd[1]: Created slice kubepods-burstable-podfa30363a_a245_4ec2_b56f_1e2e8941a4c0.slice. Dec 13 14:06:41.482543 systemd[1]: Created slice kubepods-burstable-pod4cfc6ee7_46c5_42cd_9b5f_ce510a89d404.slice. 
Dec 13 14:06:41.539767 kubelet[2012]: I1213 14:06:41.539634 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa30363a-a245-4ec2-b56f-1e2e8941a4c0-config-volume\") pod \"coredns-76f75df574-x4v84\" (UID: \"fa30363a-a245-4ec2-b56f-1e2e8941a4c0\") " pod="kube-system/coredns-76f75df574-x4v84" Dec 13 14:06:41.539767 kubelet[2012]: I1213 14:06:41.539679 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cfc6ee7-46c5-42cd-9b5f-ce510a89d404-config-volume\") pod \"coredns-76f75df574-9gcfr\" (UID: \"4cfc6ee7-46c5-42cd-9b5f-ce510a89d404\") " pod="kube-system/coredns-76f75df574-9gcfr" Dec 13 14:06:41.539767 kubelet[2012]: I1213 14:06:41.539705 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz8fp\" (UniqueName: \"kubernetes.io/projected/fa30363a-a245-4ec2-b56f-1e2e8941a4c0-kube-api-access-cz8fp\") pod \"coredns-76f75df574-x4v84\" (UID: \"fa30363a-a245-4ec2-b56f-1e2e8941a4c0\") " pod="kube-system/coredns-76f75df574-x4v84" Dec 13 14:06:41.539767 kubelet[2012]: I1213 14:06:41.539743 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvwfq\" (UniqueName: \"kubernetes.io/projected/4cfc6ee7-46c5-42cd-9b5f-ce510a89d404-kube-api-access-tvwfq\") pod \"coredns-76f75df574-9gcfr\" (UID: \"4cfc6ee7-46c5-42cd-9b5f-ce510a89d404\") " pod="kube-system/coredns-76f75df574-9gcfr" Dec 13 14:06:41.619220 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Dec 13 14:06:41.781843 kubelet[2012]: E1213 14:06:41.781732 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:41.782447 env[1212]: time="2024-12-13T14:06:41.782397909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-x4v84,Uid:fa30363a-a245-4ec2-b56f-1e2e8941a4c0,Namespace:kube-system,Attempt:0,}" Dec 13 14:06:41.785441 kubelet[2012]: E1213 14:06:41.785405 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:41.785881 env[1212]: time="2024-12-13T14:06:41.785837910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9gcfr,Uid:4cfc6ee7-46c5-42cd-9b5f-ce510a89d404,Namespace:kube-system,Attempt:0,}" Dec 13 14:06:41.885225 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Dec 13 14:06:42.176704 kubelet[2012]: E1213 14:06:42.176659 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:42.177661 kubelet[2012]: E1213 14:06:42.177629 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:42.191041 kubelet[2012]: I1213 14:06:42.190745 2012 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-kfcrr" podStartSLOduration=5.649590769 podStartE2EDuration="12.190709629s" podCreationTimestamp="2024-12-13 14:06:30 +0000 UTC" firstStartedPulling="2024-12-13 14:06:30.668000452 +0000 UTC m=+15.729115075" lastFinishedPulling="2024-12-13 14:06:37.209119312 +0000 UTC m=+22.270233935" observedRunningTime="2024-12-13 14:06:42.190277828 +0000 UTC m=+27.251392491" watchObservedRunningTime="2024-12-13 14:06:42.190709629 +0000 UTC m=+27.251824252" Dec 13 14:06:43.179727 kubelet[2012]: E1213 14:06:43.179638 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:44.180825 kubelet[2012]: E1213 14:06:44.180794 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:44.304770 systemd-networkd[1032]: cilium_host: Link UP Dec 13 14:06:44.306898 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:06:44.306950 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:06:44.305148 systemd-networkd[1032]: cilium_net: Link UP Dec 13 14:06:44.306072 systemd-networkd[1032]: cilium_net: Gained carrier Dec 13 14:06:44.306799 systemd-networkd[1032]: cilium_host: Gained carrier Dec 13 14:06:44.389241 systemd-networkd[1032]: cilium_vxlan: Link UP Dec 13 14:06:44.389247 systemd-networkd[1032]: cilium_vxlan: Gained carrier Dec 13 14:06:44.722211 kernel: NET: Registered PF_ALG protocol family Dec 13 14:06:44.996270 systemd-networkd[1032]: cilium_net: Gained IPv6LL Dec 13 14:06:44.996509 systemd-networkd[1032]: cilium_host: Gained IPv6LL Dec 13 14:06:45.282195 systemd-networkd[1032]: lxc_health: Link UP Dec 13 14:06:45.289710 systemd-networkd[1032]: lxc_health: Gained carrier Dec 13 14:06:45.290196 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:06:45.526335 systemd[1]: Started sshd@6-10.0.0.68:22-10.0.0.1:51488.service. Dec 13 14:06:45.567374 sshd[3207]: Accepted publickey for core from 10.0.0.1 port 51488 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:06:45.569103 sshd[3207]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:06:45.573479 systemd[1]: Started session-7.scope. Dec 13 14:06:45.573628 systemd-logind[1199]: New session 7 of user core. Dec 13 14:06:45.690695 sshd[3207]: pam_unix(sshd:session): session closed for user core Dec 13 14:06:45.692996 systemd[1]: sshd@6-10.0.0.68:22-10.0.0.1:51488.service: Deactivated successfully. Dec 13 14:06:45.693816 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:06:45.694349 systemd-logind[1199]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:06:45.694983 systemd-logind[1199]: Removed session 7. 
Dec 13 14:06:45.858554 systemd-networkd[1032]: lxcc90971f14f88: Link UP Dec 13 14:06:45.865198 kernel: eth0: renamed from tmpc3d1a Dec 13 14:06:45.869796 systemd-networkd[1032]: lxcae2baf5342c3: Link UP Dec 13 14:06:45.882772 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:06:45.882856 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc90971f14f88: link becomes ready Dec 13 14:06:45.882874 systemd-networkd[1032]: lxcc90971f14f88: Gained carrier Dec 13 14:06:45.883480 kernel: eth0: renamed from tmpd22c8 Dec 13 14:06:45.891213 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcae2baf5342c3: link becomes ready Dec 13 14:06:45.894963 systemd-networkd[1032]: lxcae2baf5342c3: Gained carrier Dec 13 14:06:46.153354 systemd-networkd[1032]: cilium_vxlan: Gained IPv6LL Dec 13 14:06:46.594046 kubelet[2012]: E1213 14:06:46.593969 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:47.044292 systemd-networkd[1032]: lxcc90971f14f88: Gained IPv6LL Dec 13 14:06:47.107278 systemd-networkd[1032]: lxc_health: Gained IPv6LL Dec 13 14:06:47.747289 systemd-networkd[1032]: lxcae2baf5342c3: Gained IPv6LL Dec 13 14:06:49.372026 env[1212]: time="2024-12-13T14:06:49.371958722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:06:49.372401 env[1212]: time="2024-12-13T14:06:49.372035842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:06:49.372401 env[1212]: time="2024-12-13T14:06:49.372061802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:06:49.372674 env[1212]: time="2024-12-13T14:06:49.372637122Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c3d1ad8179664ff00bc7a8bb7aaf722f87e3b0bf07333f7c40e817e90ccb573a pid=3268 runtime=io.containerd.runc.v2 Dec 13 14:06:49.373070 env[1212]: time="2024-12-13T14:06:49.373005363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:06:49.373070 env[1212]: time="2024-12-13T14:06:49.373039323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:06:49.373070 env[1212]: time="2024-12-13T14:06:49.373049483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:06:49.373243 env[1212]: time="2024-12-13T14:06:49.373205683Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d22c81f833cbc275a2ac12f5cb053010c3e297c2ff03bc7a3d6deb4ed77c5b92 pid=3277 runtime=io.containerd.runc.v2 Dec 13 14:06:49.393310 systemd[1]: run-containerd-runc-k8s.io-c3d1ad8179664ff00bc7a8bb7aaf722f87e3b0bf07333f7c40e817e90ccb573a-runc.AshHB6.mount: Deactivated successfully. Dec 13 14:06:49.396609 systemd[1]: Started cri-containerd-c3d1ad8179664ff00bc7a8bb7aaf722f87e3b0bf07333f7c40e817e90ccb573a.scope. Dec 13 14:06:49.401489 systemd[1]: Started cri-containerd-d22c81f833cbc275a2ac12f5cb053010c3e297c2ff03bc7a3d6deb4ed77c5b92.scope. 
Dec 13 14:06:49.439631 systemd-resolved[1150]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:06:49.444740 systemd-resolved[1150]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:06:49.456326 env[1212]: time="2024-12-13T14:06:49.456282428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9gcfr,Uid:4cfc6ee7-46c5-42cd-9b5f-ce510a89d404,Namespace:kube-system,Attempt:0,} returns sandbox id \"d22c81f833cbc275a2ac12f5cb053010c3e297c2ff03bc7a3d6deb4ed77c5b92\"" Dec 13 14:06:49.459163 kubelet[2012]: E1213 14:06:49.458070 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:49.463991 env[1212]: time="2024-12-13T14:06:49.463902630Z" level=info msg="CreateContainer within sandbox \"d22c81f833cbc275a2ac12f5cb053010c3e297c2ff03bc7a3d6deb4ed77c5b92\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:06:49.465785 env[1212]: time="2024-12-13T14:06:49.465752870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-x4v84,Uid:fa30363a-a245-4ec2-b56f-1e2e8941a4c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3d1ad8179664ff00bc7a8bb7aaf722f87e3b0bf07333f7c40e817e90ccb573a\"" Dec 13 14:06:49.467319 kubelet[2012]: E1213 14:06:49.466353 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:49.469094 env[1212]: time="2024-12-13T14:06:49.469061671Z" level=info msg="CreateContainer within sandbox \"c3d1ad8179664ff00bc7a8bb7aaf722f87e3b0bf07333f7c40e817e90ccb573a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:06:49.486339 env[1212]: time="2024-12-13T14:06:49.486289117Z" level=info msg="CreateContainer within sandbox \"d22c81f833cbc275a2ac12f5cb053010c3e297c2ff03bc7a3d6deb4ed77c5b92\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3927353e758da050201e1cb90e285ec396242a729a6d82d2b529e32bceff9b83\"" Dec 13 14:06:49.486955 env[1212]: time="2024-12-13T14:06:49.486926037Z" level=info msg="StartContainer for \"3927353e758da050201e1cb90e285ec396242a729a6d82d2b529e32bceff9b83\"" Dec 13 14:06:49.488144 env[1212]: time="2024-12-13T14:06:49.488103077Z" level=info msg="CreateContainer within sandbox \"c3d1ad8179664ff00bc7a8bb7aaf722f87e3b0bf07333f7c40e817e90ccb573a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bfdce7397e28b48382c238f5f36f6a2457ffbc011dc69b40518c60dd64208221\"" Dec 13 14:06:49.488589 env[1212]: time="2024-12-13T14:06:49.488548117Z" level=info msg="StartContainer for \"bfdce7397e28b48382c238f5f36f6a2457ffbc011dc69b40518c60dd64208221\"" Dec 13 14:06:49.507817 systemd[1]: Started cri-containerd-3927353e758da050201e1cb90e285ec396242a729a6d82d2b529e32bceff9b83.scope. Dec 13 14:06:49.509938 systemd[1]: Started cri-containerd-bfdce7397e28b48382c238f5f36f6a2457ffbc011dc69b40518c60dd64208221.scope. 
Dec 13 14:06:49.544457 env[1212]: time="2024-12-13T14:06:49.544387654Z" level=info msg="StartContainer for \"bfdce7397e28b48382c238f5f36f6a2457ffbc011dc69b40518c60dd64208221\" returns successfully" Dec 13 14:06:49.552311 env[1212]: time="2024-12-13T14:06:49.552265017Z" level=info msg="StartContainer for \"3927353e758da050201e1cb90e285ec396242a729a6d82d2b529e32bceff9b83\" returns successfully" Dec 13 14:06:50.192140 kubelet[2012]: E1213 14:06:50.192099 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:50.197344 kubelet[2012]: E1213 14:06:50.197309 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:50.203690 kubelet[2012]: I1213 14:06:50.203647 2012 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-9gcfr" podStartSLOduration=20.203612089 podStartE2EDuration="20.203612089s" podCreationTimestamp="2024-12-13 14:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:06:50.202323129 +0000 UTC m=+35.263437752" watchObservedRunningTime="2024-12-13 14:06:50.203612089 +0000 UTC m=+35.264726712" Dec 13 14:06:50.222356 kubelet[2012]: I1213 14:06:50.222305 2012 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-x4v84" podStartSLOduration=20.222256774 podStartE2EDuration="20.222256774s" podCreationTimestamp="2024-12-13 14:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:06:50.212937572 +0000 UTC m=+35.274052195" watchObservedRunningTime="2024-12-13 14:06:50.222256774 +0000 UTC m=+35.283371397" Dec 13 14:06:50.378509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1713878016.mount: Deactivated successfully. Dec 13 14:06:50.694206 systemd[1]: Started sshd@7-10.0.0.68:22-10.0.0.1:51504.service. Dec 13 14:06:50.732413 sshd[3426]: Accepted publickey for core from 10.0.0.1 port 51504 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:06:50.733944 sshd[3426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:06:50.737572 systemd-logind[1199]: New session 8 of user core. Dec 13 14:06:50.738129 systemd[1]: Started session-8.scope. Dec 13 14:06:50.848587 sshd[3426]: pam_unix(sshd:session): session closed for user core Dec 13 14:06:50.850997 systemd[1]: sshd@7-10.0.0.68:22-10.0.0.1:51504.service: Deactivated successfully. Dec 13 14:06:50.851819 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:06:50.852303 systemd-logind[1199]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:06:50.852940 systemd-logind[1199]: Removed session 8. 
Dec 13 14:06:51.200181 kubelet[2012]: E1213 14:06:51.199711 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:51.201164 kubelet[2012]: E1213 14:06:51.200113 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:52.200544 kubelet[2012]: E1213 14:06:52.200516 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:52.200544 kubelet[2012]: E1213 14:06:52.200539 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:55.852635 systemd[1]: Started sshd@8-10.0.0.68:22-10.0.0.1:46234.service. Dec 13 14:06:55.888876 sshd[3442]: Accepted publickey for core from 10.0.0.1 port 46234 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:06:55.890364 sshd[3442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:06:55.893655 systemd-logind[1199]: New session 9 of user core. Dec 13 14:06:55.894503 systemd[1]: Started session-9.scope. Dec 13 14:06:56.005458 sshd[3442]: pam_unix(sshd:session): session closed for user core Dec 13 14:06:56.008481 systemd[1]: Started sshd@9-10.0.0.68:22-10.0.0.1:46244.service. Dec 13 14:06:56.008992 systemd[1]: sshd@8-10.0.0.68:22-10.0.0.1:46234.service: Deactivated successfully. Dec 13 14:06:56.009780 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:06:56.010392 systemd-logind[1199]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:06:56.011023 systemd-logind[1199]: Removed session 9. Dec 13 14:06:56.046392 sshd[3456]: Accepted publickey for core from 10.0.0.1 port 46244 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:06:56.047576 sshd[3456]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:06:56.050831 systemd-logind[1199]: New session 10 of user core. Dec 13 14:06:56.051719 systemd[1]: Started session-10.scope. Dec 13 14:06:56.204949 sshd[3456]: pam_unix(sshd:session): session closed for user core Dec 13 14:06:56.210619 systemd[1]: Started sshd@10-10.0.0.68:22-10.0.0.1:46250.service. Dec 13 14:06:56.216553 systemd-logind[1199]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:06:56.219803 systemd[1]: sshd@9-10.0.0.68:22-10.0.0.1:46244.service: Deactivated successfully. Dec 13 14:06:56.220545 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:06:56.223544 systemd-logind[1199]: Removed session 10. Dec 13 14:06:56.253662 sshd[3468]: Accepted publickey for core from 10.0.0.1 port 46250 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:06:56.254814 sshd[3468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:06:56.258116 systemd-logind[1199]: New session 11 of user core. Dec 13 14:06:56.258952 systemd[1]: Started session-11.scope. Dec 13 14:06:56.375347 sshd[3468]: pam_unix(sshd:session): session closed for user core Dec 13 14:06:56.377724 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 14:06:56.378283 systemd-logind[1199]: Session 11 logged out. Waiting for processes to exit. 
Dec 13 14:06:56.378426 systemd[1]: sshd@10-10.0.0.68:22-10.0.0.1:46250.service: Deactivated successfully. Dec 13 14:06:56.379349 systemd-logind[1199]: Removed session 11. Dec 13 14:06:57.114394 kubelet[2012]: I1213 14:06:57.114357 2012 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:06:57.115240 kubelet[2012]: E1213 14:06:57.115218 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:06:57.208845 kubelet[2012]: E1213 14:06:57.208806 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:01.379685 systemd[1]: Started sshd@11-10.0.0.68:22-10.0.0.1:46258.service. Dec 13 14:07:01.415579 sshd[3484]: Accepted publickey for core from 10.0.0.1 port 46258 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:07:01.417023 sshd[3484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:01.420207 systemd-logind[1199]: New session 12 of user core. Dec 13 14:07:01.421093 systemd[1]: Started session-12.scope. Dec 13 14:07:01.526610 sshd[3484]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:01.528893 systemd[1]: sshd@11-10.0.0.68:22-10.0.0.1:46258.service: Deactivated successfully. Dec 13 14:07:01.529631 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 14:07:01.530095 systemd-logind[1199]: Session 12 logged out. Waiting for processes to exit. Dec 13 14:07:01.530868 systemd-logind[1199]: Removed session 12. Dec 13 14:07:06.531008 systemd[1]: Started sshd@12-10.0.0.68:22-10.0.0.1:43316.service. Dec 13 14:07:06.566976 sshd[3498]: Accepted publickey for core from 10.0.0.1 port 43316 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:07:06.568399 sshd[3498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:06.572350 systemd[1]: Started session-13.scope. Dec 13 14:07:06.572652 systemd-logind[1199]: New session 13 of user core. Dec 13 14:07:06.676983 sshd[3498]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:06.679862 systemd[1]: sshd@12-10.0.0.68:22-10.0.0.1:43316.service: Deactivated successfully. Dec 13 14:07:06.680482 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 14:07:06.680986 systemd-logind[1199]: Session 13 logged out. Waiting for processes to exit. Dec 13 14:07:06.682044 systemd[1]: Started sshd@13-10.0.0.68:22-10.0.0.1:43320.service. Dec 13 14:07:06.682883 systemd-logind[1199]: Removed session 13. Dec 13 14:07:06.718376 sshd[3511]: Accepted publickey for core from 10.0.0.1 port 43320 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:07:06.719522 sshd[3511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:06.722670 systemd-logind[1199]: New session 14 of user core. Dec 13 14:07:06.723464 systemd[1]: Started session-14.scope. Dec 13 14:07:06.899553 sshd[3511]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:06.903358 systemd[1]: Started sshd@14-10.0.0.68:22-10.0.0.1:43336.service. Dec 13 14:07:06.903836 systemd[1]: sshd@13-10.0.0.68:22-10.0.0.1:43320.service: Deactivated successfully. Dec 13 14:07:06.904564 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 14:07:06.905183 systemd-logind[1199]: Session 14 logged out. 
Waiting for processes to exit. Dec 13 14:07:06.905941 systemd-logind[1199]: Removed session 14. Dec 13 14:07:06.941251 sshd[3521]: Accepted publickey for core from 10.0.0.1 port 43336 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:07:06.942700 sshd[3521]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:06.945912 systemd-logind[1199]: New session 15 of user core. Dec 13 14:07:06.946861 systemd[1]: Started session-15.scope. Dec 13 14:07:08.164529 sshd[3521]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:08.168406 systemd[1]: Started sshd@15-10.0.0.68:22-10.0.0.1:43338.service. Dec 13 14:07:08.168920 systemd[1]: sshd@14-10.0.0.68:22-10.0.0.1:43336.service: Deactivated successfully. Dec 13 14:07:08.169768 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:07:08.170433 systemd-logind[1199]: Session 15 logged out. Waiting for processes to exit. Dec 13 14:07:08.171927 systemd-logind[1199]: Removed session 15. Dec 13 14:07:08.210052 sshd[3539]: Accepted publickey for core from 10.0.0.1 port 43338 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:07:08.211352 sshd[3539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:08.214690 systemd-logind[1199]: New session 16 of user core. Dec 13 14:07:08.215581 systemd[1]: Started session-16.scope. Dec 13 14:07:08.432839 sshd[3539]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:08.435862 systemd[1]: Started sshd@16-10.0.0.68:22-10.0.0.1:43344.service. Dec 13 14:07:08.439692 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 14:07:08.440403 systemd[1]: sshd@15-10.0.0.68:22-10.0.0.1:43338.service: Deactivated successfully. Dec 13 14:07:08.441277 systemd-logind[1199]: Session 16 logged out. Waiting for processes to exit. Dec 13 14:07:08.442019 systemd-logind[1199]: Removed session 16. Dec 13 14:07:08.473703 sshd[3553]: Accepted publickey for core from 10.0.0.1 port 43344 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:07:08.474970 sshd[3553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:08.478141 systemd-logind[1199]: New session 17 of user core. Dec 13 14:07:08.478966 systemd[1]: Started session-17.scope. Dec 13 14:07:08.589897 sshd[3553]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:08.593460 systemd[1]: sshd@16-10.0.0.68:22-10.0.0.1:43344.service: Deactivated successfully. Dec 13 14:07:08.594162 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 14:07:08.595454 systemd-logind[1199]: Session 17 logged out. Waiting for processes to exit. Dec 13 14:07:08.596153 systemd-logind[1199]: Removed session 17. Dec 13 14:07:13.592712 systemd[1]: Started sshd@17-10.0.0.68:22-10.0.0.1:34794.service. Dec 13 14:07:13.629382 sshd[3571]: Accepted publickey for core from 10.0.0.1 port 34794 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:07:13.630535 sshd[3571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:13.633960 systemd-logind[1199]: New session 18 of user core. Dec 13 14:07:13.634791 systemd[1]: Started session-18.scope. Dec 13 14:07:13.751112 sshd[3571]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:13.753864 systemd[1]: sshd@17-10.0.0.68:22-10.0.0.1:34794.service: Deactivated successfully. Dec 13 14:07:13.754592 systemd[1]: session-18.scope: Deactivated successfully. 
Dec 13 14:07:13.755084 systemd-logind[1199]: Session 18 logged out. Waiting for processes to exit. Dec 13 14:07:13.755717 systemd-logind[1199]: Removed session 18. Dec 13 14:07:18.756224 systemd[1]: Started sshd@18-10.0.0.68:22-10.0.0.1:34804.service. Dec 13 14:07:18.793787 sshd[3586]: Accepted publickey for core from 10.0.0.1 port 34804 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:07:18.795120 sshd[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:18.799036 systemd-logind[1199]: New session 19 of user core. Dec 13 14:07:18.799929 systemd[1]: Started session-19.scope. Dec 13 14:07:18.918560 sshd[3586]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:18.921253 systemd[1]: sshd@18-10.0.0.68:22-10.0.0.1:34804.service: Deactivated successfully. Dec 13 14:07:18.921961 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 14:07:18.922487 systemd-logind[1199]: Session 19 logged out. Waiting for processes to exit. Dec 13 14:07:18.923115 systemd-logind[1199]: Removed session 19. Dec 13 14:07:23.923478 systemd[1]: Started sshd@19-10.0.0.68:22-10.0.0.1:60892.service. Dec 13 14:07:23.959608 sshd[3600]: Accepted publickey for core from 10.0.0.1 port 60892 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:07:23.960882 sshd[3600]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:23.965241 systemd-logind[1199]: New session 20 of user core. Dec 13 14:07:23.965361 systemd[1]: Started session-20.scope. Dec 13 14:07:24.072086 sshd[3600]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:24.074468 systemd[1]: sshd@19-10.0.0.68:22-10.0.0.1:60892.service: Deactivated successfully. Dec 13 14:07:24.075150 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 14:07:24.075677 systemd-logind[1199]: Session 20 logged out. Waiting for processes to exit. Dec 13 14:07:24.076287 systemd-logind[1199]: Removed session 20. Dec 13 14:07:29.076710 systemd[1]: Started sshd@20-10.0.0.68:22-10.0.0.1:60896.service. Dec 13 14:07:29.114601 sshd[3613]: Accepted publickey for core from 10.0.0.1 port 60896 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:07:29.115995 sshd[3613]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:29.119304 systemd-logind[1199]: New session 21 of user core. Dec 13 14:07:29.120192 systemd[1]: Started session-21.scope. Dec 13 14:07:29.224336 sshd[3613]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:29.227301 systemd[1]: sshd@20-10.0.0.68:22-10.0.0.1:60896.service: Deactivated successfully. Dec 13 14:07:29.227912 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 14:07:29.228527 systemd-logind[1199]: Session 21 logged out. Waiting for processes to exit. Dec 13 14:07:29.229734 systemd[1]: Started sshd@21-10.0.0.68:22-10.0.0.1:60898.service. Dec 13 14:07:29.230493 systemd-logind[1199]: Removed session 21. Dec 13 14:07:29.265948 sshd[3626]: Accepted publickey for core from 10.0.0.1 port 60898 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:07:29.267061 sshd[3626]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:29.270405 systemd-logind[1199]: New session 22 of user core. Dec 13 14:07:29.271415 systemd[1]: Started session-22.scope. 
Dec 13 14:07:31.563268 env[1212]: time="2024-12-13T14:07:31.563160671Z" level=info msg="StopContainer for \"3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1\" with timeout 30 (s)" Dec 13 14:07:31.564854 env[1212]: time="2024-12-13T14:07:31.564820679Z" level=info msg="Stop container \"3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1\" with signal terminated" Dec 13 14:07:31.579970 systemd[1]: run-containerd-runc-k8s.io-3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2-runc.tiQ4uw.mount: Deactivated successfully. Dec 13 14:07:31.580635 systemd[1]: cri-containerd-3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1.scope: Deactivated successfully. Dec 13 14:07:31.604548 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1-rootfs.mount: Deactivated successfully. Dec 13 14:07:31.611778 env[1212]: time="2024-12-13T14:07:31.611732699Z" level=info msg="shim disconnected" id=3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1 Dec 13 14:07:31.612005 env[1212]: time="2024-12-13T14:07:31.611985900Z" level=warning msg="cleaning up after shim disconnected" id=3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1 namespace=k8s.io Dec 13 14:07:31.612079 env[1212]: time="2024-12-13T14:07:31.612066181Z" level=info msg="cleaning up dead shim" Dec 13 14:07:31.613181 env[1212]: time="2024-12-13T14:07:31.613133506Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:07:31.618357 env[1212]: time="2024-12-13T14:07:31.618318450Z" level=info msg="StopContainer for \"3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2\" with timeout 2 (s)" Dec 13 14:07:31.618611 env[1212]: time="2024-12-13T14:07:31.618584492Z" level=info msg="Stop container \"3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2\" with signal terminated" Dec 13 14:07:31.619960 env[1212]: time="2024-12-13T14:07:31.619929338Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:07:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3676 runtime=io.containerd.runc.v2\n" Dec 13 14:07:31.622289 env[1212]: time="2024-12-13T14:07:31.622252789Z" level=info msg="StopContainer for \"3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1\" returns successfully" Dec 13 14:07:31.623053 env[1212]: time="2024-12-13T14:07:31.623014352Z" level=info msg="StopPodSandbox for \"c2562b04a95884604ee2e53d512f0249d3f7a0fb9ef50a02898dbc8c1e64f22c\"" Dec 13 14:07:31.623127 env[1212]: time="2024-12-13T14:07:31.623082073Z" level=info msg="Container to stop \"3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:07:31.624774 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c2562b04a95884604ee2e53d512f0249d3f7a0fb9ef50a02898dbc8c1e64f22c-shm.mount: Deactivated successfully. Dec 13 14:07:31.627848 systemd-networkd[1032]: lxc_health: Link DOWN Dec 13 14:07:31.627859 systemd-networkd[1032]: lxc_health: Lost carrier Dec 13 14:07:31.632924 systemd[1]: cri-containerd-c2562b04a95884604ee2e53d512f0249d3f7a0fb9ef50a02898dbc8c1e64f22c.scope: Deactivated successfully. 
Dec 13 14:07:31.662459 env[1212]: time="2024-12-13T14:07:31.662159856Z" level=info msg="shim disconnected" id=c2562b04a95884604ee2e53d512f0249d3f7a0fb9ef50a02898dbc8c1e64f22c Dec 13 14:07:31.662459 env[1212]: time="2024-12-13T14:07:31.662462418Z" level=warning msg="cleaning up after shim disconnected" id=c2562b04a95884604ee2e53d512f0249d3f7a0fb9ef50a02898dbc8c1e64f22c namespace=k8s.io Dec 13 14:07:31.662665 env[1212]: time="2024-12-13T14:07:31.662473178Z" level=info msg="cleaning up dead shim" Dec 13 14:07:31.667570 systemd[1]: cri-containerd-3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2.scope: Deactivated successfully. Dec 13 14:07:31.667888 systemd[1]: cri-containerd-3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2.scope: Consumed 6.398s CPU time. Dec 13 14:07:31.670646 env[1212]: time="2024-12-13T14:07:31.670606176Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:07:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3717 runtime=io.containerd.runc.v2\n" Dec 13 14:07:31.670944 env[1212]: time="2024-12-13T14:07:31.670901657Z" level=info msg="TearDown network for sandbox \"c2562b04a95884604ee2e53d512f0249d3f7a0fb9ef50a02898dbc8c1e64f22c\" successfully" Dec 13 14:07:31.670944 env[1212]: time="2024-12-13T14:07:31.670932738Z" level=info msg="StopPodSandbox for \"c2562b04a95884604ee2e53d512f0249d3f7a0fb9ef50a02898dbc8c1e64f22c\" returns successfully" Dec 13 14:07:31.705360 env[1212]: time="2024-12-13T14:07:31.705306019Z" level=info msg="shim disconnected" id=3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2 Dec 13 14:07:31.705654 env[1212]: time="2024-12-13T14:07:31.705633181Z" level=warning msg="cleaning up after shim disconnected" id=3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2 namespace=k8s.io Dec 13 14:07:31.705723 env[1212]: time="2024-12-13T14:07:31.705710901Z" level=info msg="cleaning up dead shim" Dec 13 14:07:31.706553 kubelet[2012]: I1213 14:07:31.706519 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sspnk\" (UniqueName: \"kubernetes.io/projected/ba562170-4539-4094-8b09-8d65c240a38c-kube-api-access-sspnk\") pod \"ba562170-4539-4094-8b09-8d65c240a38c\" (UID: \"ba562170-4539-4094-8b09-8d65c240a38c\") " Dec 13 14:07:31.706822 kubelet[2012]: I1213 14:07:31.706565 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ba562170-4539-4094-8b09-8d65c240a38c-cilium-config-path\") pod \"ba562170-4539-4094-8b09-8d65c240a38c\" (UID: \"ba562170-4539-4094-8b09-8d65c240a38c\") " Dec 13 14:07:31.710030 kubelet[2012]: I1213 14:07:31.709990 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba562170-4539-4094-8b09-8d65c240a38c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ba562170-4539-4094-8b09-8d65c240a38c" (UID: "ba562170-4539-4094-8b09-8d65c240a38c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:07:31.710476 kubelet[2012]: I1213 14:07:31.710443 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba562170-4539-4094-8b09-8d65c240a38c-kube-api-access-sspnk" (OuterVolumeSpecName: "kube-api-access-sspnk") pod "ba562170-4539-4094-8b09-8d65c240a38c" (UID: "ba562170-4539-4094-8b09-8d65c240a38c"). InnerVolumeSpecName "kube-api-access-sspnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:07:31.714340 env[1212]: time="2024-12-13T14:07:31.714301741Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:07:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3742 runtime=io.containerd.runc.v2\n" Dec 13 14:07:31.716795 env[1212]: time="2024-12-13T14:07:31.716757913Z" level=info msg="StopContainer for \"3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2\" returns successfully" Dec 13 14:07:31.717475 env[1212]: time="2024-12-13T14:07:31.717416516Z" level=info msg="StopPodSandbox for \"0e9912eda912f39e828cf5798921e211176a243f671969f816a68e7328851a6a\"" Dec 13 14:07:31.717541 env[1212]: time="2024-12-13T14:07:31.717481276Z" level=info msg="Container to stop \"7e79e8d38ac43556db215c6cfe915ca9be0110f35953c25a1a75dc1f740aca23\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:07:31.717541 env[1212]: time="2024-12-13T14:07:31.717495716Z" level=info msg="Container to stop \"646fb4c9e9be00f6b7c0e1048b77f696f5dd471c5ee5e0f26566e95d5cd3127d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:07:31.717541 env[1212]: time="2024-12-13T14:07:31.717506837Z" level=info msg="Container to stop \"86ed22e3e391159e14ed7015a62b3270f4243cc44fb46dd9e520340c4d20259c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:07:31.717541 env[1212]: time="2024-12-13T14:07:31.717519917Z" level=info msg="Container to stop \"88e467a57f58c335a0ea07c148a93664a6bfda6ecd39687fc9e1b915f96acd9b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:07:31.717541 env[1212]: time="2024-12-13T14:07:31.717530477Z" level=info msg="Container to stop \"3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:07:31.724402 systemd[1]: cri-containerd-0e9912eda912f39e828cf5798921e211176a243f671969f816a68e7328851a6a.scope: Deactivated successfully. 
Dec 13 14:07:31.742598 env[1212]: time="2024-12-13T14:07:31.742548874Z" level=info msg="shim disconnected" id=0e9912eda912f39e828cf5798921e211176a243f671969f816a68e7328851a6a Dec 13 14:07:31.742598 env[1212]: time="2024-12-13T14:07:31.742595274Z" level=warning msg="cleaning up after shim disconnected" id=0e9912eda912f39e828cf5798921e211176a243f671969f816a68e7328851a6a namespace=k8s.io Dec 13 14:07:31.742598 env[1212]: time="2024-12-13T14:07:31.742606075Z" level=info msg="cleaning up dead shim" Dec 13 14:07:31.749481 env[1212]: time="2024-12-13T14:07:31.749438347Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:07:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3773 runtime=io.containerd.runc.v2\n" Dec 13 14:07:31.749761 env[1212]: time="2024-12-13T14:07:31.749730068Z" level=info msg="TearDown network for sandbox \"0e9912eda912f39e828cf5798921e211176a243f671969f816a68e7328851a6a\" successfully" Dec 13 14:07:31.749797 env[1212]: time="2024-12-13T14:07:31.749760028Z" level=info msg="StopPodSandbox for \"0e9912eda912f39e828cf5798921e211176a243f671969f816a68e7328851a6a\" returns successfully" Dec 13 14:07:31.807740 kubelet[2012]: I1213 14:07:31.807699 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-xtables-lock\") pod \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " Dec 13 14:07:31.807740 kubelet[2012]: I1213 14:07:31.807743 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-lib-modules\") pod \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " Dec 13 14:07:31.807927 kubelet[2012]: I1213 14:07:31.807763 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-hostproc\") pod \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " Dec 13 14:07:31.807927 kubelet[2012]: I1213 14:07:31.807791 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-cilium-config-path\") pod \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " Dec 13 14:07:31.807927 kubelet[2012]: I1213 14:07:31.807810 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-cilium-run\") pod \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " Dec 13 14:07:31.807927 kubelet[2012]: I1213 14:07:31.807829 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-etc-cni-netd\") pod \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " Dec 13 14:07:31.807927 kubelet[2012]: I1213 14:07:31.807847 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-host-proc-sys-net\") pod \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\" (UID: 
\"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " Dec 13 14:07:31.807927 kubelet[2012]: I1213 14:07:31.807839 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94" (UID: "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:31.808084 kubelet[2012]: I1213 14:07:31.807871 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-clustermesh-secrets\") pod \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " Dec 13 14:07:31.808084 kubelet[2012]: I1213 14:07:31.807893 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-bpf-maps\") pod \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " Dec 13 14:07:31.808084 kubelet[2012]: I1213 14:07:31.807911 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-cilium-cgroup\") pod \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " Dec 13 14:07:31.808084 kubelet[2012]: I1213 14:07:31.807909 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-hostproc" (OuterVolumeSpecName: "hostproc") pod "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94" (UID: "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:31.808084 kubelet[2012]: I1213 14:07:31.807928 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-host-proc-sys-kernel\") pod \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " Dec 13 14:07:31.808084 kubelet[2012]: I1213 14:07:31.807946 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-cni-path\") pod \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " Dec 13 14:07:31.810030 kubelet[2012]: I1213 14:07:31.807944 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94" (UID: "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:31.810030 kubelet[2012]: I1213 14:07:31.807967 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhfpv\" (UniqueName: \"kubernetes.io/projected/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-kube-api-access-vhfpv\") pod \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " Dec 13 14:07:31.810030 kubelet[2012]: I1213 14:07:31.807972 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94" (UID: "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:31.810030 kubelet[2012]: I1213 14:07:31.807988 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-hubble-tls\") pod \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\" (UID: \"6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94\") " Dec 13 14:07:31.810030 kubelet[2012]: I1213 14:07:31.807990 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94" (UID: "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:31.810160 kubelet[2012]: I1213 14:07:31.808011 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94" (UID: "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:31.810160 kubelet[2012]: I1213 14:07:31.808032 2012 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:31.810160 kubelet[2012]: I1213 14:07:31.808045 2012 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:31.810160 kubelet[2012]: I1213 14:07:31.808054 2012 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:31.810160 kubelet[2012]: I1213 14:07:31.808066 2012 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sspnk\" (UniqueName: \"kubernetes.io/projected/ba562170-4539-4094-8b09-8d65c240a38c-kube-api-access-sspnk\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:31.810160 kubelet[2012]: I1213 14:07:31.808076 2012 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:31.810160 kubelet[2012]: I1213 14:07:31.808085 2012 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:31.810378 kubelet[2012]: I1213 14:07:31.808096 2012 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ba562170-4539-4094-8b09-8d65c240a38c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:31.810378 kubelet[2012]: I1213 14:07:31.807836 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94" (UID: "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:31.810378 kubelet[2012]: I1213 14:07:31.808129 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-cni-path" (OuterVolumeSpecName: "cni-path") pod "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94" (UID: "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:31.810378 kubelet[2012]: I1213 14:07:31.809772 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94" (UID: "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:07:31.810378 kubelet[2012]: I1213 14:07:31.809824 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94" (UID: "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:31.810610 kubelet[2012]: I1213 14:07:31.809849 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94" (UID: "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:31.810772 kubelet[2012]: I1213 14:07:31.810745 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-kube-api-access-vhfpv" (OuterVolumeSpecName: "kube-api-access-vhfpv") pod "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94" (UID: "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94"). InnerVolumeSpecName "kube-api-access-vhfpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:07:31.810865 kubelet[2012]: I1213 14:07:31.810795 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94" (UID: "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:07:31.810935 kubelet[2012]: I1213 14:07:31.810802 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94" (UID: "6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:07:31.909285 kubelet[2012]: I1213 14:07:31.909166 2012 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:31.909285 kubelet[2012]: I1213 14:07:31.909217 2012 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:31.909285 kubelet[2012]: I1213 14:07:31.909229 2012 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:31.909285 kubelet[2012]: I1213 14:07:31.909241 2012 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:31.909285 kubelet[2012]: I1213 14:07:31.909251 2012 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:31.909285 kubelet[2012]: I1213 14:07:31.909261 2012 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:31.909285 kubelet[2012]: I1213 14:07:31.909269 2012 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:31.910287 kubelet[2012]: I1213 14:07:31.910088 2012 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vhfpv\" (UniqueName: \"kubernetes.io/projected/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-kube-api-access-vhfpv\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:31.910598 kubelet[2012]: I1213 14:07:31.910576 2012 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:32.096418 kubelet[2012]: E1213 14:07:32.096386 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:32.272703 kubelet[2012]: I1213 14:07:32.272598 2012 scope.go:117] "RemoveContainer" containerID="3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1" Dec 13 14:07:32.274193 env[1212]: time="2024-12-13T14:07:32.274129139Z" level=info msg="RemoveContainer for \"3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1\"" Dec 13 14:07:32.277344 systemd[1]: Removed slice kubepods-besteffort-podba562170_4539_4094_8b09_8d65c240a38c.slice. Dec 13 14:07:32.281240 systemd[1]: Removed slice kubepods-burstable-pod6bb3f3b6_584e_4df1_9d4c_aa3f843d9b94.slice. Dec 13 14:07:32.281317 systemd[1]: kubepods-burstable-pod6bb3f3b6_584e_4df1_9d4c_aa3f843d9b94.slice: Consumed 6.608s CPU time. 
Dec 13 14:07:32.281590 env[1212]: time="2024-12-13T14:07:32.281555093Z" level=info msg="RemoveContainer for \"3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1\" returns successfully" Dec 13 14:07:32.281959 kubelet[2012]: I1213 14:07:32.281928 2012 scope.go:117] "RemoveContainer" containerID="3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1" Dec 13 14:07:32.282226 env[1212]: time="2024-12-13T14:07:32.282136656Z" level=error msg="ContainerStatus for \"3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1\": not found" Dec 13 14:07:32.282381 kubelet[2012]: E1213 14:07:32.282363 2012 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1\": not found" containerID="3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1" Dec 13 14:07:32.282999 kubelet[2012]: I1213 14:07:32.282973 2012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1"} err="failed to get container status \"3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1\": rpc error: code = NotFound desc = an error occurred when try to find container \"3571aed1dd0ec656f2f3a826e872bf412793e9b40a8e787203f35d11627161f1\": not found" Dec 13 14:07:32.283081 kubelet[2012]: I1213 14:07:32.283004 2012 scope.go:117] "RemoveContainer" containerID="3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2" Dec 13 14:07:32.284951 env[1212]: time="2024-12-13T14:07:32.284233865Z" level=info msg="RemoveContainer for \"3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2\"" Dec 13 14:07:32.288927 env[1212]: time="2024-12-13T14:07:32.288696526Z" level=info msg="RemoveContainer for \"3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2\" returns successfully" Dec 13 14:07:32.289956 kubelet[2012]: I1213 14:07:32.289921 2012 scope.go:117] "RemoveContainer" containerID="86ed22e3e391159e14ed7015a62b3270f4243cc44fb46dd9e520340c4d20259c" Dec 13 14:07:32.294260 env[1212]: time="2024-12-13T14:07:32.294206271Z" level=info msg="RemoveContainer for \"86ed22e3e391159e14ed7015a62b3270f4243cc44fb46dd9e520340c4d20259c\"" Dec 13 14:07:32.297156 env[1212]: time="2024-12-13T14:07:32.296882603Z" level=info msg="RemoveContainer for \"86ed22e3e391159e14ed7015a62b3270f4243cc44fb46dd9e520340c4d20259c\" returns successfully" Dec 13 14:07:32.297281 kubelet[2012]: I1213 14:07:32.297079 2012 scope.go:117] "RemoveContainer" containerID="88e467a57f58c335a0ea07c148a93664a6bfda6ecd39687fc9e1b915f96acd9b" Dec 13 14:07:32.299007 env[1212]: time="2024-12-13T14:07:32.298981773Z" level=info msg="RemoveContainer for \"88e467a57f58c335a0ea07c148a93664a6bfda6ecd39687fc9e1b915f96acd9b\"" Dec 13 14:07:32.303476 env[1212]: time="2024-12-13T14:07:32.303432993Z" level=info msg="RemoveContainer for \"88e467a57f58c335a0ea07c148a93664a6bfda6ecd39687fc9e1b915f96acd9b\" returns successfully" Dec 13 14:07:32.303631 kubelet[2012]: I1213 14:07:32.303610 2012 scope.go:117] "RemoveContainer" containerID="646fb4c9e9be00f6b7c0e1048b77f696f5dd471c5ee5e0f26566e95d5cd3127d" Dec 13 14:07:32.304697 env[1212]: time="2024-12-13T14:07:32.304670559Z" level=info msg="RemoveContainer for 
\"646fb4c9e9be00f6b7c0e1048b77f696f5dd471c5ee5e0f26566e95d5cd3127d\"" Dec 13 14:07:32.307114 env[1212]: time="2024-12-13T14:07:32.307079570Z" level=info msg="RemoveContainer for \"646fb4c9e9be00f6b7c0e1048b77f696f5dd471c5ee5e0f26566e95d5cd3127d\" returns successfully" Dec 13 14:07:32.307318 kubelet[2012]: I1213 14:07:32.307300 2012 scope.go:117] "RemoveContainer" containerID="7e79e8d38ac43556db215c6cfe915ca9be0110f35953c25a1a75dc1f740aca23" Dec 13 14:07:32.308225 env[1212]: time="2024-12-13T14:07:32.308198855Z" level=info msg="RemoveContainer for \"7e79e8d38ac43556db215c6cfe915ca9be0110f35953c25a1a75dc1f740aca23\"" Dec 13 14:07:32.310453 env[1212]: time="2024-12-13T14:07:32.310415025Z" level=info msg="RemoveContainer for \"7e79e8d38ac43556db215c6cfe915ca9be0110f35953c25a1a75dc1f740aca23\" returns successfully" Dec 13 14:07:32.310618 kubelet[2012]: I1213 14:07:32.310599 2012 scope.go:117] "RemoveContainer" containerID="3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2" Dec 13 14:07:32.310992 env[1212]: time="2024-12-13T14:07:32.310931508Z" level=error msg="ContainerStatus for \"3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2\": not found" Dec 13 14:07:32.311158 kubelet[2012]: E1213 14:07:32.311142 2012 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2\": not found" containerID="3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2" Dec 13 14:07:32.311284 kubelet[2012]: I1213 14:07:32.311270 2012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2"} err="failed to get container status \"3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2\": not found" Dec 13 14:07:32.311360 kubelet[2012]: I1213 14:07:32.311349 2012 scope.go:117] "RemoveContainer" containerID="86ed22e3e391159e14ed7015a62b3270f4243cc44fb46dd9e520340c4d20259c" Dec 13 14:07:32.311669 env[1212]: time="2024-12-13T14:07:32.311588551Z" level=error msg="ContainerStatus for \"86ed22e3e391159e14ed7015a62b3270f4243cc44fb46dd9e520340c4d20259c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"86ed22e3e391159e14ed7015a62b3270f4243cc44fb46dd9e520340c4d20259c\": not found" Dec 13 14:07:32.311820 kubelet[2012]: E1213 14:07:32.311798 2012 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"86ed22e3e391159e14ed7015a62b3270f4243cc44fb46dd9e520340c4d20259c\": not found" containerID="86ed22e3e391159e14ed7015a62b3270f4243cc44fb46dd9e520340c4d20259c" Dec 13 14:07:32.311910 kubelet[2012]: I1213 14:07:32.311898 2012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"86ed22e3e391159e14ed7015a62b3270f4243cc44fb46dd9e520340c4d20259c"} err="failed to get container status \"86ed22e3e391159e14ed7015a62b3270f4243cc44fb46dd9e520340c4d20259c\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"86ed22e3e391159e14ed7015a62b3270f4243cc44fb46dd9e520340c4d20259c\": not found" Dec 13 14:07:32.312019 kubelet[2012]: I1213 14:07:32.311968 2012 scope.go:117] "RemoveContainer" containerID="88e467a57f58c335a0ea07c148a93664a6bfda6ecd39687fc9e1b915f96acd9b" Dec 13 14:07:32.312378 env[1212]: time="2024-12-13T14:07:32.312291514Z" level=error msg="ContainerStatus for \"88e467a57f58c335a0ea07c148a93664a6bfda6ecd39687fc9e1b915f96acd9b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"88e467a57f58c335a0ea07c148a93664a6bfda6ecd39687fc9e1b915f96acd9b\": not found" Dec 13 14:07:32.312517 kubelet[2012]: E1213 14:07:32.312499 2012 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"88e467a57f58c335a0ea07c148a93664a6bfda6ecd39687fc9e1b915f96acd9b\": not found" containerID="88e467a57f58c335a0ea07c148a93664a6bfda6ecd39687fc9e1b915f96acd9b" Dec 13 14:07:32.312603 kubelet[2012]: I1213 14:07:32.312592 2012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"88e467a57f58c335a0ea07c148a93664a6bfda6ecd39687fc9e1b915f96acd9b"} err="failed to get container status \"88e467a57f58c335a0ea07c148a93664a6bfda6ecd39687fc9e1b915f96acd9b\": rpc error: code = NotFound desc = an error occurred when try to find container \"88e467a57f58c335a0ea07c148a93664a6bfda6ecd39687fc9e1b915f96acd9b\": not found" Dec 13 14:07:32.312665 kubelet[2012]: I1213 14:07:32.312655 2012 scope.go:117] "RemoveContainer" containerID="646fb4c9e9be00f6b7c0e1048b77f696f5dd471c5ee5e0f26566e95d5cd3127d" Dec 13 14:07:32.313006 env[1212]: time="2024-12-13T14:07:32.312924437Z" level=error msg="ContainerStatus for \"646fb4c9e9be00f6b7c0e1048b77f696f5dd471c5ee5e0f26566e95d5cd3127d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"646fb4c9e9be00f6b7c0e1048b77f696f5dd471c5ee5e0f26566e95d5cd3127d\": not found" Dec 13 14:07:32.313100 kubelet[2012]: E1213 14:07:32.313086 2012 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"646fb4c9e9be00f6b7c0e1048b77f696f5dd471c5ee5e0f26566e95d5cd3127d\": not found" containerID="646fb4c9e9be00f6b7c0e1048b77f696f5dd471c5ee5e0f26566e95d5cd3127d" Dec 13 14:07:32.313139 kubelet[2012]: I1213 14:07:32.313113 2012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"646fb4c9e9be00f6b7c0e1048b77f696f5dd471c5ee5e0f26566e95d5cd3127d"} err="failed to get container status \"646fb4c9e9be00f6b7c0e1048b77f696f5dd471c5ee5e0f26566e95d5cd3127d\": rpc error: code = NotFound desc = an error occurred when try to find container \"646fb4c9e9be00f6b7c0e1048b77f696f5dd471c5ee5e0f26566e95d5cd3127d\": not found" Dec 13 14:07:32.313139 kubelet[2012]: I1213 14:07:32.313124 2012 scope.go:117] "RemoveContainer" containerID="7e79e8d38ac43556db215c6cfe915ca9be0110f35953c25a1a75dc1f740aca23" Dec 13 14:07:32.313430 env[1212]: time="2024-12-13T14:07:32.313340959Z" level=error msg="ContainerStatus for \"7e79e8d38ac43556db215c6cfe915ca9be0110f35953c25a1a75dc1f740aca23\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e79e8d38ac43556db215c6cfe915ca9be0110f35953c25a1a75dc1f740aca23\": not found" Dec 13 14:07:32.313510 kubelet[2012]: E1213 14:07:32.313490 2012 remote_runtime.go:432] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e79e8d38ac43556db215c6cfe915ca9be0110f35953c25a1a75dc1f740aca23\": not found" containerID="7e79e8d38ac43556db215c6cfe915ca9be0110f35953c25a1a75dc1f740aca23" Dec 13 14:07:32.313546 kubelet[2012]: I1213 14:07:32.313524 2012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e79e8d38ac43556db215c6cfe915ca9be0110f35953c25a1a75dc1f740aca23"} err="failed to get container status \"7e79e8d38ac43556db215c6cfe915ca9be0110f35953c25a1a75dc1f740aca23\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e79e8d38ac43556db215c6cfe915ca9be0110f35953c25a1a75dc1f740aca23\": not found" Dec 13 14:07:32.575666 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e1400fdce88d563782ac804bb0a2809b10fb27b94740ffb45926e2d1e6613b2-rootfs.mount: Deactivated successfully. Dec 13 14:07:32.575772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2562b04a95884604ee2e53d512f0249d3f7a0fb9ef50a02898dbc8c1e64f22c-rootfs.mount: Deactivated successfully. Dec 13 14:07:32.575842 systemd[1]: var-lib-kubelet-pods-ba562170\x2d4539\x2d4094\x2d8b09\x2d8d65c240a38c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsspnk.mount: Deactivated successfully. Dec 13 14:07:32.575897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e9912eda912f39e828cf5798921e211176a243f671969f816a68e7328851a6a-rootfs.mount: Deactivated successfully. Dec 13 14:07:32.575944 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e9912eda912f39e828cf5798921e211176a243f671969f816a68e7328851a6a-shm.mount: Deactivated successfully. Dec 13 14:07:32.575997 systemd[1]: var-lib-kubelet-pods-6bb3f3b6\x2d584e\x2d4df1\x2d9d4c\x2daa3f843d9b94-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvhfpv.mount: Deactivated successfully. Dec 13 14:07:32.576053 systemd[1]: var-lib-kubelet-pods-6bb3f3b6\x2d584e\x2d4df1\x2d9d4c\x2daa3f843d9b94-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:07:32.576100 systemd[1]: var-lib-kubelet-pods-6bb3f3b6\x2d584e\x2d4df1\x2d9d4c\x2daa3f843d9b94-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:07:33.099089 kubelet[2012]: I1213 14:07:33.099048 2012 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94" path="/var/lib/kubelet/pods/6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94/volumes" Dec 13 14:07:33.099664 kubelet[2012]: I1213 14:07:33.099634 2012 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ba562170-4539-4094-8b09-8d65c240a38c" path="/var/lib/kubelet/pods/ba562170-4539-4094-8b09-8d65c240a38c/volumes" Dec 13 14:07:33.524888 sshd[3626]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:33.527975 systemd[1]: sshd@21-10.0.0.68:22-10.0.0.1:60898.service: Deactivated successfully. Dec 13 14:07:33.528614 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 14:07:33.528770 systemd[1]: session-22.scope: Consumed 1.606s CPU time. Dec 13 14:07:33.529153 systemd-logind[1199]: Session 22 logged out. Waiting for processes to exit. Dec 13 14:07:33.530419 systemd[1]: Started sshd@22-10.0.0.68:22-10.0.0.1:40118.service. Dec 13 14:07:33.531337 systemd-logind[1199]: Removed session 22. 
Dec 13 14:07:33.569032 sshd[3790]: Accepted publickey for core from 10.0.0.1 port 40118 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:07:33.570290 sshd[3790]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:33.573653 systemd-logind[1199]: New session 23 of user core. Dec 13 14:07:33.574515 systemd[1]: Started session-23.scope. Dec 13 14:07:34.668674 sshd[3790]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:34.672473 systemd[1]: Started sshd@23-10.0.0.68:22-10.0.0.1:40124.service. Dec 13 14:07:34.676567 systemd[1]: sshd@22-10.0.0.68:22-10.0.0.1:40118.service: Deactivated successfully. Dec 13 14:07:34.677233 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 14:07:34.677393 systemd[1]: session-23.scope: Consumed 1.020s CPU time. Dec 13 14:07:34.680518 systemd-logind[1199]: Session 23 logged out. Waiting for processes to exit. Dec 13 14:07:34.681363 systemd-logind[1199]: Removed session 23. Dec 13 14:07:34.685397 kubelet[2012]: I1213 14:07:34.685361 2012 topology_manager.go:215] "Topology Admit Handler" podUID="24f37a67-de22-442a-9d49-5de5a4887f94" podNamespace="kube-system" podName="cilium-vnh8s" Dec 13 14:07:34.685679 kubelet[2012]: E1213 14:07:34.685417 2012 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94" containerName="mount-cgroup" Dec 13 14:07:34.685679 kubelet[2012]: E1213 14:07:34.685430 2012 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94" containerName="mount-bpf-fs" Dec 13 14:07:34.685679 kubelet[2012]: E1213 14:07:34.685437 2012 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ba562170-4539-4094-8b09-8d65c240a38c" containerName="cilium-operator" Dec 13 14:07:34.685679 kubelet[2012]: E1213 14:07:34.685444 2012 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94" containerName="cilium-agent" Dec 13 14:07:34.685679 kubelet[2012]: E1213 14:07:34.685450 2012 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94" containerName="apply-sysctl-overwrites" Dec 13 14:07:34.685679 kubelet[2012]: E1213 14:07:34.685458 2012 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94" containerName="clean-cilium-state" Dec 13 14:07:34.685679 kubelet[2012]: I1213 14:07:34.685479 2012 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba562170-4539-4094-8b09-8d65c240a38c" containerName="cilium-operator" Dec 13 14:07:34.685679 kubelet[2012]: I1213 14:07:34.685486 2012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bb3f3b6-584e-4df1-9d4c-aa3f843d9b94" containerName="cilium-agent" Dec 13 14:07:34.692818 systemd[1]: Created slice kubepods-burstable-pod24f37a67_de22_442a_9d49_5de5a4887f94.slice. Dec 13 14:07:34.714417 sshd[3801]: Accepted publickey for core from 10.0.0.1 port 40124 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:07:34.716270 sshd[3801]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:34.720448 systemd[1]: Started session-24.scope. Dec 13 14:07:34.720749 systemd-logind[1199]: New session 24 of user core. 
Dec 13 14:07:34.734332 kubelet[2012]: I1213 14:07:34.734290 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-bpf-maps\") pod \"cilium-vnh8s\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " pod="kube-system/cilium-vnh8s" Dec 13 14:07:34.734332 kubelet[2012]: I1213 14:07:34.734342 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/24f37a67-de22-442a-9d49-5de5a4887f94-clustermesh-secrets\") pod \"cilium-vnh8s\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " pod="kube-system/cilium-vnh8s" Dec 13 14:07:34.734448 kubelet[2012]: I1213 14:07:34.734364 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24f37a67-de22-442a-9d49-5de5a4887f94-cilium-config-path\") pod \"cilium-vnh8s\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " pod="kube-system/cilium-vnh8s" Dec 13 14:07:34.734448 kubelet[2012]: I1213 14:07:34.734385 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/24f37a67-de22-442a-9d49-5de5a4887f94-cilium-ipsec-secrets\") pod \"cilium-vnh8s\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " pod="kube-system/cilium-vnh8s" Dec 13 14:07:34.734448 kubelet[2012]: I1213 14:07:34.734405 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-host-proc-sys-net\") pod \"cilium-vnh8s\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " pod="kube-system/cilium-vnh8s" Dec 13 14:07:34.734448 kubelet[2012]: I1213 14:07:34.734426 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-hostproc\") pod \"cilium-vnh8s\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " pod="kube-system/cilium-vnh8s" Dec 13 14:07:34.734448 kubelet[2012]: I1213 14:07:34.734446 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-cni-path\") pod \"cilium-vnh8s\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " pod="kube-system/cilium-vnh8s" Dec 13 14:07:34.734576 kubelet[2012]: I1213 14:07:34.734464 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/24f37a67-de22-442a-9d49-5de5a4887f94-hubble-tls\") pod \"cilium-vnh8s\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " pod="kube-system/cilium-vnh8s" Dec 13 14:07:34.734576 kubelet[2012]: I1213 14:07:34.734487 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-cilium-cgroup\") pod \"cilium-vnh8s\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " pod="kube-system/cilium-vnh8s" Dec 13 14:07:34.734576 kubelet[2012]: I1213 14:07:34.734508 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-lib-modules\") pod \"cilium-vnh8s\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " pod="kube-system/cilium-vnh8s" Dec 13 14:07:34.734576 kubelet[2012]: I1213 14:07:34.734528 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twt65\" (UniqueName: \"kubernetes.io/projected/24f37a67-de22-442a-9d49-5de5a4887f94-kube-api-access-twt65\") pod \"cilium-vnh8s\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " pod="kube-system/cilium-vnh8s" Dec 13 14:07:34.734576 kubelet[2012]: I1213 14:07:34.734548 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-etc-cni-netd\") pod \"cilium-vnh8s\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " pod="kube-system/cilium-vnh8s" Dec 13 14:07:34.734576 kubelet[2012]: I1213 14:07:34.734567 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-xtables-lock\") pod \"cilium-vnh8s\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " pod="kube-system/cilium-vnh8s" Dec 13 14:07:34.734703 kubelet[2012]: I1213 14:07:34.734587 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-cilium-run\") pod \"cilium-vnh8s\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " pod="kube-system/cilium-vnh8s" Dec 13 14:07:34.734703 kubelet[2012]: I1213 14:07:34.734608 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-host-proc-sys-kernel\") pod \"cilium-vnh8s\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " pod="kube-system/cilium-vnh8s" Dec 13 14:07:34.849644 sshd[3801]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:34.852616 systemd[1]: Started sshd@24-10.0.0.68:22-10.0.0.1:40128.service. Dec 13 14:07:34.859227 kubelet[2012]: E1213 14:07:34.859196 2012 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path kube-api-access-twt65], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-vnh8s" podUID="24f37a67-de22-442a-9d49-5de5a4887f94" Dec 13 14:07:34.861150 systemd[1]: sshd@23-10.0.0.68:22-10.0.0.1:40124.service: Deactivated successfully. Dec 13 14:07:34.861865 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 14:07:34.868285 systemd-logind[1199]: Session 24 logged out. Waiting for processes to exit. Dec 13 14:07:34.869356 systemd-logind[1199]: Removed session 24. Dec 13 14:07:34.896427 sshd[3817]: Accepted publickey for core from 10.0.0.1 port 40128 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:07:34.898074 sshd[3817]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:34.901243 systemd-logind[1199]: New session 25 of user core. Dec 13 14:07:34.902097 systemd[1]: Started session-25.scope. 
Dec 13 14:07:35.143623 kubelet[2012]: E1213 14:07:35.143585 2012 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:07:35.339212 kubelet[2012]: I1213 14:07:35.339157 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/24f37a67-de22-442a-9d49-5de5a4887f94-clustermesh-secrets\") pod \"24f37a67-de22-442a-9d49-5de5a4887f94\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " Dec 13 14:07:35.339398 kubelet[2012]: I1213 14:07:35.339384 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24f37a67-de22-442a-9d49-5de5a4887f94-cilium-config-path\") pod \"24f37a67-de22-442a-9d49-5de5a4887f94\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " Dec 13 14:07:35.339473 kubelet[2012]: I1213 14:07:35.339462 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/24f37a67-de22-442a-9d49-5de5a4887f94-cilium-ipsec-secrets\") pod \"24f37a67-de22-442a-9d49-5de5a4887f94\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " Dec 13 14:07:35.339541 kubelet[2012]: I1213 14:07:35.339530 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-etc-cni-netd\") pod \"24f37a67-de22-442a-9d49-5de5a4887f94\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " Dec 13 14:07:35.339691 kubelet[2012]: I1213 14:07:35.339678 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-cni-path\") pod \"24f37a67-de22-442a-9d49-5de5a4887f94\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " Dec 13 14:07:35.339779 kubelet[2012]: I1213 14:07:35.339767 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-bpf-maps\") pod \"24f37a67-de22-442a-9d49-5de5a4887f94\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " Dec 13 14:07:35.339849 kubelet[2012]: I1213 14:07:35.339838 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/24f37a67-de22-442a-9d49-5de5a4887f94-hubble-tls\") pod \"24f37a67-de22-442a-9d49-5de5a4887f94\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " Dec 13 14:07:35.339914 kubelet[2012]: I1213 14:07:35.339904 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-lib-modules\") pod \"24f37a67-de22-442a-9d49-5de5a4887f94\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " Dec 13 14:07:35.339981 kubelet[2012]: I1213 14:07:35.339972 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twt65\" (UniqueName: \"kubernetes.io/projected/24f37a67-de22-442a-9d49-5de5a4887f94-kube-api-access-twt65\") pod \"24f37a67-de22-442a-9d49-5de5a4887f94\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " Dec 13 14:07:35.340049 kubelet[2012]: I1213 14:07:35.340040 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-hostproc\") pod \"24f37a67-de22-442a-9d49-5de5a4887f94\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " Dec 13 14:07:35.340125 kubelet[2012]: I1213 14:07:35.340115 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-cilium-run\") pod \"24f37a67-de22-442a-9d49-5de5a4887f94\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " Dec 13 14:07:35.340213 kubelet[2012]: I1213 14:07:35.340202 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-host-proc-sys-kernel\") pod \"24f37a67-de22-442a-9d49-5de5a4887f94\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " Dec 13 14:07:35.340287 kubelet[2012]: I1213 14:07:35.340277 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-host-proc-sys-net\") pod \"24f37a67-de22-442a-9d49-5de5a4887f94\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " Dec 13 14:07:35.340370 kubelet[2012]: I1213 14:07:35.340358 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-cilium-cgroup\") pod \"24f37a67-de22-442a-9d49-5de5a4887f94\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " Dec 13 14:07:35.340452 kubelet[2012]: I1213 14:07:35.340442 2012 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-xtables-lock\") pod \"24f37a67-de22-442a-9d49-5de5a4887f94\" (UID: \"24f37a67-de22-442a-9d49-5de5a4887f94\") " Dec 13 14:07:35.340575 kubelet[2012]: I1213 14:07:35.340559 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "24f37a67-de22-442a-9d49-5de5a4887f94" (UID: "24f37a67-de22-442a-9d49-5de5a4887f94"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:35.340655 kubelet[2012]: I1213 14:07:35.340641 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "24f37a67-de22-442a-9d49-5de5a4887f94" (UID: "24f37a67-de22-442a-9d49-5de5a4887f94"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:35.340724 kubelet[2012]: I1213 14:07:35.340711 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-cni-path" (OuterVolumeSpecName: "cni-path") pod "24f37a67-de22-442a-9d49-5de5a4887f94" (UID: "24f37a67-de22-442a-9d49-5de5a4887f94"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:35.340790 kubelet[2012]: I1213 14:07:35.340777 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "24f37a67-de22-442a-9d49-5de5a4887f94" (UID: "24f37a67-de22-442a-9d49-5de5a4887f94"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:35.341204 kubelet[2012]: I1213 14:07:35.341149 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24f37a67-de22-442a-9d49-5de5a4887f94-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "24f37a67-de22-442a-9d49-5de5a4887f94" (UID: "24f37a67-de22-442a-9d49-5de5a4887f94"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:07:35.341279 kubelet[2012]: I1213 14:07:35.341218 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "24f37a67-de22-442a-9d49-5de5a4887f94" (UID: "24f37a67-de22-442a-9d49-5de5a4887f94"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:35.341279 kubelet[2012]: I1213 14:07:35.341238 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "24f37a67-de22-442a-9d49-5de5a4887f94" (UID: "24f37a67-de22-442a-9d49-5de5a4887f94"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:35.343354 systemd[1]: var-lib-kubelet-pods-24f37a67\x2dde22\x2d442a\x2d9d49\x2d5de5a4887f94-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:07:35.343448 systemd[1]: var-lib-kubelet-pods-24f37a67\x2dde22\x2d442a\x2d9d49\x2d5de5a4887f94-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:07:35.343518 kubelet[2012]: I1213 14:07:35.343374 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24f37a67-de22-442a-9d49-5de5a4887f94-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "24f37a67-de22-442a-9d49-5de5a4887f94" (UID: "24f37a67-de22-442a-9d49-5de5a4887f94"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:07:35.343518 kubelet[2012]: I1213 14:07:35.343413 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-hostproc" (OuterVolumeSpecName: "hostproc") pod "24f37a67-de22-442a-9d49-5de5a4887f94" (UID: "24f37a67-de22-442a-9d49-5de5a4887f94"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:35.343518 kubelet[2012]: I1213 14:07:35.343434 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "24f37a67-de22-442a-9d49-5de5a4887f94" (UID: "24f37a67-de22-442a-9d49-5de5a4887f94"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:35.343518 kubelet[2012]: I1213 14:07:35.343450 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "24f37a67-de22-442a-9d49-5de5a4887f94" (UID: "24f37a67-de22-442a-9d49-5de5a4887f94"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:35.343518 kubelet[2012]: I1213 14:07:35.343472 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "24f37a67-de22-442a-9d49-5de5a4887f94" (UID: "24f37a67-de22-442a-9d49-5de5a4887f94"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:07:35.345290 kubelet[2012]: I1213 14:07:35.345264 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24f37a67-de22-442a-9d49-5de5a4887f94-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "24f37a67-de22-442a-9d49-5de5a4887f94" (UID: "24f37a67-de22-442a-9d49-5de5a4887f94"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:07:35.345665 kubelet[2012]: I1213 14:07:35.345623 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24f37a67-de22-442a-9d49-5de5a4887f94-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "24f37a67-de22-442a-9d49-5de5a4887f94" (UID: "24f37a67-de22-442a-9d49-5de5a4887f94"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:07:35.345801 kubelet[2012]: I1213 14:07:35.345765 2012 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24f37a67-de22-442a-9d49-5de5a4887f94-kube-api-access-twt65" (OuterVolumeSpecName: "kube-api-access-twt65") pod "24f37a67-de22-442a-9d49-5de5a4887f94" (UID: "24f37a67-de22-442a-9d49-5de5a4887f94"). InnerVolumeSpecName "kube-api-access-twt65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:07:35.440857 kubelet[2012]: I1213 14:07:35.440763 2012 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/24f37a67-de22-442a-9d49-5de5a4887f94-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:35.440857 kubelet[2012]: I1213 14:07:35.440795 2012 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:35.440857 kubelet[2012]: I1213 14:07:35.440809 2012 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-twt65\" (UniqueName: \"kubernetes.io/projected/24f37a67-de22-442a-9d49-5de5a4887f94-kube-api-access-twt65\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:35.440857 kubelet[2012]: I1213 14:07:35.440821 2012 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:35.440857 kubelet[2012]: I1213 14:07:35.440830 2012 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:35.440857 kubelet[2012]: I1213 14:07:35.440841 2012 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:35.440857 kubelet[2012]: I1213 14:07:35.440850 2012 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:35.441121 kubelet[2012]: I1213 14:07:35.440878 2012 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:35.441121 kubelet[2012]: I1213 14:07:35.440889 2012 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:35.441121 kubelet[2012]: I1213 14:07:35.440898 2012 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/24f37a67-de22-442a-9d49-5de5a4887f94-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:35.441121 kubelet[2012]: I1213 14:07:35.440909 2012 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24f37a67-de22-442a-9d49-5de5a4887f94-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:35.441121 kubelet[2012]: I1213 14:07:35.440919 2012 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/24f37a67-de22-442a-9d49-5de5a4887f94-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:35.441121 kubelet[2012]: I1213 14:07:35.440927 2012 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-etc-cni-netd\") on 
node \"localhost\" DevicePath \"\"" Dec 13 14:07:35.441121 kubelet[2012]: I1213 14:07:35.440936 2012 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:35.441121 kubelet[2012]: I1213 14:07:35.440945 2012 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/24f37a67-de22-442a-9d49-5de5a4887f94-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 13 14:07:35.839800 systemd[1]: var-lib-kubelet-pods-24f37a67\x2dde22\x2d442a\x2d9d49\x2d5de5a4887f94-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtwt65.mount: Deactivated successfully. Dec 13 14:07:35.839901 systemd[1]: var-lib-kubelet-pods-24f37a67\x2dde22\x2d442a\x2d9d49\x2d5de5a4887f94-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:07:36.097021 kubelet[2012]: E1213 14:07:36.096919 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:36.286807 systemd[1]: Removed slice kubepods-burstable-pod24f37a67_de22_442a_9d49_5de5a4887f94.slice. Dec 13 14:07:36.315202 kubelet[2012]: I1213 14:07:36.315157 2012 topology_manager.go:215] "Topology Admit Handler" podUID="574467d0-09bd-4c9f-beb0-e3cb749a8c9b" podNamespace="kube-system" podName="cilium-pt7xg" Dec 13 14:07:36.321146 systemd[1]: Created slice kubepods-burstable-pod574467d0_09bd_4c9f_beb0_e3cb749a8c9b.slice. Dec 13 14:07:36.345301 kubelet[2012]: I1213 14:07:36.345269 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/574467d0-09bd-4c9f-beb0-e3cb749a8c9b-etc-cni-netd\") pod \"cilium-pt7xg\" (UID: \"574467d0-09bd-4c9f-beb0-e3cb749a8c9b\") " pod="kube-system/cilium-pt7xg" Dec 13 14:07:36.345435 kubelet[2012]: I1213 14:07:36.345313 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/574467d0-09bd-4c9f-beb0-e3cb749a8c9b-cilium-ipsec-secrets\") pod \"cilium-pt7xg\" (UID: \"574467d0-09bd-4c9f-beb0-e3cb749a8c9b\") " pod="kube-system/cilium-pt7xg" Dec 13 14:07:36.345435 kubelet[2012]: I1213 14:07:36.345343 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/574467d0-09bd-4c9f-beb0-e3cb749a8c9b-host-proc-sys-kernel\") pod \"cilium-pt7xg\" (UID: \"574467d0-09bd-4c9f-beb0-e3cb749a8c9b\") " pod="kube-system/cilium-pt7xg" Dec 13 14:07:36.345435 kubelet[2012]: I1213 14:07:36.345430 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/574467d0-09bd-4c9f-beb0-e3cb749a8c9b-cilium-run\") pod \"cilium-pt7xg\" (UID: \"574467d0-09bd-4c9f-beb0-e3cb749a8c9b\") " pod="kube-system/cilium-pt7xg" Dec 13 14:07:36.345504 kubelet[2012]: I1213 14:07:36.345456 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/574467d0-09bd-4c9f-beb0-e3cb749a8c9b-hubble-tls\") pod \"cilium-pt7xg\" (UID: \"574467d0-09bd-4c9f-beb0-e3cb749a8c9b\") " pod="kube-system/cilium-pt7xg" Dec 13 14:07:36.345504 kubelet[2012]: I1213 
14:07:36.345477 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvvxw\" (UniqueName: \"kubernetes.io/projected/574467d0-09bd-4c9f-beb0-e3cb749a8c9b-kube-api-access-fvvxw\") pod \"cilium-pt7xg\" (UID: \"574467d0-09bd-4c9f-beb0-e3cb749a8c9b\") " pod="kube-system/cilium-pt7xg" Dec 13 14:07:36.345504 kubelet[2012]: I1213 14:07:36.345497 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/574467d0-09bd-4c9f-beb0-e3cb749a8c9b-hostproc\") pod \"cilium-pt7xg\" (UID: \"574467d0-09bd-4c9f-beb0-e3cb749a8c9b\") " pod="kube-system/cilium-pt7xg" Dec 13 14:07:36.345572 kubelet[2012]: I1213 14:07:36.345516 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/574467d0-09bd-4c9f-beb0-e3cb749a8c9b-host-proc-sys-net\") pod \"cilium-pt7xg\" (UID: \"574467d0-09bd-4c9f-beb0-e3cb749a8c9b\") " pod="kube-system/cilium-pt7xg" Dec 13 14:07:36.345572 kubelet[2012]: I1213 14:07:36.345535 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/574467d0-09bd-4c9f-beb0-e3cb749a8c9b-bpf-maps\") pod \"cilium-pt7xg\" (UID: \"574467d0-09bd-4c9f-beb0-e3cb749a8c9b\") " pod="kube-system/cilium-pt7xg" Dec 13 14:07:36.345572 kubelet[2012]: I1213 14:07:36.345554 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/574467d0-09bd-4c9f-beb0-e3cb749a8c9b-lib-modules\") pod \"cilium-pt7xg\" (UID: \"574467d0-09bd-4c9f-beb0-e3cb749a8c9b\") " pod="kube-system/cilium-pt7xg" Dec 13 14:07:36.345638 kubelet[2012]: I1213 14:07:36.345574 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/574467d0-09bd-4c9f-beb0-e3cb749a8c9b-cilium-config-path\") pod \"cilium-pt7xg\" (UID: \"574467d0-09bd-4c9f-beb0-e3cb749a8c9b\") " pod="kube-system/cilium-pt7xg" Dec 13 14:07:36.345638 kubelet[2012]: I1213 14:07:36.345593 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/574467d0-09bd-4c9f-beb0-e3cb749a8c9b-xtables-lock\") pod \"cilium-pt7xg\" (UID: \"574467d0-09bd-4c9f-beb0-e3cb749a8c9b\") " pod="kube-system/cilium-pt7xg" Dec 13 14:07:36.345638 kubelet[2012]: I1213 14:07:36.345612 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/574467d0-09bd-4c9f-beb0-e3cb749a8c9b-clustermesh-secrets\") pod \"cilium-pt7xg\" (UID: \"574467d0-09bd-4c9f-beb0-e3cb749a8c9b\") " pod="kube-system/cilium-pt7xg" Dec 13 14:07:36.345638 kubelet[2012]: I1213 14:07:36.345631 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/574467d0-09bd-4c9f-beb0-e3cb749a8c9b-cilium-cgroup\") pod \"cilium-pt7xg\" (UID: \"574467d0-09bd-4c9f-beb0-e3cb749a8c9b\") " pod="kube-system/cilium-pt7xg" Dec 13 14:07:36.345723 kubelet[2012]: I1213 14:07:36.345648 2012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/574467d0-09bd-4c9f-beb0-e3cb749a8c9b-cni-path\") pod \"cilium-pt7xg\" (UID: \"574467d0-09bd-4c9f-beb0-e3cb749a8c9b\") " pod="kube-system/cilium-pt7xg" Dec 13 14:07:36.543466 kubelet[2012]: I1213 14:07:36.543359 2012 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:07:36Z","lastTransitionTime":"2024-12-13T14:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:07:36.623914 kubelet[2012]: E1213 14:07:36.623884 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:36.624967 env[1212]: time="2024-12-13T14:07:36.624585771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pt7xg,Uid:574467d0-09bd-4c9f-beb0-e3cb749a8c9b,Namespace:kube-system,Attempt:0,}" Dec 13 14:07:36.635893 env[1212]: time="2024-12-13T14:07:36.635823897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:07:36.635893 env[1212]: time="2024-12-13T14:07:36.635861857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:07:36.635893 env[1212]: time="2024-12-13T14:07:36.635871817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:07:36.636291 env[1212]: time="2024-12-13T14:07:36.636252219Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/566cf27ffaf1a4aca0ffac35e1145861a1602711376de9f85452f93746924b2f pid=3848 runtime=io.containerd.runc.v2 Dec 13 14:07:36.646214 systemd[1]: Started cri-containerd-566cf27ffaf1a4aca0ffac35e1145861a1602711376de9f85452f93746924b2f.scope. Dec 13 14:07:36.675134 env[1212]: time="2024-12-13T14:07:36.675090499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pt7xg,Uid:574467d0-09bd-4c9f-beb0-e3cb749a8c9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"566cf27ffaf1a4aca0ffac35e1145861a1602711376de9f85452f93746924b2f\"" Dec 13 14:07:36.675678 kubelet[2012]: E1213 14:07:36.675657 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:36.678810 env[1212]: time="2024-12-13T14:07:36.678751794Z" level=info msg="CreateContainer within sandbox \"566cf27ffaf1a4aca0ffac35e1145861a1602711376de9f85452f93746924b2f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:07:36.687658 env[1212]: time="2024-12-13T14:07:36.687615630Z" level=info msg="CreateContainer within sandbox \"566cf27ffaf1a4aca0ffac35e1145861a1602711376de9f85452f93746924b2f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"29349b990778ea596ce356a11639666b60a9eab44326fe51de3413a6a7ffd0a3\"" Dec 13 14:07:36.688197 env[1212]: time="2024-12-13T14:07:36.688150872Z" level=info msg="StartContainer for \"29349b990778ea596ce356a11639666b60a9eab44326fe51de3413a6a7ffd0a3\"" Dec 13 14:07:36.702050 systemd[1]: Started cri-containerd-29349b990778ea596ce356a11639666b60a9eab44326fe51de3413a6a7ffd0a3.scope. 
Dec 13 14:07:36.729961 env[1212]: time="2024-12-13T14:07:36.729900924Z" level=info msg="StartContainer for \"29349b990778ea596ce356a11639666b60a9eab44326fe51de3413a6a7ffd0a3\" returns successfully" Dec 13 14:07:36.739042 systemd[1]: cri-containerd-29349b990778ea596ce356a11639666b60a9eab44326fe51de3413a6a7ffd0a3.scope: Deactivated successfully. Dec 13 14:07:36.764037 env[1212]: time="2024-12-13T14:07:36.763993224Z" level=info msg="shim disconnected" id=29349b990778ea596ce356a11639666b60a9eab44326fe51de3413a6a7ffd0a3 Dec 13 14:07:36.764283 env[1212]: time="2024-12-13T14:07:36.764262625Z" level=warning msg="cleaning up after shim disconnected" id=29349b990778ea596ce356a11639666b60a9eab44326fe51de3413a6a7ffd0a3 namespace=k8s.io Dec 13 14:07:36.764365 env[1212]: time="2024-12-13T14:07:36.764350266Z" level=info msg="cleaning up dead shim" Dec 13 14:07:36.770920 env[1212]: time="2024-12-13T14:07:36.770887733Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:07:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3934 runtime=io.containerd.runc.v2\n" Dec 13 14:07:37.098534 kubelet[2012]: I1213 14:07:37.098491 2012 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="24f37a67-de22-442a-9d49-5de5a4887f94" path="/var/lib/kubelet/pods/24f37a67-de22-442a-9d49-5de5a4887f94/volumes" Dec 13 14:07:37.285911 kubelet[2012]: E1213 14:07:37.285751 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:37.288529 env[1212]: time="2024-12-13T14:07:37.288139072Z" level=info msg="CreateContainer within sandbox \"566cf27ffaf1a4aca0ffac35e1145861a1602711376de9f85452f93746924b2f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:07:37.297468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2491211677.mount: Deactivated successfully. Dec 13 14:07:37.304227 env[1212]: time="2024-12-13T14:07:37.304166896Z" level=info msg="CreateContainer within sandbox \"566cf27ffaf1a4aca0ffac35e1145861a1602711376de9f85452f93746924b2f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2553b0656f7066b92a8579161099d654b6809d59d274b8ae9293f6c21cc036fa\"" Dec 13 14:07:37.305014 env[1212]: time="2024-12-13T14:07:37.304983779Z" level=info msg="StartContainer for \"2553b0656f7066b92a8579161099d654b6809d59d274b8ae9293f6c21cc036fa\"" Dec 13 14:07:37.321820 systemd[1]: Started cri-containerd-2553b0656f7066b92a8579161099d654b6809d59d274b8ae9293f6c21cc036fa.scope. Dec 13 14:07:37.350943 env[1212]: time="2024-12-13T14:07:37.350834083Z" level=info msg="StartContainer for \"2553b0656f7066b92a8579161099d654b6809d59d274b8ae9293f6c21cc036fa\" returns successfully" Dec 13 14:07:37.358984 systemd[1]: cri-containerd-2553b0656f7066b92a8579161099d654b6809d59d274b8ae9293f6c21cc036fa.scope: Deactivated successfully. 
Dec 13 14:07:37.399665 env[1212]: time="2024-12-13T14:07:37.399612039Z" level=info msg="shim disconnected" id=2553b0656f7066b92a8579161099d654b6809d59d274b8ae9293f6c21cc036fa Dec 13 14:07:37.399665 env[1212]: time="2024-12-13T14:07:37.399657439Z" level=warning msg="cleaning up after shim disconnected" id=2553b0656f7066b92a8579161099d654b6809d59d274b8ae9293f6c21cc036fa namespace=k8s.io Dec 13 14:07:37.399665 env[1212]: time="2024-12-13T14:07:37.399665719Z" level=info msg="cleaning up dead shim" Dec 13 14:07:37.405975 env[1212]: time="2024-12-13T14:07:37.405939624Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:07:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3995 runtime=io.containerd.runc.v2\n" Dec 13 14:07:37.840020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2553b0656f7066b92a8579161099d654b6809d59d274b8ae9293f6c21cc036fa-rootfs.mount: Deactivated successfully. Dec 13 14:07:38.289546 kubelet[2012]: E1213 14:07:38.289443 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:38.291289 env[1212]: time="2024-12-13T14:07:38.291241383Z" level=info msg="CreateContainer within sandbox \"566cf27ffaf1a4aca0ffac35e1145861a1602711376de9f85452f93746924b2f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:07:38.306737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2209146513.mount: Deactivated successfully. Dec 13 14:07:38.312213 env[1212]: time="2024-12-13T14:07:38.311292501Z" level=info msg="CreateContainer within sandbox \"566cf27ffaf1a4aca0ffac35e1145861a1602711376de9f85452f93746924b2f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"150af47c94eb50588fa928794571f814d884cb9ccbb4a67b501878774062f54f\"" Dec 13 14:07:38.313728 env[1212]: time="2024-12-13T14:07:38.313697711Z" level=info msg="StartContainer for \"150af47c94eb50588fa928794571f814d884cb9ccbb4a67b501878774062f54f\"" Dec 13 14:07:38.330974 systemd[1]: Started cri-containerd-150af47c94eb50588fa928794571f814d884cb9ccbb4a67b501878774062f54f.scope. Dec 13 14:07:38.364208 env[1212]: time="2024-12-13T14:07:38.363312584Z" level=info msg="StartContainer for \"150af47c94eb50588fa928794571f814d884cb9ccbb4a67b501878774062f54f\" returns successfully" Dec 13 14:07:38.365779 systemd[1]: cri-containerd-150af47c94eb50588fa928794571f814d884cb9ccbb4a67b501878774062f54f.scope: Deactivated successfully. Dec 13 14:07:38.386790 env[1212]: time="2024-12-13T14:07:38.386744716Z" level=info msg="shim disconnected" id=150af47c94eb50588fa928794571f814d884cb9ccbb4a67b501878774062f54f Dec 13 14:07:38.386790 env[1212]: time="2024-12-13T14:07:38.386789836Z" level=warning msg="cleaning up after shim disconnected" id=150af47c94eb50588fa928794571f814d884cb9ccbb4a67b501878774062f54f namespace=k8s.io Dec 13 14:07:38.387030 env[1212]: time="2024-12-13T14:07:38.386800956Z" level=info msg="cleaning up dead shim" Dec 13 14:07:38.392916 env[1212]: time="2024-12-13T14:07:38.392875700Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:07:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4051 runtime=io.containerd.runc.v2\n" Dec 13 14:07:38.840092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-150af47c94eb50588fa928794571f814d884cb9ccbb4a67b501878774062f54f-rootfs.mount: Deactivated successfully. 
Dec 13 14:07:39.293558 kubelet[2012]: E1213 14:07:39.293448 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:39.296343 env[1212]: time="2024-12-13T14:07:39.296299399Z" level=info msg="CreateContainer within sandbox \"566cf27ffaf1a4aca0ffac35e1145861a1602711376de9f85452f93746924b2f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:07:39.305911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1006667879.mount: Deactivated successfully. Dec 13 14:07:39.307301 env[1212]: time="2024-12-13T14:07:39.307256320Z" level=info msg="CreateContainer within sandbox \"566cf27ffaf1a4aca0ffac35e1145861a1602711376de9f85452f93746924b2f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5b94ed068c72184c4cca0cb522fb93aaeaf1ea6f8cd507d0900ac77cb61a8036\"" Dec 13 14:07:39.307887 env[1212]: time="2024-12-13T14:07:39.307863003Z" level=info msg="StartContainer for \"5b94ed068c72184c4cca0cb522fb93aaeaf1ea6f8cd507d0900ac77cb61a8036\"" Dec 13 14:07:39.351617 systemd[1]: Started cri-containerd-5b94ed068c72184c4cca0cb522fb93aaeaf1ea6f8cd507d0900ac77cb61a8036.scope. Dec 13 14:07:39.379193 env[1212]: time="2024-12-13T14:07:39.379139714Z" level=info msg="StartContainer for \"5b94ed068c72184c4cca0cb522fb93aaeaf1ea6f8cd507d0900ac77cb61a8036\" returns successfully" Dec 13 14:07:39.380406 systemd[1]: cri-containerd-5b94ed068c72184c4cca0cb522fb93aaeaf1ea6f8cd507d0900ac77cb61a8036.scope: Deactivated successfully. Dec 13 14:07:39.399825 env[1212]: time="2024-12-13T14:07:39.399781952Z" level=info msg="shim disconnected" id=5b94ed068c72184c4cca0cb522fb93aaeaf1ea6f8cd507d0900ac77cb61a8036 Dec 13 14:07:39.399992 env[1212]: time="2024-12-13T14:07:39.399828193Z" level=warning msg="cleaning up after shim disconnected" id=5b94ed068c72184c4cca0cb522fb93aaeaf1ea6f8cd507d0900ac77cb61a8036 namespace=k8s.io Dec 13 14:07:39.399992 env[1212]: time="2024-12-13T14:07:39.399838873Z" level=info msg="cleaning up dead shim" Dec 13 14:07:39.405985 env[1212]: time="2024-12-13T14:07:39.405950936Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:07:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4104 runtime=io.containerd.runc.v2\n" Dec 13 14:07:39.840151 systemd[1]: run-containerd-runc-k8s.io-5b94ed068c72184c4cca0cb522fb93aaeaf1ea6f8cd507d0900ac77cb61a8036-runc.Ymhv66.mount: Deactivated successfully. Dec 13 14:07:39.840274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b94ed068c72184c4cca0cb522fb93aaeaf1ea6f8cd507d0900ac77cb61a8036-rootfs.mount: Deactivated successfully. 
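The mount unit names in these entries (for example "containerd\x2dmount1006667879.mount" and "var-lib-kubelet-pods-24f37a67\x2dde22\x2d...") are systemd's escaped form of filesystem paths: "/" becomes "-", while "-", "~" and other characters outside [A-Za-z0-9:_.] become \xNN hex escapes. A minimal sketch of that escaping rule follows (roughly what `systemd-escape --path --suffix=mount` produces; leading-dot and empty-path edge cases are simplified, and the sample path is reconstructed from the pod UID and volume names seen earlier in this log).

// Illustrative sketch of systemd path-to-unit-name escaping as seen in the
// journal above ("/" -> "-", "-" -> \x2d, "~" -> \x7e, etc.).
package main

import (
	"fmt"
	"strings"
)

func isValid(c byte) bool {
	return c >= 'a' && c <= 'z' || c >= 'A' && c <= 'Z' ||
		c >= '0' && c <= '9' || c == ':' || c == '_' || c == '.'
}

// escapePath turns an absolute path into the body of a systemd unit name.
func escapePath(path string) string {
	p := strings.Trim(path, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-') // path separators become dashes
		case isValid(c) && c != '.' || (c == '.' && i > 0):
			b.WriteByte(c) // allowed characters pass through (leading '.' is escaped)
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // everything else, including '-', is hex-escaped
		}
	}
	return b.String()
}

func main() {
	p := "/var/lib/kubelet/pods/24f37a67-de22-442a-9d49-5de5a4887f94/volumes/kubernetes.io~projected/hubble-tls"
	fmt.Println(escapePath(p) + ".mount")
	// Matches the unit name logged above:
	// var-lib-kubelet-pods-24f37a67\x2dde22\x2d442a\x2d9d49\x2d5de5a4887f94-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount
}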
Dec 13 14:07:40.145197 kubelet[2012]: E1213 14:07:40.145082 2012 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:07:40.296196 kubelet[2012]: E1213 14:07:40.296145 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:40.298577 env[1212]: time="2024-12-13T14:07:40.298536384Z" level=info msg="CreateContainer within sandbox \"566cf27ffaf1a4aca0ffac35e1145861a1602711376de9f85452f93746924b2f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:07:40.309749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4283724321.mount: Deactivated successfully. Dec 13 14:07:40.312960 env[1212]: time="2024-12-13T14:07:40.312913517Z" level=info msg="CreateContainer within sandbox \"566cf27ffaf1a4aca0ffac35e1145861a1602711376de9f85452f93746924b2f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"119882ca174f66c996e06195a8f97ae08f87804fa93ae0539ff7c2c281cd7cdb\"" Dec 13 14:07:40.313924 env[1212]: time="2024-12-13T14:07:40.313895601Z" level=info msg="StartContainer for \"119882ca174f66c996e06195a8f97ae08f87804fa93ae0539ff7c2c281cd7cdb\"" Dec 13 14:07:40.338064 systemd[1]: Started cri-containerd-119882ca174f66c996e06195a8f97ae08f87804fa93ae0539ff7c2c281cd7cdb.scope. Dec 13 14:07:40.371874 env[1212]: time="2024-12-13T14:07:40.371727776Z" level=info msg="StartContainer for \"119882ca174f66c996e06195a8f97ae08f87804fa93ae0539ff7c2c281cd7cdb\" returns successfully" Dec 13 14:07:40.637201 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Dec 13 14:07:41.300801 kubelet[2012]: E1213 14:07:41.300764 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:42.625821 kubelet[2012]: E1213 14:07:42.625782 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:43.155705 systemd[1]: run-containerd-runc-k8s.io-119882ca174f66c996e06195a8f97ae08f87804fa93ae0539ff7c2c281cd7cdb-runc.HHZppW.mount: Deactivated successfully. 
Dec 13 14:07:43.368526 systemd-networkd[1032]: lxc_health: Link UP Dec 13 14:07:43.377197 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:07:43.377162 systemd-networkd[1032]: lxc_health: Gained carrier Dec 13 14:07:44.626406 kubelet[2012]: E1213 14:07:44.626371 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:44.640579 kubelet[2012]: I1213 14:07:44.640540 2012 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-pt7xg" podStartSLOduration=8.640501827 podStartE2EDuration="8.640501827s" podCreationTimestamp="2024-12-13 14:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:07:41.314167801 +0000 UTC m=+86.375282424" watchObservedRunningTime="2024-12-13 14:07:44.640501827 +0000 UTC m=+89.701616410" Dec 13 14:07:44.836307 systemd-networkd[1032]: lxc_health: Gained IPv6LL Dec 13 14:07:45.307578 kubelet[2012]: E1213 14:07:45.307543 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:47.097192 kubelet[2012]: E1213 14:07:47.097135 2012 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:07:47.397162 systemd[1]: run-containerd-runc-k8s.io-119882ca174f66c996e06195a8f97ae08f87804fa93ae0539ff7c2c281cd7cdb-runc.IMSF2c.mount: Deactivated successfully. Dec 13 14:07:49.569496 sshd[3817]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:49.571778 systemd[1]: sshd@24-10.0.0.68:22-10.0.0.1:40128.service: Deactivated successfully. Dec 13 14:07:49.572615 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 14:07:49.573137 systemd-logind[1199]: Session 25 logged out. Waiting for processes to exit. Dec 13 14:07:49.573750 systemd-logind[1199]: Removed session 25.