Dec 13 14:05:23.096863 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 14:05:23.096886 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Dec 13 12:58:58 -00 2024
Dec 13 14:05:23.096893 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Dec 13 14:05:23.096900 kernel: printk: bootconsole [pl11] enabled
Dec 13 14:05:23.096905 kernel: efi: EFI v2.70 by EDK II
Dec 13 14:05:23.096911 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3763cf98
Dec 13 14:05:23.096917 kernel: random: crng init done
Dec 13 14:05:23.096923 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:05:23.096928 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Dec 13 14:05:23.096934 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:05:23.096939 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:05:23.096945 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Dec 13 14:05:23.096952 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:05:23.096957 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:05:23.096964 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:05:23.096970 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:05:23.096975 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:05:23.096982 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:05:23.096988 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Dec 13 14:05:23.096994 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:05:23.096999 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Dec 13 14:05:23.097005 kernel: NUMA: Failed to initialise from firmware
Dec 13 14:05:23.097011 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Dec 13 14:05:23.097016 kernel: NUMA: NODE_DATA [mem 0x1bf7f4900-0x1bf7f9fff]
Dec 13 14:05:23.097022 kernel: Zone ranges:
Dec 13 14:05:23.097027 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Dec 13 14:05:23.097033 kernel: DMA32 empty
Dec 13 14:05:23.097039 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Dec 13 14:05:23.097045 kernel: Movable zone start for each node
Dec 13 14:05:23.097051 kernel: Early memory node ranges
Dec 13 14:05:23.097057 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Dec 13 14:05:23.097062 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Dec 13 14:05:23.097068 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Dec 13 14:05:23.097074 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Dec 13 14:05:23.097079 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Dec 13 14:05:23.097085 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Dec 13 14:05:23.097091 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Dec 13 14:05:23.097096 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Dec 13 14:05:23.097102 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Dec 13 14:05:23.097108 kernel: psci: probing for conduit method from ACPI.
Dec 13 14:05:23.097117 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 14:05:23.097123 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 14:05:23.097129 kernel: psci: MIGRATE_INFO_TYPE not supported.
Dec 13 14:05:23.097135 kernel: psci: SMC Calling Convention v1.4
Dec 13 14:05:23.097141 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Dec 13 14:05:23.097149 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Dec 13 14:05:23.097155 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Dec 13 14:05:23.097160 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Dec 13 14:05:23.097167 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 14:05:23.097173 kernel: Detected PIPT I-cache on CPU0
Dec 13 14:05:23.097179 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 14:05:23.097185 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 14:05:23.097191 kernel: CPU features: detected: Spectre-BHB
Dec 13 14:05:23.097197 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 14:05:23.097203 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 14:05:23.097210 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 14:05:23.097217 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Dec 13 14:05:23.097223 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 14:05:23.097229 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Dec 13 14:05:23.097235 kernel: Policy zone: Normal
Dec 13 14:05:23.097243 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:05:23.097250 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:05:23.097256 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:05:23.097262 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:05:23.097268 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:05:23.097274 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB)
Dec 13 14:05:23.097281 kernel: Memory: 3986948K/4194160K available (9792K kernel code, 2092K rwdata, 7576K rodata, 36416K init, 777K bss, 207212K reserved, 0K cma-reserved)
Dec 13 14:05:23.097288 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:05:23.097294 kernel: trace event string verifier disabled
Dec 13 14:05:23.097312 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 14:05:23.097321 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:05:23.097327 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:05:23.097334 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 14:05:23.097340 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:05:23.097346 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:05:23.097352 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:05:23.097358 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 14:05:23.097364 kernel: GICv3: 960 SPIs implemented
Dec 13 14:05:23.097372 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 14:05:23.097378 kernel: GICv3: Distributor has no Range Selector support
Dec 13 14:05:23.097384 kernel: Root IRQ handler: gic_handle_irq
Dec 13 14:05:23.097390 kernel: GICv3: 16 PPIs implemented
Dec 13 14:05:23.097396 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Dec 13 14:05:23.097402 kernel: ITS: No ITS available, not enabling LPIs
Dec 13 14:05:23.097408 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:05:23.097414 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 14:05:23.097420 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 14:05:23.097426 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 14:05:23.097432 kernel: Console: colour dummy device 80x25
Dec 13 14:05:23.097440 kernel: printk: console [tty1] enabled
Dec 13 14:05:23.097447 kernel: ACPI: Core revision 20210730
Dec 13 14:05:23.097453 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 14:05:23.097459 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:05:23.097465 kernel: LSM: Security Framework initializing
Dec 13 14:05:23.097472 kernel: SELinux: Initializing.
Dec 13 14:05:23.097478 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:05:23.097484 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:05:23.097490 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Dec 13 14:05:23.097498 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Dec 13 14:05:23.097504 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:05:23.097510 kernel: Remapping and enabling EFI services.
Dec 13 14:05:23.097516 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:05:23.097522 kernel: Detected PIPT I-cache on CPU1
Dec 13 14:05:23.097528 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Dec 13 14:05:23.097535 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:05:23.097541 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 14:05:23.097548 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:05:23.097554 kernel: SMP: Total of 2 processors activated.
Dec 13 14:05:23.097561 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 14:05:23.097568 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Dec 13 14:05:23.097574 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 14:05:23.097581 kernel: CPU features: detected: CRC32 instructions
Dec 13 14:05:23.097587 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 14:05:23.097593 kernel: CPU features: detected: LSE atomic instructions
Dec 13 14:05:23.097599 kernel: CPU features: detected: Privileged Access Never
Dec 13 14:05:23.097606 kernel: CPU: All CPU(s) started at EL1
Dec 13 14:05:23.097612 kernel: alternatives: patching kernel code
Dec 13 14:05:23.097620 kernel: devtmpfs: initialized
Dec 13 14:05:23.097631 kernel: KASLR enabled
Dec 13 14:05:23.097638 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:05:23.097646 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:05:23.097652 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:05:23.097659 kernel: SMBIOS 3.1.0 present.
Dec 13 14:05:23.097666 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Dec 13 14:05:23.097672 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:05:23.097679 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 14:05:23.097687 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 14:05:23.097694 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 14:05:23.097701 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:05:23.097708 kernel: audit: type=2000 audit(0.090:1): state=initialized audit_enabled=0 res=1
Dec 13 14:05:23.097714 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:05:23.097721 kernel: cpuidle: using governor menu
Dec 13 14:05:23.097727 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 14:05:23.097735 kernel: ASID allocator initialised with 32768 entries
Dec 13 14:05:23.097742 kernel: ACPI: bus type PCI registered
Dec 13 14:05:23.097748 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:05:23.097755 kernel: Serial: AMBA PL011 UART driver
Dec 13 14:05:23.097762 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:05:23.097768 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 14:05:23.097775 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:05:23.097782 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 14:05:23.097788 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:05:23.097796 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 14:05:23.097803 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:05:23.097810 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:05:23.097816 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:05:23.097823 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:05:23.097829 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:05:23.097836 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:05:23.097842 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:05:23.097849 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:05:23.097857 kernel: ACPI: Interpreter enabled
Dec 13 14:05:23.097863 kernel: ACPI: Using GIC for interrupt routing
Dec 13 14:05:23.097870 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 14:05:23.097876 kernel: printk: console [ttyAMA0] enabled
Dec 13 14:05:23.097883 kernel: printk: bootconsole [pl11] disabled
Dec 13 14:05:23.097890 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Dec 13 14:05:23.097896 kernel: iommu: Default domain type: Translated
Dec 13 14:05:23.097903 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 14:05:23.097910 kernel: vgaarb: loaded
Dec 13 14:05:23.097916 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:05:23.097924 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:05:23.097931 kernel: PTP clock support registered
Dec 13 14:05:23.097937 kernel: Registered efivars operations
Dec 13 14:05:23.097944 kernel: No ACPI PMU IRQ for CPU0
Dec 13 14:05:23.097950 kernel: No ACPI PMU IRQ for CPU1
Dec 13 14:05:23.097957 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 14:05:23.097964 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:05:23.097971 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:05:23.097979 kernel: pnp: PnP ACPI init
Dec 13 14:05:23.097985 kernel: pnp: PnP ACPI: found 0 devices
Dec 13 14:05:23.097992 kernel: NET: Registered PF_INET protocol family
Dec 13 14:05:23.097998 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:05:23.098005 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 14:05:23.098012 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:05:23.098019 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:05:23.098025 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Dec 13 14:05:23.098032 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 14:05:23.098040 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:05:23.098047 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:05:23.098053 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:05:23.098060 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:05:23.098066 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Dec 13 14:05:23.098073 kernel: kvm [1]: HYP mode not available
Dec 13 14:05:23.098079 kernel: Initialise system trusted keyrings
Dec 13 14:05:23.098086 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 14:05:23.098092 kernel: Key type asymmetric registered
Dec 13 14:05:23.098100 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:05:23.098107 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:05:23.098114 kernel: io scheduler mq-deadline registered
Dec 13 14:05:23.098120 kernel: io scheduler kyber registered
Dec 13 14:05:23.098127 kernel: io scheduler bfq registered
Dec 13 14:05:23.098134 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:05:23.098140 kernel: thunder_xcv, ver 1.0
Dec 13 14:05:23.098147 kernel: thunder_bgx, ver 1.0
Dec 13 14:05:23.098153 kernel: nicpf, ver 1.0
Dec 13 14:05:23.098160 kernel: nicvf, ver 1.0
Dec 13 14:05:23.098280 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 14:05:23.098355 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T14:05:22 UTC (1734098722)
Dec 13 14:05:23.098366 kernel: efifb: probing for efifb
Dec 13 14:05:23.098373 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 13 14:05:23.098379 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 13 14:05:23.098386 kernel: efifb: scrolling: redraw
Dec 13 14:05:23.098393 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 13 14:05:23.098402 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 14:05:23.098409 kernel: fb0: EFI VGA frame buffer device
Dec 13 14:05:23.098416 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Dec 13 14:05:23.098422 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 14:05:23.098429 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:05:23.098435 kernel: Segment Routing with IPv6
Dec 13 14:05:23.098442 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:05:23.098448 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:05:23.098455 kernel: Key type dns_resolver registered
Dec 13 14:05:23.098461 kernel: registered taskstats version 1
Dec 13 14:05:23.098469 kernel: Loading compiled-in X.509 certificates
Dec 13 14:05:23.098476 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e011ba9949ade5a6d03f7a5e28171f7f59e70f8a'
Dec 13 14:05:23.098483 kernel: Key type .fscrypt registered
Dec 13 14:05:23.098489 kernel: Key type fscrypt-provisioning registered
Dec 13 14:05:23.098496 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:05:23.098503 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:05:23.098509 kernel: ima: No architecture policies found
Dec 13 14:05:23.098516 kernel: clk: Disabling unused clocks
Dec 13 14:05:23.098524 kernel: Freeing unused kernel memory: 36416K
Dec 13 14:05:23.098531 kernel: Run /init as init process
Dec 13 14:05:23.098537 kernel: with arguments:
Dec 13 14:05:23.098544 kernel: /init
Dec 13 14:05:23.098550 kernel: with environment:
Dec 13 14:05:23.098557 kernel: HOME=/
Dec 13 14:05:23.098563 kernel: TERM=linux
Dec 13 14:05:23.098569 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:05:23.098578 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:05:23.098589 systemd[1]: Detected virtualization microsoft.
Dec 13 14:05:23.098596 systemd[1]: Detected architecture arm64.
Dec 13 14:05:23.098603 systemd[1]: Running in initrd.
Dec 13 14:05:23.098610 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:05:23.098617 systemd[1]: Hostname set to .
Dec 13 14:05:23.098625 systemd[1]: Initializing machine ID from random generator.
Dec 13 14:05:23.098632 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:05:23.098640 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:05:23.098647 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:05:23.098654 systemd[1]: Reached target paths.target.
Dec 13 14:05:23.098661 systemd[1]: Reached target slices.target.
Dec 13 14:05:23.098668 systemd[1]: Reached target swap.target.
Dec 13 14:05:23.098675 systemd[1]: Reached target timers.target.
Dec 13 14:05:23.098682 systemd[1]: Listening on iscsid.socket.
Dec 13 14:05:23.098690 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:05:23.098698 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:05:23.098705 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:05:23.098712 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:05:23.098719 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:05:23.098727 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:05:23.098734 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:05:23.098741 systemd[1]: Reached target sockets.target.
Dec 13 14:05:23.098748 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:05:23.098755 systemd[1]: Finished network-cleanup.service.
Dec 13 14:05:23.098763 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:05:23.098771 systemd[1]: Starting systemd-journald.service...
Dec 13 14:05:23.098778 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:05:23.098785 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:05:23.098797 systemd-journald[276]: Journal started
Dec 13 14:05:23.098837 systemd-journald[276]: Runtime Journal (/run/log/journal/379cd664dcee47a8a7f4250b37bcbb1c) is 8.0M, max 78.5M, 70.5M free.
Dec 13 14:05:23.089394 systemd-modules-load[277]: Inserted module 'overlay'
Dec 13 14:05:23.120904 systemd-resolved[278]: Positive Trust Anchors:
Dec 13 14:05:23.154230 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:05:23.154255 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:05:23.154266 systemd[1]: Started systemd-journald.service.
Dec 13 14:05:23.154275 kernel: Bridge firewalling registered
Dec 13 14:05:23.120914 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:05:23.209585 kernel: audit: type=1130 audit(1734098723.171:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.209611 kernel: SCSI subsystem initialized
Dec 13 14:05:23.209620 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:05:23.209628 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:05:23.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.120942 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:05:23.273746 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:05:23.273770 kernel: audit: type=1130 audit(1734098723.213:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.127761 systemd-resolved[278]: Defaulting to hostname 'linux'.
Dec 13 14:05:23.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.158193 systemd-modules-load[277]: Inserted module 'br_netfilter'
Dec 13 14:05:23.172114 systemd[1]: Started systemd-resolved.service.
Dec 13 14:05:23.339361 kernel: audit: type=1130 audit(1734098723.278:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.339392 kernel: audit: type=1130 audit(1734098723.312:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.238185 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:05:23.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.273104 systemd-modules-load[277]: Inserted module 'dm_multipath'
Dec 13 14:05:23.394422 kernel: audit: type=1130 audit(1734098723.339:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.394448 kernel: audit: type=1130 audit(1734098723.366:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.302915 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:05:23.312857 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:05:23.339751 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:05:23.367278 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:05:23.395042 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:05:23.408896 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:05:23.433743 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:05:23.445769 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:05:23.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.483335 kernel: audit: type=1130 audit(1734098723.450:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.451403 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:05:23.486558 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:05:23.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.515723 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:05:23.548056 kernel: audit: type=1130 audit(1734098723.485:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.548077 kernel: audit: type=1130 audit(1734098723.514:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.548957 dracut-cmdline[298]: dracut-dracut-053
Dec 13 14:05:23.553735 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:05:23.634325 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:05:23.653341 kernel: iscsi: registered transport (tcp)
Dec 13 14:05:23.675130 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:05:23.675193 kernel: QLogic iSCSI HBA Driver
Dec 13 14:05:23.705735 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:05:23.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.711401 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:05:23.768321 kernel: raid6: neonx8 gen() 13831 MB/s
Dec 13 14:05:23.786313 kernel: raid6: neonx8 xor() 10839 MB/s
Dec 13 14:05:23.807312 kernel: raid6: neonx4 gen() 13561 MB/s
Dec 13 14:05:23.829317 kernel: raid6: neonx4 xor() 11306 MB/s
Dec 13 14:05:23.849312 kernel: raid6: neonx2 gen() 13004 MB/s
Dec 13 14:05:23.869314 kernel: raid6: neonx2 xor() 10425 MB/s
Dec 13 14:05:23.891312 kernel: raid6: neonx1 gen() 10555 MB/s
Dec 13 14:05:23.912311 kernel: raid6: neonx1 xor() 8800 MB/s
Dec 13 14:05:23.933311 kernel: raid6: int64x8 gen() 6272 MB/s
Dec 13 14:05:23.954311 kernel: raid6: int64x8 xor() 3543 MB/s
Dec 13 14:05:23.974311 kernel: raid6: int64x4 gen() 7236 MB/s
Dec 13 14:05:23.994311 kernel: raid6: int64x4 xor() 3858 MB/s
Dec 13 14:05:24.016312 kernel: raid6: int64x2 gen() 6149 MB/s
Dec 13 14:05:24.036311 kernel: raid6: int64x2 xor() 3320 MB/s
Dec 13 14:05:24.057311 kernel: raid6: int64x1 gen() 5043 MB/s
Dec 13 14:05:24.082150 kernel: raid6: int64x1 xor() 2646 MB/s
Dec 13 14:05:24.082160 kernel: raid6: using algorithm neonx8 gen() 13831 MB/s
Dec 13 14:05:24.082168 kernel: raid6: .... xor() 10839 MB/s, rmw enabled
Dec 13 14:05:24.086559 kernel: raid6: using neon recovery algorithm
Dec 13 14:05:24.107678 kernel: xor: measuring software checksum speed
Dec 13 14:05:24.107691 kernel: 8regs : 17184 MB/sec
Dec 13 14:05:24.113090 kernel: 32regs : 20707 MB/sec
Dec 13 14:05:24.116993 kernel: arm64_neon : 27946 MB/sec
Dec 13 14:05:24.117003 kernel: xor: using function: arm64_neon (27946 MB/sec)
Dec 13 14:05:24.178318 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Dec 13 14:05:24.187599 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:05:24.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:24.196000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:05:24.196000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:05:24.197076 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:05:24.212480 systemd-udevd[474]: Using default interface naming scheme 'v252'.
Dec 13 14:05:24.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:24.218363 systemd[1]: Started systemd-udevd.service.
Dec 13 14:05:24.229340 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:05:24.246720 dracut-pre-trigger[495]: rd.md=0: removing MD RAID activation
Dec 13 14:05:24.273837 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:05:24.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:24.279881 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:05:24.315401 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:05:24.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:24.359325 kernel: hv_vmbus: Vmbus version:5.3
Dec 13 14:05:24.370334 kernel: hv_vmbus: registering driver hid_hyperv
Dec 13 14:05:24.387060 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Dec 13 14:05:24.387116 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Dec 13 14:05:24.392957 kernel: hv_vmbus: registering driver hyperv_keyboard
Dec 13 14:05:24.406009 kernel: hv_vmbus: registering driver hv_netvsc
Dec 13 14:05:24.406074 kernel: hv_vmbus: registering driver hv_storvsc
Dec 13 14:05:24.422402 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Dec 13 14:05:24.423328 kernel: scsi host0: storvsc_host_t
Dec 13 14:05:24.432333 kernel: scsi host1: storvsc_host_t
Dec 13 14:05:24.432504 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Dec 13 14:05:24.446321 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Dec 13 14:05:24.466755 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Dec 13 14:05:24.467820 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 14:05:24.467834 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Dec 13 14:05:24.476031 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Dec 13 14:05:24.510779 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Dec 13 14:05:24.510884 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 13 14:05:24.510961 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Dec 13 14:05:24.511036 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Dec 13 14:05:24.511131 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:05:24.511142 kernel: hv_netvsc 000d3af5-17d8-000d-3af5-17d8000d3af5 eth0: VF slot 1 added
Dec 13 14:05:24.511230 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 13 14:05:24.520653 kernel: hv_vmbus: registering driver hv_pci
Dec 13 14:05:24.534558 kernel: hv_pci 39f8ad7b-4223-4f47-9657-afb61c809934: PCI VMBus probing: Using version 0x10004
Dec 13 14:05:24.644392 kernel: hv_pci 39f8ad7b-4223-4f47-9657-afb61c809934: PCI host bridge to bus 4223:00
Dec 13 14:05:24.644540 kernel: pci_bus 4223:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Dec 13 14:05:24.644634 kernel: pci_bus 4223:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 13 14:05:24.644704 kernel: pci 4223:00:02.0: [15b3:1018] type 00 class 0x020000
Dec 13 14:05:24.644792 kernel: pci 4223:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Dec 13 14:05:24.644866 kernel: pci 4223:00:02.0: enabling Extended Tags
Dec 13 14:05:24.644939 kernel: pci 4223:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 4223:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Dec 13 14:05:24.645012 kernel: pci_bus 4223:00: busn_res: [bus 00-ff] end is updated to 00
Dec 13 14:05:24.645081 kernel: pci 4223:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Dec 13 14:05:24.683322 kernel: mlx5_core 4223:00:02.0: firmware version: 16.30.1284
Dec 13 14:05:24.901598 kernel: mlx5_core 4223:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
Dec 13 14:05:24.901735 kernel: hv_netvsc 000d3af5-17d8-000d-3af5-17d8000d3af5 eth0: VF registering: eth1
Dec 13 14:05:24.901823 kernel: mlx5_core 4223:00:02.0 eth1: joined to eth0
Dec 13 14:05:24.797861 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:05:24.922659 kernel: mlx5_core 4223:00:02.0 enP16931s1: renamed from eth1
Dec 13 14:05:24.922862 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (536)
Dec 13 14:05:24.938992 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:05:25.111546 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:05:25.195421 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:05:25.202257 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:05:25.217039 systemd[1]: Starting disk-uuid.service... Dec 13 14:05:25.247454 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:05:25.256330 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:05:26.266323 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:05:26.266601 disk-uuid[602]: The operation has completed successfully. Dec 13 14:05:26.327288 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:05:26.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:26.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:26.327396 systemd[1]: Finished disk-uuid.service. Dec 13 14:05:26.336599 systemd[1]: Starting verity-setup.service... Dec 13 14:05:26.377333 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 14:05:26.728511 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:05:26.735132 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:05:26.746538 systemd[1]: Finished verity-setup.service. Dec 13 14:05:26.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:26.804221 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:05:26.812890 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
Dec 13 14:05:26.809079 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:05:26.809892 systemd[1]: Starting ignition-setup.service... Dec 13 14:05:26.818238 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:05:26.860802 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:05:26.860860 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:05:26.865809 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:05:26.906625 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:05:26.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:26.916000 audit: BPF prog-id=9 op=LOAD Dec 13 14:05:26.917719 systemd[1]: Starting systemd-networkd.service... Dec 13 14:05:26.943609 systemd-networkd[840]: lo: Link UP Dec 13 14:05:26.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:26.943617 systemd-networkd[840]: lo: Gained carrier Dec 13 14:05:26.985249 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 13 14:05:26.985277 kernel: audit: type=1130 audit(1734098726.950:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:26.943994 systemd-networkd[840]: Enumeration completed Dec 13 14:05:26.944353 systemd[1]: Started systemd-networkd.service. Dec 13 14:05:26.949842 systemd-networkd[840]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 14:05:27.027614 kernel: audit: type=1130 audit(1734098727.003:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:27.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:26.959465 systemd[1]: Reached target network.target. Dec 13 14:05:26.982255 systemd[1]: Starting iscsiuio.service... Dec 13 14:05:27.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:26.997360 systemd[1]: Started iscsiuio.service. Dec 13 14:05:27.074598 kernel: audit: type=1130 audit(1734098727.041:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:27.074624 iscsid[851]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:05:27.074624 iscsid[851]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 14:05:27.074624 iscsid[851]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:05:27.074624 iscsid[851]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:05:27.074624 iscsid[851]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:05:27.074624 iscsid[851]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:05:27.074624 iscsid[851]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:05:27.186860 kernel: audit: type=1130 audit(1734098727.122:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:27.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:27.027491 systemd[1]: Starting iscsid.service... Dec 13 14:05:27.035786 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:05:27.036131 systemd[1]: Started iscsid.service. Dec 13 14:05:27.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:27.061277 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:05:27.228879 kernel: audit: type=1130 audit(1734098727.202:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:27.102606 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:05:27.123491 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:05:27.242733 kernel: mlx5_core 4223:00:02.0 enP16931s1: Link up Dec 13 14:05:27.150138 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:05:27.161891 systemd[1]: Reached target remote-fs.target. Dec 13 14:05:27.176073 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:05:27.197639 systemd[1]: Finished dracut-pre-mount.service. 
Dec 13 14:05:27.288918 kernel: hv_netvsc 000d3af5-17d8-000d-3af5-17d8000d3af5 eth0: Data path switched to VF: enP16931s1 Dec 13 14:05:27.289555 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:05:27.289088 systemd-networkd[840]: enP16931s1: Link UP Dec 13 14:05:27.289212 systemd-networkd[840]: eth0: Link UP Dec 13 14:05:27.289375 systemd-networkd[840]: eth0: Gained carrier Dec 13 14:05:27.301498 systemd-networkd[840]: enP16931s1: Gained carrier Dec 13 14:05:27.314369 systemd-networkd[840]: eth0: DHCPv4 address 10.200.20.32/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 14:05:27.367183 systemd[1]: Finished ignition-setup.service. Dec 13 14:05:27.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:27.397514 kernel: audit: type=1130 audit(1734098727.371:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:27.392354 systemd[1]: Starting ignition-fetch-offline.service... 
Dec 13 14:05:28.737402 systemd-networkd[840]: eth0: Gained IPv6LL Dec 13 14:05:32.549237 ignition[867]: Ignition 2.14.0 Dec 13 14:05:32.549249 ignition[867]: Stage: fetch-offline Dec 13 14:05:32.549327 ignition[867]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:05:32.549352 ignition[867]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:05:32.648577 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:05:32.648728 ignition[867]: parsed url from cmdline: "" Dec 13 14:05:32.648733 ignition[867]: no config URL provided Dec 13 14:05:32.648738 ignition[867]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:05:32.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:32.661179 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:05:32.700432 kernel: audit: type=1130 audit(1734098732.667:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:32.648746 ignition[867]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:05:32.668522 systemd[1]: Starting ignition-fetch.service... 
Dec 13 14:05:32.648751 ignition[867]: failed to fetch config: resource requires networking Dec 13 14:05:32.655218 ignition[867]: Ignition finished successfully Dec 13 14:05:32.687599 ignition[873]: Ignition 2.14.0 Dec 13 14:05:32.687605 ignition[873]: Stage: fetch Dec 13 14:05:32.687716 ignition[873]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:05:32.687739 ignition[873]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:05:32.690710 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:05:32.699573 ignition[873]: parsed url from cmdline: "" Dec 13 14:05:32.699582 ignition[873]: no config URL provided Dec 13 14:05:32.699592 ignition[873]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:05:32.699604 ignition[873]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:05:32.699645 ignition[873]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 14:05:32.813129 ignition[873]: GET result: OK Dec 13 14:05:32.813215 ignition[873]: config has been read from IMDS userdata Dec 13 14:05:32.817024 unknown[873]: fetched base config from "system" Dec 13 14:05:32.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:32.813265 ignition[873]: parsing config with SHA512: a267d8d882d018f8725f612dc2033c6d580f9061032f84d67e380f4d1693c66458585579608b418efff93c10a006fafa8c3945dfc64a32110d50460fac1c2f0f Dec 13 14:05:32.817032 unknown[873]: fetched base config from "system" Dec 13 14:05:32.856896 kernel: audit: type=1130 audit(1734098732.828:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 13 14:05:32.817644 ignition[873]: fetch: fetch complete Dec 13 14:05:32.817037 unknown[873]: fetched user config from "azure" Dec 13 14:05:32.817649 ignition[873]: fetch: fetch passed Dec 13 14:05:32.824009 systemd[1]: Finished ignition-fetch.service. Dec 13 14:05:32.817705 ignition[873]: Ignition finished successfully Dec 13 14:05:32.851002 systemd[1]: Starting ignition-kargs.service... Dec 13 14:05:32.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:32.865581 ignition[879]: Ignition 2.14.0 Dec 13 14:05:32.879462 systemd[1]: Finished ignition-kargs.service. Dec 13 14:05:32.865587 ignition[879]: Stage: kargs Dec 13 14:05:32.955364 kernel: audit: type=1130 audit(1734098732.887:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:32.955388 kernel: audit: type=1130 audit(1734098732.931:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:32.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:32.889323 systemd[1]: Starting ignition-disks.service... Dec 13 14:05:32.865700 ignition[879]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:05:32.923219 systemd[1]: Finished ignition-disks.service. 
Dec 13 14:05:32.865718 ignition[879]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:05:32.931485 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:05:32.874044 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:05:32.957375 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:05:32.875656 ignition[879]: kargs: kargs passed Dec 13 14:05:32.968763 systemd[1]: Reached target local-fs.target. Dec 13 14:05:32.875710 ignition[879]: Ignition finished successfully Dec 13 14:05:32.981711 systemd[1]: Reached target sysinit.target. Dec 13 14:05:32.899662 ignition[885]: Ignition 2.14.0 Dec 13 14:05:32.991946 systemd[1]: Reached target basic.target. Dec 13 14:05:32.899668 ignition[885]: Stage: disks Dec 13 14:05:33.003073 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:05:32.899776 ignition[885]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:05:32.899793 ignition[885]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:05:32.902410 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:05:32.922210 ignition[885]: disks: disks passed Dec 13 14:05:33.068906 systemd-fsck[893]: ROOT: clean, 621/7326000 files, 481076/7359488 blocks Dec 13 14:05:33.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:33.068826 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:05:33.107524 kernel: audit: type=1130 audit(1734098733.073:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 14:05:32.922279 ignition[885]: Ignition finished successfully Dec 13 14:05:33.075190 systemd[1]: Mounting sysroot.mount... Dec 13 14:05:33.129346 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:05:33.129677 systemd[1]: Mounted sysroot.mount. Dec 13 14:05:33.137206 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:05:33.215396 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:05:33.220419 systemd[1]: Starting flatcar-metadata-hostname.service... Dec 13 14:05:33.233347 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:05:33.233395 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:05:33.249545 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:05:33.307124 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:05:33.312755 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:05:33.335447 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (904) Dec 13 14:05:33.347582 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:05:33.347631 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:05:33.352297 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:05:33.352868 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:05:33.364591 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:05:33.389091 initrd-setup-root[935]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:05:33.422970 initrd-setup-root[943]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:05:33.434067 initrd-setup-root[951]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:05:34.308651 systemd[1]: Finished initrd-setup-root.service. 
Dec 13 14:05:34.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:34.333784 systemd[1]: Starting ignition-mount.service... Dec 13 14:05:34.344264 kernel: audit: type=1130 audit(1734098734.313:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:34.343485 systemd[1]: Starting sysroot-boot.service... Dec 13 14:05:34.353362 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 14:05:34.353517 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 14:05:34.373862 systemd[1]: Finished sysroot-boot.service. Dec 13 14:05:34.398626 kernel: audit: type=1130 audit(1734098734.378:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:34.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:05:34.405684 ignition[973]: INFO : Ignition 2.14.0 Dec 13 14:05:34.405684 ignition[973]: INFO : Stage: mount Dec 13 14:05:34.417377 ignition[973]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:05:34.417377 ignition[973]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:05:34.417377 ignition[973]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:05:34.417377 ignition[973]: INFO : mount: mount passed Dec 13 14:05:34.417377 ignition[973]: INFO : Ignition finished successfully Dec 13 14:05:34.472581 kernel: audit: type=1130 audit(1734098734.419:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:34.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:34.414951 systemd[1]: Finished ignition-mount.service. Dec 13 14:05:36.045750 coreos-metadata[903]: Dec 13 14:05:36.045 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 14:05:36.055570 coreos-metadata[903]: Dec 13 14:05:36.055 INFO Fetch successful Dec 13 14:05:36.092702 coreos-metadata[903]: Dec 13 14:05:36.092 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 14:05:36.116865 coreos-metadata[903]: Dec 13 14:05:36.116 INFO Fetch successful Dec 13 14:05:36.132672 coreos-metadata[903]: Dec 13 14:05:36.132 INFO wrote hostname ci-3510.3.6-a-fa37d69d59 to /sysroot/etc/hostname Dec 13 14:05:36.134890 systemd[1]: Finished flatcar-metadata-hostname.service. 
Dec 13 14:05:36.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:36.170591 systemd[1]: Starting ignition-files.service... Dec 13 14:05:36.180683 kernel: audit: type=1130 audit(1734098736.147:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:36.181558 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:05:36.210271 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (983) Dec 13 14:05:36.210356 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:05:36.210375 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:05:36.222685 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:05:36.227326 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 14:05:36.241244 ignition[1002]: INFO : Ignition 2.14.0 Dec 13 14:05:36.241244 ignition[1002]: INFO : Stage: files Dec 13 14:05:36.251875 ignition[1002]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:05:36.251875 ignition[1002]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:05:36.251875 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:05:36.251875 ignition[1002]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:05:36.251875 ignition[1002]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:05:36.251875 ignition[1002]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:05:36.413165 ignition[1002]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:05:36.421189 ignition[1002]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:05:36.429600 ignition[1002]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:05:36.428933 unknown[1002]: wrote ssh authorized keys file for user: core Dec 13 14:05:36.445063 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 14:05:36.456350 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 14:05:36.564207 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 14:05:36.693132 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 14:05:36.705078 ignition[1002]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:05:36.705078 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Dec 13 14:05:37.236918 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 14:05:37.309104 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:05:37.328281 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:05:37.328281 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:05:37.328281 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:05:37.328281 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:05:37.328281 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:05:37.328281 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:05:37.328281 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:05:37.328281 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:05:37.328281 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:05:37.328281 ignition[1002]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:05:37.328281 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 14:05:37.328281 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 14:05:37.328281 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Dec 13 14:05:37.328281 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:05:37.328281 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3249898726" Dec 13 14:05:37.506808 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1002) Dec 13 14:05:37.343944 systemd[1]: mnt-oem3249898726.mount: Deactivated successfully. 
Dec 13 14:05:37.512369 ignition[1002]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3249898726": device or resource busy
Dec 13 14:05:37.512369 ignition[1002]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3249898726", trying btrfs: device or resource busy
Dec 13 14:05:37.512369 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3249898726"
Dec 13 14:05:37.512369 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3249898726"
Dec 13 14:05:37.512369 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3249898726"
Dec 13 14:05:37.512369 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3249898726"
Dec 13 14:05:37.512369 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Dec 13 14:05:37.512369 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 14:05:37.512369 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:05:37.512369 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem903652849"
Dec 13 14:05:37.512369 ignition[1002]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem903652849": device or resource busy
Dec 13 14:05:37.512369 ignition[1002]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem903652849", trying btrfs: device or resource busy
Dec 13 14:05:37.512369 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem903652849"
Dec 13 14:05:37.512369 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem903652849"
Dec 13 14:05:37.380413 systemd[1]: mnt-oem903652849.mount: Deactivated successfully.
Dec 13 14:05:37.687176 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem903652849"
Dec 13 14:05:37.687176 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem903652849"
Dec 13 14:05:37.687176 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 14:05:37.687176 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 14:05:37.687176 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Dec 13 14:05:37.810974 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK
Dec 13 14:05:38.020633 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 14:05:38.020633 ignition[1002]: INFO : files: op(14): [started] processing unit "waagent.service"
Dec 13 14:05:38.020633 ignition[1002]: INFO : files: op(14): [finished] processing unit "waagent.service"
Dec 13 14:05:38.020633 ignition[1002]: INFO : files: op(15): [started] processing unit "nvidia.service"
Dec 13 14:05:38.020633 ignition[1002]: INFO : files: op(15): [finished] processing unit "nvidia.service"
Dec 13 14:05:38.110453 kernel: audit: type=1130 audit(1734098738.045:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.110490 kernel: audit: type=1130 audit(1734098738.089:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.110575 ignition[1002]: INFO : files: op(16): [started] processing unit "prepare-helm.service"
Dec 13 14:05:38.110575 ignition[1002]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:05:38.110575 ignition[1002]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:05:38.110575 ignition[1002]: INFO : files: op(16): [finished] processing unit "prepare-helm.service"
Dec 13 14:05:38.110575 ignition[1002]: INFO : files: op(18): [started] setting preset to enabled for "waagent.service"
Dec 13 14:05:38.110575 ignition[1002]: INFO : files: op(18): [finished] setting preset to enabled for "waagent.service"
Dec 13 14:05:38.110575 ignition[1002]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service"
Dec 13 14:05:38.110575 ignition[1002]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service"
Dec 13 14:05:38.110575 ignition[1002]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 14:05:38.110575 ignition[1002]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 14:05:38.110575 ignition[1002]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:05:38.110575 ignition[1002]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:05:38.110575 ignition[1002]: INFO : files: files passed
Dec 13 14:05:38.110575 ignition[1002]: INFO : Ignition finished successfully
Dec 13 14:05:38.378214 kernel: audit: type=1131 audit(1734098738.111:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.378243 kernel: audit: type=1130 audit(1734098738.142:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.378255 kernel: audit: type=1130 audit(1734098738.216:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.378264 kernel: audit: type=1131 audit(1734098738.216:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.378279 kernel: audit: type=1130 audit(1734098738.320:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.034532 systemd[1]: Finished ignition-files.service.
Dec 13 14:05:38.048674 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 14:05:38.422343 kernel: audit: type=1131 audit(1734098738.390:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.422414 initrd-setup-root-after-ignition[1027]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:05:38.073342 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 14:05:38.074180 systemd[1]: Starting ignition-quench.service...
Dec 13 14:05:38.080009 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 14:05:38.080184 systemd[1]: Finished ignition-quench.service.
Dec 13 14:05:38.112354 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 14:05:38.142888 systemd[1]: Reached target ignition-complete.target.
Dec 13 14:05:38.177274 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 14:05:38.211799 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 14:05:38.211902 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 14:05:38.549207 kernel: audit: type=1131 audit(1734098738.525:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.217407 systemd[1]: Reached target initrd-fs.target.
Dec 13 14:05:38.232772 systemd[1]: Reached target initrd.target.
Dec 13 14:05:38.582918 kernel: audit: type=1131 audit(1734098738.553:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.273776 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 14:05:38.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.274695 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 14:05:38.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.310853 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 14:05:38.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.340387 systemd[1]: Starting initrd-cleanup.service...
Dec 13 14:05:38.356579 systemd[1]: Stopped target nss-lookup.target.
Dec 13 14:05:38.363334 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 14:05:38.631433 ignition[1040]: INFO : Ignition 2.14.0
Dec 13 14:05:38.631433 ignition[1040]: INFO : Stage: umount
Dec 13 14:05:38.631433 ignition[1040]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:05:38.631433 ignition[1040]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 14:05:38.631433 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 14:05:38.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.373512 systemd[1]: Stopped target timers.target.
Dec 13 14:05:38.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.703094 ignition[1040]: INFO : umount: umount passed
Dec 13 14:05:38.703094 ignition[1040]: INFO : Ignition finished successfully
Dec 13 14:05:38.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.382569 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 14:05:38.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.382639 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 14:05:38.390896 systemd[1]: Stopped target initrd.target.
Dec 13 14:05:38.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.409776 systemd[1]: Stopped target basic.target.
Dec 13 14:05:38.417163 systemd[1]: Stopped target ignition-complete.target.
Dec 13 14:05:38.427044 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 14:05:38.440010 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 14:05:38.456257 systemd[1]: Stopped target remote-fs.target.
Dec 13 14:05:38.467108 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 14:05:38.477224 systemd[1]: Stopped target sysinit.target.
Dec 13 14:05:38.488461 systemd[1]: Stopped target local-fs.target.
Dec 13 14:05:38.497789 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 14:05:38.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.506516 systemd[1]: Stopped target swap.target.
Dec 13 14:05:38.515376 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 14:05:38.515439 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 14:05:38.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.525746 systemd[1]: Stopped target cryptsetup.target.
Dec 13 14:05:38.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.549555 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 14:05:38.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.549617 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 14:05:38.554108 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 14:05:38.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.554164 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 14:05:38.583440 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 14:05:38.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.583495 systemd[1]: Stopped ignition-files.service.
Dec 13 14:05:38.916000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 14:05:38.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.594641 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 14:05:38.594688 systemd[1]: Stopped flatcar-metadata-hostname.service.
Dec 13 14:05:38.608108 systemd[1]: Stopping ignition-mount.service...
Dec 13 14:05:38.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.623560 systemd[1]: Stopping iscsiuio.service...
Dec 13 14:05:38.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.631666 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 14:05:38.984455 kernel: hv_netvsc 000d3af5-17d8-000d-3af5-17d8000d3af5 eth0: Data path switched from VF: enP16931s1
Dec 13 14:05:38.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.631750 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 14:05:38.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.641256 systemd[1]: Stopping sysroot-boot.service...
Dec 13 14:05:39.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:39.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.648639 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 14:05:38.648707 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 14:05:38.653586 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 14:05:38.653623 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 14:05:39.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.658976 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 14:05:38.659113 systemd[1]: Stopped iscsiuio.service.
Dec 13 14:05:38.669740 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 14:05:38.669834 systemd[1]: Finished initrd-cleanup.service.
Dec 13 14:05:39.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.687838 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 14:05:38.687927 systemd[1]: Stopped ignition-mount.service.
Dec 13 14:05:39.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:38.699002 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 14:05:38.699057 systemd[1]: Stopped ignition-disks.service.
Dec 13 14:05:38.707211 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 14:05:38.707254 systemd[1]: Stopped ignition-kargs.service.
Dec 13 14:05:38.716761 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 14:05:38.716801 systemd[1]: Stopped ignition-fetch.service.
Dec 13 14:05:38.726602 systemd[1]: Stopped target network.target.
Dec 13 14:05:38.738749 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 14:05:38.738811 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 14:05:38.748105 systemd[1]: Stopped target paths.target.
Dec 13 14:05:39.136621 systemd-journald[276]: Received SIGTERM from PID 1 (n/a).
Dec 13 14:05:39.136659 iscsid[851]: iscsid shutting down.
Dec 13 14:05:38.755843 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 14:05:38.764336 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 14:05:38.773550 systemd[1]: Stopped target slices.target.
Dec 13 14:05:38.782356 systemd[1]: Stopped target sockets.target.
Dec 13 14:05:38.790376 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:05:38.790406 systemd[1]: Closed iscsid.socket.
Dec 13 14:05:38.798272 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:05:38.798332 systemd[1]: Closed iscsiuio.socket.
Dec 13 14:05:38.805719 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 14:05:38.805760 systemd[1]: Stopped ignition-setup.service.
Dec 13 14:05:38.814902 systemd[1]: Stopping systemd-networkd.service...
Dec 13 14:05:38.823697 systemd-networkd[840]: eth0: DHCPv6 lease lost
Dec 13 14:05:39.136000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 14:05:38.825448 systemd[1]: Stopping systemd-resolved.service...
Dec 13 14:05:38.834265 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:05:38.834384 systemd[1]: Stopped systemd-networkd.service.
Dec 13 14:05:38.844564 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 14:05:38.844595 systemd[1]: Closed systemd-networkd.socket.
Dec 13 14:05:38.849795 systemd[1]: Stopping network-cleanup.service...
Dec 13 14:05:38.857608 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 14:05:38.857674 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 14:05:38.862588 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:05:38.862640 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:05:38.876530 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 14:05:38.876580 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 14:05:38.885103 systemd[1]: Stopping systemd-udevd.service...
Dec 13 14:05:38.896429 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 14:05:38.896908 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 14:05:38.897011 systemd[1]: Stopped systemd-resolved.service.
Dec 13 14:05:38.911931 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 14:05:38.912069 systemd[1]: Stopped systemd-udevd.service.
Dec 13 14:05:38.917110 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 14:05:38.917154 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 14:05:38.926951 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 14:05:38.927008 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 14:05:38.938070 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 14:05:38.938124 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 14:05:38.948091 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 14:05:38.948136 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 14:05:38.956792 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:05:38.956832 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 14:05:38.973964 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 14:05:38.983385 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:05:38.983440 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 14:05:38.992738 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 14:05:38.992866 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 14:05:39.022406 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 14:05:39.022513 systemd[1]: Stopped network-cleanup.service.
Dec 13 14:05:39.028561 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 14:05:39.046976 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 14:05:39.047078 systemd[1]: Stopped sysroot-boot.service.
Dec 13 14:05:39.051332 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 14:05:39.059779 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 14:05:39.059833 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 14:05:39.070828 systemd[1]: Starting initrd-switch-root.service...
Dec 13 14:05:39.089783 systemd[1]: Switching root.
Dec 13 14:05:39.138105 systemd-journald[276]: Journal stopped
Dec 13 14:06:01.484845 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 14:06:01.484866 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 14:06:01.484876 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 14:06:01.484886 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 14:06:01.484894 kernel: SELinux: policy capability open_perms=1
Dec 13 14:06:01.484902 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 14:06:01.484911 kernel: SELinux: policy capability always_check_network=0
Dec 13 14:06:01.484919 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 14:06:01.484928 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 14:06:01.484936 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 14:06:01.484944 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 14:06:01.484956 systemd[1]: Successfully loaded SELinux policy in 438.709ms.
Dec 13 14:06:01.484966 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 42.275ms.
Dec 13 14:06:01.484976 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:06:01.484986 systemd[1]: Detected virtualization microsoft.
Dec 13 14:06:01.484996 systemd[1]: Detected architecture arm64.
Dec 13 14:06:01.485004 systemd[1]: Detected first boot.
Dec 13 14:06:01.485014 systemd[1]: Hostname set to .
Dec 13 14:06:01.485022 systemd[1]: Initializing machine ID from random generator.
Dec 13 14:06:01.485031 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 14:06:01.485039 kernel: kauditd_printk_skb: 39 callbacks suppressed
Dec 13 14:06:01.485049 kernel: audit: type=1400 audit(1734098745.716:87): avc: denied { associate } for pid=1073 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 14:06:01.485060 kernel: audit: type=1300 audit(1734098745.716:87): arch=c00000b7 syscall=5 success=yes exit=0 a0=400014589c a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1056 pid=1073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:06:01.485070 kernel: audit: type=1327 audit(1734098745.716:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:06:01.485080 kernel: audit: type=1400 audit(1734098745.725:88): avc: denied { associate } for pid=1073 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 14:06:01.485089 kernel: audit: type=1300 audit(1734098745.725:88): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145975 a2=1ed a3=0 items=2 ppid=1056 pid=1073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:06:01.485098 kernel: audit: type=1307 audit(1734098745.725:88): cwd="/"
Dec 13 14:06:01.485108 kernel: audit: type=1302 audit(1734098745.725:88): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:06:01.485117 kernel: audit: type=1302 audit(1734098745.725:88): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:06:01.485128 kernel: audit: type=1327 audit(1734098745.725:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:06:01.485137 systemd[1]: Populated /etc with preset unit settings.
Dec 13 14:06:01.485147 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:06:01.485156 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:06:01.485167 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:06:01.485177 kernel: audit: type=1334 audit(1734098760.800:89): prog-id=12 op=LOAD
Dec 13 14:06:01.485185 kernel: audit: type=1334 audit(1734098760.800:90): prog-id=3 op=UNLOAD
Dec 13 14:06:01.485194 kernel: audit: type=1334 audit(1734098760.807:91): prog-id=13 op=LOAD
Dec 13 14:06:01.485202 kernel: audit: type=1334 audit(1734098760.813:92): prog-id=14 op=LOAD
Dec 13 14:06:01.485211 kernel: audit: type=1334 audit(1734098760.813:93): prog-id=4 op=UNLOAD
Dec 13 14:06:01.485220 kernel: audit: type=1334 audit(1734098760.813:94): prog-id=5 op=UNLOAD
Dec 13 14:06:01.485231 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 14:06:01.485240 kernel: audit: type=1334 audit(1734098760.819:95): prog-id=15 op=LOAD
Dec 13 14:06:01.485249 systemd[1]: Stopped iscsid.service.
Dec 13 14:06:01.485260 kernel: audit: type=1334 audit(1734098760.819:96): prog-id=12 op=UNLOAD
Dec 13 14:06:01.485269 kernel: audit: type=1334 audit(1734098760.825:97): prog-id=16 op=LOAD
Dec 13 14:06:01.485277 kernel: audit: type=1334 audit(1734098760.831:98): prog-id=17 op=LOAD
Dec 13 14:06:01.485286 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 14:06:01.485295 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 14:06:01.485363 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:06:01.485376 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 14:06:01.485388 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 14:06:01.485398 systemd[1]: Created slice system-getty.slice.
Dec 13 14:06:01.485408 systemd[1]: Created slice system-modprobe.slice.
Dec 13 14:06:01.485418 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 14:06:01.485428 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 14:06:01.485439 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 14:06:01.485448 systemd[1]: Created slice user.slice.
Dec 13 14:06:01.485457 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:06:01.485467 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:06:01.485478 systemd[1]: Set up automount boot.automount. Dec 13 14:06:01.485487 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:06:01.485496 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 14:06:01.485506 systemd[1]: Stopped target initrd-fs.target. Dec 13 14:06:01.485636 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 14:06:01.485677 systemd[1]: Reached target integritysetup.target. Dec 13 14:06:01.485688 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:06:01.485700 systemd[1]: Reached target remote-fs.target. Dec 13 14:06:01.485709 systemd[1]: Reached target slices.target. Dec 13 14:06:01.485719 systemd[1]: Reached target swap.target. Dec 13 14:06:01.485728 systemd[1]: Reached target torcx.target. Dec 13 14:06:01.485738 systemd[1]: Reached target veritysetup.target. Dec 13 14:06:01.485747 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:06:01.485758 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:06:01.485768 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:06:01.485781 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:06:01.485791 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:06:01.485800 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:06:01.485810 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:06:01.485819 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:06:01.485828 systemd[1]: Mounting media.mount... Dec 13 14:06:01.485839 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:06:01.485849 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:06:01.485858 systemd[1]: Mounting tmp.mount... Dec 13 14:06:01.485867 systemd[1]: Starting flatcar-tmpfiles.service... 
Dec 13 14:06:01.485877 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:06:01.485887 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:06:01.485896 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:06:01.485905 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:06:01.485914 systemd[1]: Starting modprobe@drm.service... Dec 13 14:06:01.485925 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:06:01.485935 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:06:01.485944 systemd[1]: Starting modprobe@loop.service... Dec 13 14:06:01.485954 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:06:01.485964 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 14:06:01.485973 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 14:06:01.485983 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 14:06:01.485993 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 14:06:01.486002 systemd[1]: Stopped systemd-journald.service. Dec 13 14:06:01.486013 systemd[1]: systemd-journald.service: Consumed 3.118s CPU time. Dec 13 14:06:01.486022 systemd[1]: Starting systemd-journald.service... Dec 13 14:06:01.486031 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:06:01.486041 kernel: fuse: init (API version 7.34) Dec 13 14:06:01.486050 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:06:01.486059 kernel: loop: module loaded Dec 13 14:06:01.486068 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:06:01.486082 systemd-journald[1151]: Journal started Dec 13 14:06:01.486131 systemd-journald[1151]: Runtime Journal (/run/log/journal/975c318d3144458fa25fa3f44b2e80de) is 8.0M, max 78.5M, 70.5M free. 
Dec 13 14:05:40.892000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:05:42.315000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:05:42.315000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:05:42.315000 audit: BPF prog-id=10 op=LOAD Dec 13 14:05:42.315000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:05:42.315000 audit: BPF prog-id=11 op=LOAD Dec 13 14:05:42.315000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:05:45.716000 audit[1073]: AVC avc: denied { associate } for pid=1073 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:05:45.716000 audit[1073]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=400014589c a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1056 pid=1073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:05:45.716000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:05:45.725000 audit[1073]: AVC avc: denied { associate } for pid=1073 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:05:45.725000 audit[1073]: SYSCALL 
arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145975 a2=1ed a3=0 items=2 ppid=1056 pid=1073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:05:45.725000 audit: CWD cwd="/" Dec 13 14:05:45.725000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:05:45.725000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:05:45.725000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:06:00.800000 audit: BPF prog-id=12 op=LOAD Dec 13 14:06:00.800000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:06:00.807000 audit: BPF prog-id=13 op=LOAD Dec 13 14:06:00.813000 audit: BPF prog-id=14 op=LOAD Dec 13 14:06:00.813000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:06:00.813000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:06:00.819000 audit: BPF prog-id=15 op=LOAD Dec 13 14:06:00.819000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:06:00.825000 audit: BPF prog-id=16 op=LOAD Dec 13 14:06:00.831000 audit: BPF prog-id=17 op=LOAD Dec 13 14:06:00.831000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:06:00.831000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:06:00.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:00.872000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:06:00.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:01.407000 audit: BPF prog-id=18 op=LOAD Dec 13 14:06:01.407000 audit: BPF prog-id=19 op=LOAD Dec 13 14:06:01.407000 audit: BPF prog-id=20 op=LOAD Dec 13 14:06:01.407000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:06:01.407000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:06:01.472000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:06:01.472000 audit[1151]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffe1723db0 a2=4000 a3=1 items=0 ppid=1 pid=1151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:06:01.472000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:06:00.798961 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:05:45.686826 /usr/lib/systemd/system-generators/torcx-generator[1073]: time="2024-12-13T14:05:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:06:00.832538 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 14:05:45.687348 /usr/lib/systemd/system-generators/torcx-generator[1073]: time="2024-12-13T14:05:45Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:06:00.832971 systemd[1]: systemd-journald.service: Consumed 3.118s CPU time. 
Dec 13 14:05:45.687365 /usr/lib/systemd/system-generators/torcx-generator[1073]: time="2024-12-13T14:05:45Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:05:45.687400 /usr/lib/systemd/system-generators/torcx-generator[1073]: time="2024-12-13T14:05:45Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 14:05:45.687410 /usr/lib/systemd/system-generators/torcx-generator[1073]: time="2024-12-13T14:05:45Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 14:05:45.687441 /usr/lib/systemd/system-generators/torcx-generator[1073]: time="2024-12-13T14:05:45Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 14:05:45.687453 /usr/lib/systemd/system-generators/torcx-generator[1073]: time="2024-12-13T14:05:45Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 14:05:45.687654 /usr/lib/systemd/system-generators/torcx-generator[1073]: time="2024-12-13T14:05:45Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 14:05:45.687686 /usr/lib/systemd/system-generators/torcx-generator[1073]: time="2024-12-13T14:05:45Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:05:45.687698 /usr/lib/systemd/system-generators/torcx-generator[1073]: time="2024-12-13T14:05:45Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:05:45.688002 /usr/lib/systemd/system-generators/torcx-generator[1073]: time="2024-12-13T14:05:45Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 14:05:45.688034 /usr/lib/systemd/system-generators/torcx-generator[1073]: time="2024-12-13T14:05:45Z" level=debug msg="new archive/reference added to cache" 
format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 14:05:45.688052 /usr/lib/systemd/system-generators/torcx-generator[1073]: time="2024-12-13T14:05:45Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 14:05:45.688066 /usr/lib/systemd/system-generators/torcx-generator[1073]: time="2024-12-13T14:05:45Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 14:05:45.688084 /usr/lib/systemd/system-generators/torcx-generator[1073]: time="2024-12-13T14:05:45Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 14:05:45.688097 /usr/lib/systemd/system-generators/torcx-generator[1073]: time="2024-12-13T14:05:45Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 14:05:59.280993 /usr/lib/systemd/system-generators/torcx-generator[1073]: time="2024-12-13T14:05:59Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:05:59.281254 /usr/lib/systemd/system-generators/torcx-generator[1073]: time="2024-12-13T14:05:59Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:05:59.281383 /usr/lib/systemd/system-generators/torcx-generator[1073]: time="2024-12-13T14:05:59Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker 
reference=com.coreos.cl Dec 13 14:05:59.281541 /usr/lib/systemd/system-generators/torcx-generator[1073]: time="2024-12-13T14:05:59Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:05:59.281590 /usr/lib/systemd/system-generators/torcx-generator[1073]: time="2024-12-13T14:05:59Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 14:05:59.281643 /usr/lib/systemd/system-generators/torcx-generator[1073]: time="2024-12-13T14:05:59Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 14:06:01.506808 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:06:01.518522 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 14:06:01.518585 systemd[1]: Stopped verity-setup.service. Dec 13 14:06:01.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.534917 systemd[1]: Started systemd-journald.service. Dec 13 14:06:01.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.535912 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:06:01.541597 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:06:01.545884 systemd[1]: Mounted media.mount. 
Dec 13 14:06:01.550038 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:06:01.554894 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:06:01.559959 systemd[1]: Mounted tmp.mount. Dec 13 14:06:01.564187 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:06:01.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.569734 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:06:01.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.575281 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:06:01.575539 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:06:01.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.580737 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:06:01.580935 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:06:01.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:01.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.586296 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:06:01.586511 systemd[1]: Finished modprobe@drm.service. Dec 13 14:06:01.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.591953 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:06:01.592154 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:06:01.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.597423 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:06:01.597623 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:06:01.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:01.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.603068 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:06:01.603259 systemd[1]: Finished modprobe@loop.service. Dec 13 14:06:01.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.608707 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:06:01.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.614298 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:06:01.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.620290 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:06:01.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.625884 systemd[1]: Finished systemd-udev-trigger.service. 
Dec 13 14:06:01.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.632120 systemd[1]: Reached target network-pre.target. Dec 13 14:06:01.638375 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:06:01.645503 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:06:01.650342 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:06:01.652323 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:06:01.658034 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:06:01.662915 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:06:01.664144 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:06:01.669801 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:06:01.670963 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:06:01.676609 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:06:01.684326 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:06:01.691467 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:06:01.691662 systemd-journald[1151]: Time spent on flushing to /var/log/journal/975c318d3144458fa25fa3f44b2e80de is 14.538ms for 1104 entries. Dec 13 14:06:01.691662 systemd-journald[1151]: System Journal (/var/log/journal/975c318d3144458fa25fa3f44b2e80de) is 8.0M, max 2.6G, 2.6G free. Dec 13 14:06:01.770629 systemd-journald[1151]: Received client request to flush runtime journal. Dec 13 14:06:01.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 14:06:01.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.770902 udevadm[1193]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 14:06:01.703139 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:06:01.719913 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:06:01.725448 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:06:01.734921 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:06:01.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.771678 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:06:01.853407 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:06:01.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.160038 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:06:02.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.166000 audit: BPF prog-id=21 op=LOAD Dec 13 14:06:02.166000 audit: BPF prog-id=22 op=LOAD Dec 13 14:06:02.166000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:06:02.166000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:06:02.167079 systemd[1]: Starting systemd-udevd.service... 
Dec 13 14:06:02.186193 systemd-udevd[1196]: Using default interface naming scheme 'v252'. Dec 13 14:06:02.234872 systemd[1]: Started systemd-udevd.service. Dec 13 14:06:02.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.245000 audit: BPF prog-id=23 op=LOAD Dec 13 14:06:02.246357 systemd[1]: Starting systemd-networkd.service... Dec 13 14:06:02.269411 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:06:02.268000 audit: BPF prog-id=24 op=LOAD Dec 13 14:06:02.268000 audit: BPF prog-id=25 op=LOAD Dec 13 14:06:02.268000 audit: BPF prog-id=26 op=LOAD Dec 13 14:06:02.293095 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Dec 13 14:06:02.312614 systemd[1]: Started systemd-userdbd.service. Dec 13 14:06:02.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:02.328430 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:06:02.363000 audit[1197]: AVC avc: denied { confidentiality } for pid=1197 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:06:02.411516 kernel: hv_vmbus: registering driver hv_balloon Dec 13 14:06:02.411637 kernel: hv_vmbus: registering driver hyperv_fb Dec 13 14:06:02.411661 kernel: hv_utils: Registering HyperV Utility Driver Dec 13 14:06:02.427226 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 13 14:06:02.427736 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 13 14:06:02.427819 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 13 14:06:02.433340 kernel: hv_balloon: Memory hot add disabled on ARM64 Dec 13 14:06:02.451629 kernel: hv_vmbus: registering driver hv_utils Dec 13 14:06:02.451757 kernel: Console: switching to colour dummy device 80x25 Dec 13 14:06:02.460641 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 14:06:02.363000 audit[1197]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaac8d5c7b0 a1=aa2c a2=ffff8fed24b0 a3=aaaac8cbd010 items=12 ppid=1196 pid=1197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:06:02.363000 audit: CWD cwd="/" Dec 13 14:06:02.363000 audit: PATH item=0 name=(null) inode=6818 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:02.363000 audit: PATH item=1 name=(null) inode=10666 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:02.363000 
audit: PATH item=2 name=(null) inode=10666 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:02.363000 audit: PATH item=3 name=(null) inode=10667 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:02.363000 audit: PATH item=4 name=(null) inode=10666 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:02.363000 audit: PATH item=5 name=(null) inode=10668 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:02.363000 audit: PATH item=6 name=(null) inode=10666 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:02.363000 audit: PATH item=7 name=(null) inode=10669 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:02.363000 audit: PATH item=8 name=(null) inode=10666 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:02.363000 audit: PATH item=9 name=(null) inode=10670 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:02.363000 audit: PATH item=10 name=(null) inode=10666 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:02.363000 audit: PATH item=11 name=(null) inode=10671 
dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:02.363000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:06:02.480844 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 14:06:02.480956 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 14:06:02.481011 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 14:06:02.088027 systemd-networkd[1217]: lo: Link UP Dec 13 14:06:02.180686 systemd-journald[1151]: Time jumped backwards, rotating. Dec 13 14:06:02.180819 kernel: mlx5_core 4223:00:02.0 enP16931s1: Link up Dec 13 14:06:02.181008 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1216) Dec 13 14:06:02.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.088040 systemd-networkd[1217]: lo: Gained carrier Dec 13 14:06:02.088474 systemd-networkd[1217]: Enumeration completed Dec 13 14:06:02.088621 systemd[1]: Started systemd-networkd.service. Dec 13 14:06:02.102191 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:06:02.108034 systemd-networkd[1217]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:06:02.192781 kernel: hv_netvsc 000d3af5-17d8-000d-3af5-17d8000d3af5 eth0: Data path switched to VF: enP16931s1 Dec 13 14:06:02.193436 systemd-networkd[1217]: enP16931s1: Link UP Dec 13 14:06:02.193650 systemd-networkd[1217]: eth0: Link UP Dec 13 14:06:02.193707 systemd-networkd[1217]: eth0: Gained carrier Dec 13 14:06:02.196353 systemd-networkd[1217]: enP16931s1: Gained carrier Dec 13 14:06:02.208710 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Dec 13 14:06:02.218053 systemd-networkd[1217]: eth0: DHCPv4 address 10.200.20.32/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 14:06:02.219047 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:06:02.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.225881 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:06:02.296718 lvm[1275]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:06:02.325787 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:06:02.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.331243 systemd[1]: Reached target cryptsetup.target. Dec 13 14:06:02.337391 systemd[1]: Starting lvm2-activation.service... Dec 13 14:06:02.341413 lvm[1276]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:06:02.363729 systemd[1]: Finished lvm2-activation.service. Dec 13 14:06:02.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.409056 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:06:02.415675 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:06:02.415707 systemd[1]: Reached target local-fs.target. Dec 13 14:06:02.420504 systemd[1]: Reached target machines.target. Dec 13 14:06:02.426333 systemd[1]: Starting ldconfig.service... 
Dec 13 14:06:02.436110 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:06:02.436182 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:06:02.437471 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:06:02.443396 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:06:02.450993 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:06:02.457714 systemd[1]: Starting systemd-sysext.service... Dec 13 14:06:02.462849 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1278 (bootctl) Dec 13 14:06:02.464097 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:06:02.488576 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:06:02.497339 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:06:02.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.506912 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:06:02.507140 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:06:02.517556 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:06:02.518306 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:06:02.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:02.541790 kernel: loop0: detected capacity change from 0 to 194096 Dec 13 14:06:02.572508 systemd-fsck[1286]: fsck.fat 4.2 (2021-01-31) Dec 13 14:06:02.572508 systemd-fsck[1286]: /dev/sda1: 236 files, 117175/258078 clusters Dec 13 14:06:02.574083 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:06:02.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.582885 systemd[1]: Mounting boot.mount... Dec 13 14:06:02.591680 systemd[1]: Mounted boot.mount. Dec 13 14:06:02.597095 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:06:02.606456 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:06:02.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.626785 kernel: loop1: detected capacity change from 0 to 194096 Dec 13 14:06:02.632373 (sd-sysext)[1294]: Using extensions 'kubernetes'. Dec 13 14:06:02.633067 (sd-sysext)[1294]: Merged extensions into '/usr'. Dec 13 14:06:02.650893 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:06:02.655722 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:06:02.657175 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:06:02.663580 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:06:02.669702 systemd[1]: Starting modprobe@loop.service... Dec 13 14:06:02.674328 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Dec 13 14:06:02.674489 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:06:02.677015 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:06:02.682683 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:06:02.682859 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:06:02.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.689551 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:06:02.689684 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:06:02.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.695837 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:06:02.696051 systemd[1]: Finished modprobe@loop.service. Dec 13 14:06:02.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:02.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.701835 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:06:02.701939 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:06:02.703066 systemd[1]: Finished systemd-sysext.service. Dec 13 14:06:02.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.709714 systemd[1]: Starting ensure-sysext.service... Dec 13 14:06:02.715489 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:06:02.722457 systemd[1]: Reloading. Dec 13 14:06:02.773125 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:06:02.777136 /usr/lib/systemd/system-generators/torcx-generator[1320]: time="2024-12-13T14:06:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:06:02.777168 /usr/lib/systemd/system-generators/torcx-generator[1320]: time="2024-12-13T14:06:02Z" level=info msg="torcx already run" Dec 13 14:06:02.801785 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:06:02.824751 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Dec 13 14:06:02.866383 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:06:02.866403 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:06:02.882279 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:06:02.944000 audit: BPF prog-id=27 op=LOAD Dec 13 14:06:02.944000 audit: BPF prog-id=28 op=LOAD Dec 13 14:06:02.944000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:06:02.944000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:06:02.945000 audit: BPF prog-id=29 op=LOAD Dec 13 14:06:02.945000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:06:02.946000 audit: BPF prog-id=30 op=LOAD Dec 13 14:06:02.946000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:06:02.946000 audit: BPF prog-id=31 op=LOAD Dec 13 14:06:02.946000 audit: BPF prog-id=32 op=LOAD Dec 13 14:06:02.946000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:06:02.947000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:06:02.948000 audit: BPF prog-id=33 op=LOAD Dec 13 14:06:02.948000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:06:02.948000 audit: BPF prog-id=34 op=LOAD Dec 13 14:06:02.948000 audit: BPF prog-id=35 op=LOAD Dec 13 14:06:02.948000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:06:02.948000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:06:02.964148 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:06:02.966340 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:06:02.972222 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:06:02.978806 systemd[1]: Starting modprobe@loop.service... 
Dec 13 14:06:02.983102 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:06:02.983238 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:06:02.984040 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:06:02.984350 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:06:02.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.989408 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:06:02.989523 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:06:02.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.995225 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:06:02.995433 systemd[1]: Finished modprobe@loop.service. Dec 13 14:06:02.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:02.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:03.001899 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:06:03.003263 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:06:03.009280 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:06:03.015042 systemd[1]: Starting modprobe@loop.service... Dec 13 14:06:03.018910 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:06:03.019036 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:06:03.019837 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:06:03.019972 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:06:03.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:03.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:03.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:03.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:03.024830 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:06:03.024956 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:06:03.030551 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:06:03.030655 systemd[1]: Finished modprobe@loop.service. Dec 13 14:06:03.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:03.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:03.035723 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:06:03.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:03.045004 systemd[1]: Starting audit-rules.service... Dec 13 14:06:03.050047 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:06:03.054898 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:06:03.056261 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:06:03.063057 systemd[1]: Starting modprobe@drm.service... Dec 13 14:06:03.069117 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:06:03.076287 systemd[1]: Starting modprobe@loop.service... 
Dec 13 14:06:03.081164 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:06:03.081305 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:06:03.082612 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:06:03.088000 audit: BPF prog-id=36 op=LOAD Dec 13 14:06:03.090038 systemd[1]: Starting systemd-resolved.service... Dec 13 14:06:03.094000 audit: BPF prog-id=37 op=LOAD Dec 13 14:06:03.096159 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:06:03.101947 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:06:03.108013 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:06:03.107000 audit[1404]: SYSTEM_BOOT pid=1404 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:06:03.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:03.113685 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:06:03.113824 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:06:03.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:03.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:03.119081 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:06:03.119199 systemd[1]: Finished modprobe@drm.service. Dec 13 14:06:03.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:03.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:03.124497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:06:03.124609 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:06:03.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:03.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:03.130063 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:06:03.130180 systemd[1]: Finished modprobe@loop.service. Dec 13 14:06:03.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:03.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:03.135347 systemd[1]: Finished systemd-journal-catalog-update.service. 
Dec 13 14:06:03.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:03.143167 systemd[1]: Finished ensure-sysext.service. Dec 13 14:06:03.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:03.149132 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:06:03.149229 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:06:03.149289 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:06:03.153418 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:06:03.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:03.187000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:06:03.187000 audit[1412]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc52ef060 a2=420 a3=0 items=0 ppid=1388 pid=1412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:06:03.187000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:06:03.191121 augenrules[1412]: No rules Dec 13 14:06:03.191986 systemd[1]: Finished audit-rules.service. Dec 13 14:06:03.207714 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:06:03.212559 systemd[1]: Reached target time-set.target. Dec 13 14:06:03.237266 systemd-resolved[1401]: Positive Trust Anchors: Dec 13 14:06:03.237279 systemd-resolved[1401]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:06:03.237305 systemd-resolved[1401]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:06:03.240548 systemd-resolved[1401]: Using system hostname 'ci-3510.3.6-a-fa37d69d59'. Dec 13 14:06:03.242018 systemd[1]: Started systemd-resolved.service. Dec 13 14:06:03.246940 systemd[1]: Reached target network.target. Dec 13 14:06:03.251542 systemd[1]: Reached target nss-lookup.target. Dec 13 14:06:03.404665 systemd-timesyncd[1402]: Contacted time server 144.202.66.214:123 (1.flatcar.pool.ntp.org). 
Dec 13 14:06:03.404773 systemd-timesyncd[1402]: Initial clock synchronization to Fri 2024-12-13 14:06:03.397651 UTC. Dec 13 14:06:04.044884 systemd-networkd[1217]: eth0: Gained IPv6LL Dec 13 14:06:04.047042 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:06:04.053505 systemd[1]: Reached target network-online.target. Dec 13 14:06:14.661812 ldconfig[1277]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:06:14.960458 systemd[1]: Finished ldconfig.service. Dec 13 14:06:14.967057 systemd[1]: Starting systemd-update-done.service... Dec 13 14:06:14.981870 systemd[1]: Finished systemd-update-done.service. Dec 13 14:06:14.987206 systemd[1]: Reached target sysinit.target. Dec 13 14:06:14.991631 systemd[1]: Started motdgen.path. Dec 13 14:06:14.995657 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:06:15.002169 systemd[1]: Started logrotate.timer. Dec 13 14:06:15.006162 systemd[1]: Started mdadm.timer. Dec 13 14:06:15.009934 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:06:15.015134 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:06:15.015164 systemd[1]: Reached target paths.target. Dec 13 14:06:15.019807 systemd[1]: Reached target timers.target. Dec 13 14:06:15.024900 systemd[1]: Listening on dbus.socket. Dec 13 14:06:15.030276 systemd[1]: Starting docker.socket... Dec 13 14:06:15.036846 systemd[1]: Listening on sshd.socket. Dec 13 14:06:15.041200 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:06:15.041725 systemd[1]: Listening on docker.socket. Dec 13 14:06:15.046184 systemd[1]: Reached target sockets.target. Dec 13 14:06:15.050880 systemd[1]: Reached target basic.target. 
Dec 13 14:06:15.055289 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:06:15.055319 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:06:15.056463 systemd[1]: Starting containerd.service... Dec 13 14:06:15.062250 systemd[1]: Starting dbus.service... Dec 13 14:06:15.067070 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:06:15.073128 systemd[1]: Starting extend-filesystems.service... Dec 13 14:06:15.078128 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:06:15.079571 systemd[1]: Starting kubelet.service... Dec 13 14:06:15.084572 systemd[1]: Starting motdgen.service... Dec 13 14:06:15.089551 systemd[1]: Started nvidia.service. Dec 13 14:06:15.095181 systemd[1]: Starting prepare-helm.service... Dec 13 14:06:15.100292 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:06:15.106154 systemd[1]: Starting sshd-keygen.service... Dec 13 14:06:15.112218 systemd[1]: Starting systemd-logind.service... Dec 13 14:06:15.118101 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:06:15.118181 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:06:15.118664 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 14:06:15.119640 systemd[1]: Starting update-engine.service... Dec 13 14:06:15.125744 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:06:15.137238 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Dec 13 14:06:15.137419 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:06:15.364421 jq[1439]: true Dec 13 14:06:15.367583 jq[1423]: false Dec 13 14:06:15.719218 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:06:15.719397 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:06:15.724782 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:06:15.724957 systemd[1]: Finished motdgen.service. Dec 13 14:06:15.763723 jq[1447]: true Dec 13 14:06:15.772074 extend-filesystems[1424]: Found loop1 Dec 13 14:06:15.777211 extend-filesystems[1424]: Found sda Dec 13 14:06:15.777211 extend-filesystems[1424]: Found sda1 Dec 13 14:06:15.777211 extend-filesystems[1424]: Found sda2 Dec 13 14:06:15.777211 extend-filesystems[1424]: Found sda3 Dec 13 14:06:15.777211 extend-filesystems[1424]: Found usr Dec 13 14:06:15.777211 extend-filesystems[1424]: Found sda4 Dec 13 14:06:15.777211 extend-filesystems[1424]: Found sda6 Dec 13 14:06:15.777211 extend-filesystems[1424]: Found sda7 Dec 13 14:06:15.777211 extend-filesystems[1424]: Found sda9 Dec 13 14:06:15.777211 extend-filesystems[1424]: Checking size of /dev/sda9 Dec 13 14:06:15.837265 env[1446]: time="2024-12-13T14:06:15.808254456Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:06:15.837265 env[1446]: time="2024-12-13T14:06:15.831274994Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:06:15.837265 env[1446]: time="2024-12-13T14:06:15.831429282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:06:15.837265 env[1446]: time="2024-12-13T14:06:15.832986473Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:06:15.837265 env[1446]: time="2024-12-13T14:06:15.833017746Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:06:15.837265 env[1446]: time="2024-12-13T14:06:15.833256056Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:06:15.837265 env[1446]: time="2024-12-13T14:06:15.833273452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:06:15.837265 env[1446]: time="2024-12-13T14:06:15.833289209Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:06:15.837265 env[1446]: time="2024-12-13T14:06:15.833300527Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:06:15.837265 env[1446]: time="2024-12-13T14:06:15.833515281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:06:15.837265 env[1446]: time="2024-12-13T14:06:15.833722997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:06:15.796103 systemd[1]: Started kubelet.service. Dec 13 14:06:15.839390 env[1446]: time="2024-12-13T14:06:15.833870526Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:06:15.839390 env[1446]: time="2024-12-13T14:06:15.833886363Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:06:15.839390 env[1446]: time="2024-12-13T14:06:15.833936952Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:06:15.839390 env[1446]: time="2024-12-13T14:06:15.833949270Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:06:15.856932 systemd-logind[1434]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Dec 13 14:06:15.857444 systemd-logind[1434]: New seat seat0. Dec 13 14:06:16.206446 tar[1442]: linux-arm64/helm Dec 13 14:06:16.400292 kubelet[1476]: E1213 14:06:16.400237 1476 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:06:16.402119 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:06:16.402248 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:06:16.613334 env[1446]: time="2024-12-13T14:06:16.610023499Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:06:16.613334 env[1446]: time="2024-12-13T14:06:16.610073369Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:06:16.613334 env[1446]: time="2024-12-13T14:06:16.610089126Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Dec 13 14:06:16.613334 env[1446]: time="2024-12-13T14:06:16.610129958Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:06:16.613334 env[1446]: time="2024-12-13T14:06:16.610146235Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:06:16.613334 env[1446]: time="2024-12-13T14:06:16.610160272Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:06:16.613334 env[1446]: time="2024-12-13T14:06:16.610174269Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:06:16.613334 env[1446]: time="2024-12-13T14:06:16.610540037Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:06:16.613334 env[1446]: time="2024-12-13T14:06:16.610556953Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:06:16.613334 env[1446]: time="2024-12-13T14:06:16.610571670Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:06:16.613334 env[1446]: time="2024-12-13T14:06:16.610584428Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:06:16.613334 env[1446]: time="2024-12-13T14:06:16.610597185Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:06:16.616052 env[1446]: time="2024-12-13T14:06:16.614255261Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:06:16.616052 env[1446]: time="2024-12-13T14:06:16.614366159Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Dec 13 14:06:16.616052 env[1446]: time="2024-12-13T14:06:16.614611511Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:06:16.616052 env[1446]: time="2024-12-13T14:06:16.614638425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:06:16.616052 env[1446]: time="2024-12-13T14:06:16.614653862Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:06:16.616052 env[1446]: time="2024-12-13T14:06:16.614699253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:06:16.616052 env[1446]: time="2024-12-13T14:06:16.614713170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:06:16.616052 env[1446]: time="2024-12-13T14:06:16.614724928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:06:16.616052 env[1446]: time="2024-12-13T14:06:16.614736926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:06:16.616052 env[1446]: time="2024-12-13T14:06:16.614749323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:06:16.618805 env[1446]: time="2024-12-13T14:06:16.618489063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:06:16.618805 env[1446]: time="2024-12-13T14:06:16.618531455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:06:16.618805 env[1446]: time="2024-12-13T14:06:16.618549211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Dec 13 14:06:16.618805 env[1446]: time="2024-12-13T14:06:16.618567127Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:06:16.618805 env[1446]: time="2024-12-13T14:06:16.618717658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:06:16.618805 env[1446]: time="2024-12-13T14:06:16.618734334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:06:16.618805 env[1446]: time="2024-12-13T14:06:16.618747812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:06:16.619815 env[1446]: time="2024-12-13T14:06:16.618778926Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:06:16.619815 env[1446]: time="2024-12-13T14:06:16.619048832Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:06:16.619815 env[1446]: time="2024-12-13T14:06:16.619064589Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:06:16.619815 env[1446]: time="2024-12-13T14:06:16.619082665Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:06:16.619815 env[1446]: time="2024-12-13T14:06:16.619119218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 14:06:16.619963 env[1446]: time="2024-12-13T14:06:16.619321018Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:06:16.619963 env[1446]: time="2024-12-13T14:06:16.619374688Z" level=info msg="Connect containerd service" Dec 13 14:06:16.619963 env[1446]: time="2024-12-13T14:06:16.619411240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:06:16.663910 env[1446]: time="2024-12-13T14:06:16.622354138Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:06:16.663910 env[1446]: time="2024-12-13T14:06:16.622510387Z" level=info msg="Start subscribing containerd event" Dec 13 14:06:16.663910 env[1446]: time="2024-12-13T14:06:16.622594370Z" level=info msg="Start recovering state" Dec 13 14:06:16.663910 env[1446]: time="2024-12-13T14:06:16.622662837Z" level=info msg="Start event monitor" Dec 13 14:06:16.663910 env[1446]: time="2024-12-13T14:06:16.622681793Z" level=info msg="Start snapshots syncer" Dec 13 14:06:16.663910 env[1446]: time="2024-12-13T14:06:16.622692871Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:06:16.663910 env[1446]: time="2024-12-13T14:06:16.622706668Z" level=info msg="Start streaming server" Dec 13 14:06:16.663910 env[1446]: time="2024-12-13T14:06:16.624001052Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:06:16.663910 env[1446]: time="2024-12-13T14:06:16.624056801Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 13 14:06:16.663910 env[1446]: time="2024-12-13T14:06:16.624103431Z" level=info msg="containerd successfully booted in 0.817757s" Dec 13 14:06:17.014220 extend-filesystems[1424]: Old size kept for /dev/sda9 Dec 13 14:06:17.014220 extend-filesystems[1424]: Found sr0 Dec 13 14:06:16.624095 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:06:17.031675 tar[1442]: linux-arm64/LICENSE Dec 13 14:06:17.031675 tar[1442]: linux-arm64/README.md Dec 13 14:06:16.624403 systemd[1]: Finished extend-filesystems.service. Dec 13 14:06:16.630113 systemd[1]: Started containerd.service. Dec 13 14:06:16.819892 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 14:06:17.036012 systemd[1]: Finished prepare-helm.service. Dec 13 14:06:17.061535 bash[1477]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:06:17.062484 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:06:17.093159 dbus-daemon[1422]: [system] SELinux support is enabled Dec 13 14:06:17.093851 systemd[1]: Started dbus.service. Dec 13 14:06:17.102190 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:06:17.102216 systemd[1]: Reached target system-config.target. Dec 13 14:06:17.108386 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 14:06:17.108906 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:06:17.109015 systemd[1]: Reached target user-config.target. Dec 13 14:06:17.114092 systemd[1]: Started systemd-logind.service. Dec 13 14:06:17.477916 update_engine[1437]: I1213 14:06:17.474542 1437 main.cc:92] Flatcar Update Engine starting Dec 13 14:06:17.495051 systemd[1]: Started update-engine.service. 
Dec 13 14:06:17.499937 update_engine[1437]: I1213 14:06:17.495079 1437 update_check_scheduler.cc:74] Next update check in 8m26s Dec 13 14:06:17.504641 systemd[1]: Started locksmithd.service. Dec 13 14:06:19.710558 sshd_keygen[1438]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:06:19.728901 systemd[1]: Finished sshd-keygen.service. Dec 13 14:06:19.737291 systemd[1]: Starting issuegen.service... Dec 13 14:06:19.743631 systemd[1]: Started waagent.service. Dec 13 14:06:19.749462 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:06:19.749654 systemd[1]: Finished issuegen.service. Dec 13 14:06:19.756219 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:06:20.128927 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:06:20.136276 systemd[1]: Started getty@tty1.service. Dec 13 14:06:20.143026 systemd[1]: Started serial-getty@ttyAMA0.service. Dec 13 14:06:20.148295 systemd[1]: Reached target getty.target. Dec 13 14:06:20.152855 systemd[1]: Reached target multi-user.target. Dec 13 14:06:20.159417 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:06:20.167006 locksmithd[1538]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:06:20.172871 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:06:20.173040 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:06:20.181518 systemd[1]: Startup finished in 805ms (kernel) + 17.407s (initrd) + 40.514s (userspace) = 58.727s. Dec 13 14:06:23.913057 login[1557]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Dec 13 14:06:23.949808 login[1556]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 14:06:24.117228 systemd[1]: Created slice user-500.slice. Dec 13 14:06:24.118363 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:06:24.120904 systemd-logind[1434]: New session 1 of user core. 
Dec 13 14:06:24.259982 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:06:24.261527 systemd[1]: Starting user@500.service... Dec 13 14:06:24.303994 (systemd)[1560]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:06:24.914885 login[1557]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 14:06:24.919736 systemd-logind[1434]: New session 2 of user core. Dec 13 14:06:24.981945 systemd[1560]: Queued start job for default target default.target. Dec 13 14:06:24.983198 systemd[1560]: Reached target paths.target. Dec 13 14:06:24.983381 systemd[1560]: Reached target sockets.target. Dec 13 14:06:24.983469 systemd[1560]: Reached target timers.target. Dec 13 14:06:24.983540 systemd[1560]: Reached target basic.target. Dec 13 14:06:24.983700 systemd[1]: Started user@500.service. Dec 13 14:06:24.984514 systemd[1]: Started session-1.scope. Dec 13 14:06:24.985035 systemd[1]: Started session-2.scope. Dec 13 14:06:24.987369 systemd[1560]: Reached target default.target. Dec 13 14:06:24.987424 systemd[1560]: Startup finished in 676ms. Dec 13 14:06:26.616923 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:06:26.617098 systemd[1]: Stopped kubelet.service. Dec 13 14:06:26.618548 systemd[1]: Starting kubelet.service... Dec 13 14:06:26.722113 systemd[1]: Started kubelet.service. Dec 13 14:06:26.801848 kubelet[1586]: E1213 14:06:26.801794 1586 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:06:26.804731 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:06:26.804876 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 14:06:27.400608 waagent[1554]: 2024-12-13T14:06:27.400494Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Dec 13 14:06:27.411891 waagent[1554]: 2024-12-13T14:06:27.411794Z INFO Daemon Daemon OS: flatcar 3510.3.6 Dec 13 14:06:27.416945 waagent[1554]: 2024-12-13T14:06:27.416853Z INFO Daemon Daemon Python: 3.9.16 Dec 13 14:06:27.421782 waagent[1554]: 2024-12-13T14:06:27.421639Z INFO Daemon Daemon Run daemon Dec 13 14:06:27.426443 waagent[1554]: 2024-12-13T14:06:27.426362Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.6' Dec 13 14:06:27.444161 waagent[1554]: 2024-12-13T14:06:27.444000Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Dec 13 14:06:27.459987 waagent[1554]: 2024-12-13T14:06:27.459837Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 14:06:27.470102 waagent[1554]: 2024-12-13T14:06:27.470009Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 14:06:27.475282 waagent[1554]: 2024-12-13T14:06:27.475192Z INFO Daemon Daemon Using waagent for provisioning Dec 13 14:06:27.481313 waagent[1554]: 2024-12-13T14:06:27.481233Z INFO Daemon Daemon Activate resource disk Dec 13 14:06:27.486251 waagent[1554]: 2024-12-13T14:06:27.486172Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 13 14:06:27.500990 waagent[1554]: 2024-12-13T14:06:27.500896Z INFO Daemon Daemon Found device: None Dec 13 14:06:27.506072 waagent[1554]: 2024-12-13T14:06:27.505990Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 13 14:06:27.514959 waagent[1554]: 2024-12-13T14:06:27.514878Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, 
duration=0 Dec 13 14:06:27.527016 waagent[1554]: 2024-12-13T14:06:27.526949Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 14:06:27.533142 waagent[1554]: 2024-12-13T14:06:27.533064Z INFO Daemon Daemon Running default provisioning handler Dec 13 14:06:27.546826 waagent[1554]: 2024-12-13T14:06:27.546640Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Dec 13 14:06:27.562296 waagent[1554]: 2024-12-13T14:06:27.562146Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 14:06:27.572304 waagent[1554]: 2024-12-13T14:06:27.572217Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 14:06:27.577873 waagent[1554]: 2024-12-13T14:06:27.577790Z INFO Daemon Daemon Copying ovf-env.xml Dec 13 14:06:27.621608 waagent[1554]: 2024-12-13T14:06:27.621463Z INFO Daemon Daemon Successfully mounted dvd Dec 13 14:06:27.646869 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 13 14:06:27.660231 waagent[1554]: 2024-12-13T14:06:27.660006Z INFO Daemon Daemon Detect protocol endpoint Dec 13 14:06:27.666049 waagent[1554]: 2024-12-13T14:06:27.665946Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 14:06:27.673158 waagent[1554]: 2024-12-13T14:06:27.673059Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Dec 13 14:06:27.680412 waagent[1554]: 2024-12-13T14:06:27.680320Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 13 14:06:27.686268 waagent[1554]: 2024-12-13T14:06:27.686180Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 13 14:06:27.691792 waagent[1554]: 2024-12-13T14:06:27.691692Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 13 14:06:27.729423 waagent[1554]: 2024-12-13T14:06:27.729344Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 13 14:06:27.737145 waagent[1554]: 2024-12-13T14:06:27.737093Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 13 14:06:27.742800 waagent[1554]: 2024-12-13T14:06:27.742706Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 13 14:06:28.316551 waagent[1554]: 2024-12-13T14:06:28.316383Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 13 14:06:28.335906 waagent[1554]: 2024-12-13T14:06:28.335815Z INFO Daemon Daemon Forcing an update of the goal state.. Dec 13 14:06:28.342579 waagent[1554]: 2024-12-13T14:06:28.342491Z INFO Daemon Daemon Fetching goal state [incarnation 1] Dec 13 14:06:28.427086 waagent[1554]: 2024-12-13T14:06:28.426927Z INFO Daemon Daemon Found private key matching thumbprint 5D19F95DEE764203F8F867C98AE8E319B46C45F1 Dec 13 14:06:28.436273 waagent[1554]: 2024-12-13T14:06:28.436185Z INFO Daemon Daemon Certificate with thumbprint AEF4012AF6635F018E2F097D96AE8A842D7A70AF has no matching private key. 
Dec 13 14:06:28.447154 waagent[1554]: 2024-12-13T14:06:28.447069Z INFO Daemon Daemon Fetch goal state completed Dec 13 14:06:28.507938 waagent[1554]: 2024-12-13T14:06:28.507878Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: d2ef0670-1db7-41f8-8901-03adf259fad5 New eTag: 2593858225434908864] Dec 13 14:06:28.518872 waagent[1554]: 2024-12-13T14:06:28.518789Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Dec 13 14:06:28.535701 waagent[1554]: 2024-12-13T14:06:28.535635Z INFO Daemon Daemon Starting provisioning Dec 13 14:06:28.540858 waagent[1554]: 2024-12-13T14:06:28.540770Z INFO Daemon Daemon Handle ovf-env.xml. Dec 13 14:06:28.545977 waagent[1554]: 2024-12-13T14:06:28.545899Z INFO Daemon Daemon Set hostname [ci-3510.3.6-a-fa37d69d59] Dec 13 14:06:28.563112 waagent[1554]: 2024-12-13T14:06:28.562973Z INFO Daemon Daemon Publish hostname [ci-3510.3.6-a-fa37d69d59] Dec 13 14:06:28.570299 waagent[1554]: 2024-12-13T14:06:28.570159Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 13 14:06:28.577996 waagent[1554]: 2024-12-13T14:06:28.577911Z INFO Daemon Daemon Primary interface is [eth0] Dec 13 14:06:28.595104 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Dec 13 14:06:28.595267 systemd[1]: Stopped systemd-networkd-wait-online.service. Dec 13 14:06:28.595323 systemd[1]: Stopping systemd-networkd-wait-online.service... Dec 13 14:06:28.595574 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:06:28.601809 systemd-networkd[1217]: eth0: DHCPv6 lease lost Dec 13 14:06:28.603533 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:06:28.603715 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:06:28.605906 systemd[1]: Starting systemd-networkd.service... 
Dec 13 14:06:28.634223 systemd-networkd[1614]: enP16931s1: Link UP Dec 13 14:06:28.634235 systemd-networkd[1614]: enP16931s1: Gained carrier Dec 13 14:06:28.635194 systemd-networkd[1614]: eth0: Link UP Dec 13 14:06:28.635206 systemd-networkd[1614]: eth0: Gained carrier Dec 13 14:06:28.635514 systemd-networkd[1614]: lo: Link UP Dec 13 14:06:28.635523 systemd-networkd[1614]: lo: Gained carrier Dec 13 14:06:28.635748 systemd-networkd[1614]: eth0: Gained IPv6LL Dec 13 14:06:28.636209 systemd-networkd[1614]: Enumeration completed Dec 13 14:06:28.636339 systemd[1]: Started systemd-networkd.service. Dec 13 14:06:28.637369 systemd-networkd[1614]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:06:28.638180 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:06:28.648458 waagent[1554]: 2024-12-13T14:06:28.642145Z INFO Daemon Daemon Create user account if not exists Dec 13 14:06:28.650646 waagent[1554]: 2024-12-13T14:06:28.650538Z INFO Daemon Daemon User core already exists, skip useradd Dec 13 14:06:28.657452 waagent[1554]: 2024-12-13T14:06:28.657356Z INFO Daemon Daemon Configure sudoer Dec 13 14:06:28.657877 systemd-networkd[1614]: eth0: DHCPv4 address 10.200.20.32/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 14:06:28.663260 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:06:28.663848 waagent[1554]: 2024-12-13T14:06:28.663715Z INFO Daemon Daemon Configure sshd Dec 13 14:06:28.668701 waagent[1554]: 2024-12-13T14:06:28.668608Z INFO Daemon Daemon Deploy ssh public key. Dec 13 14:06:29.776857 waagent[1554]: 2024-12-13T14:06:29.776745Z INFO Daemon Daemon Provisioning complete Dec 13 14:06:29.798817 waagent[1554]: 2024-12-13T14:06:29.798720Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 13 14:06:29.805924 waagent[1554]: 2024-12-13T14:06:29.805830Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Dec 13 14:06:29.817196 waagent[1554]: 2024-12-13T14:06:29.817110Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Dec 13 14:06:30.141912 waagent[1623]: 2024-12-13T14:06:30.141813Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Dec 13 14:06:30.143055 waagent[1623]: 2024-12-13T14:06:30.142993Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:06:30.143311 waagent[1623]: 2024-12-13T14:06:30.143260Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:06:30.156560 waagent[1623]: 2024-12-13T14:06:30.156455Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Dec 13 14:06:30.156937 waagent[1623]: 2024-12-13T14:06:30.156885Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Dec 13 14:06:30.236685 waagent[1623]: 2024-12-13T14:06:30.236540Z INFO ExtHandler ExtHandler Found private key matching thumbprint 5D19F95DEE764203F8F867C98AE8E319B46C45F1 Dec 13 14:06:30.237118 waagent[1623]: 2024-12-13T14:06:30.237059Z INFO ExtHandler ExtHandler Certificate with thumbprint AEF4012AF6635F018E2F097D96AE8A842D7A70AF has no matching private key. 
Dec 13 14:06:30.237437 waagent[1623]: 2024-12-13T14:06:30.237387Z INFO ExtHandler ExtHandler Fetch goal state completed Dec 13 14:06:30.255405 waagent[1623]: 2024-12-13T14:06:30.255345Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 8eef3588-bb4b-4c1c-b841-66f1af9db362 New eTag: 2593858225434908864] Dec 13 14:06:30.256198 waagent[1623]: 2024-12-13T14:06:30.256133Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Dec 13 14:06:30.301228 waagent[1623]: 2024-12-13T14:06:30.301085Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 14:06:30.311945 waagent[1623]: 2024-12-13T14:06:30.311849Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1623 Dec 13 14:06:30.315974 waagent[1623]: 2024-12-13T14:06:30.315891Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 14:06:30.317517 waagent[1623]: 2024-12-13T14:06:30.317447Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 14:06:30.352895 waagent[1623]: 2024-12-13T14:06:30.352828Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 14:06:30.353501 waagent[1623]: 2024-12-13T14:06:30.353443Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 14:06:30.361876 waagent[1623]: 2024-12-13T14:06:30.361811Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Dec 13 14:06:30.362644 waagent[1623]: 2024-12-13T14:06:30.362581Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Dec 13 14:06:30.364030 waagent[1623]: 2024-12-13T14:06:30.363962Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Dec 13 14:06:30.365612 waagent[1623]: 2024-12-13T14:06:30.365540Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 14:06:30.365928 waagent[1623]: 2024-12-13T14:06:30.365853Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:06:30.366487 waagent[1623]: 2024-12-13T14:06:30.366413Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:06:30.367133 waagent[1623]: 2024-12-13T14:06:30.367064Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 13 14:06:30.367478 waagent[1623]: 2024-12-13T14:06:30.367418Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 14:06:30.367478 waagent[1623]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 14:06:30.367478 waagent[1623]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 14:06:30.367478 waagent[1623]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 14:06:30.367478 waagent[1623]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:06:30.367478 waagent[1623]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:06:30.367478 waagent[1623]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:06:30.369989 waagent[1623]: 2024-12-13T14:06:30.369809Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Dec 13 14:06:30.370338 waagent[1623]: 2024-12-13T14:06:30.370263Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:06:30.371162 waagent[1623]: 2024-12-13T14:06:30.371085Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:06:30.371794 waagent[1623]: 2024-12-13T14:06:30.371707Z INFO EnvHandler ExtHandler Configure routes Dec 13 14:06:30.372016 waagent[1623]: 2024-12-13T14:06:30.371943Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 14:06:30.372197 waagent[1623]: 2024-12-13T14:06:30.372130Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 14:06:30.372278 waagent[1623]: 2024-12-13T14:06:30.372218Z INFO EnvHandler ExtHandler Gateway:None Dec 13 14:06:30.372438 waagent[1623]: 2024-12-13T14:06:30.372386Z INFO EnvHandler ExtHandler Routes:None Dec 13 14:06:30.374192 waagent[1623]: 2024-12-13T14:06:30.374073Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 14:06:30.374416 waagent[1623]: 2024-12-13T14:06:30.374350Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 13 14:06:30.374639 waagent[1623]: 2024-12-13T14:06:30.374578Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 14:06:30.387872 waagent[1623]: 2024-12-13T14:06:30.387788Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Dec 13 14:06:30.390631 waagent[1623]: 2024-12-13T14:06:30.390105Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Dec 13 14:06:30.393638 waagent[1623]: 2024-12-13T14:06:30.393219Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Dec 13 14:06:30.396028 waagent[1623]: 2024-12-13T14:06:30.395949Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1614' Dec 13 14:06:30.416000 waagent[1623]: 2024-12-13T14:06:30.415910Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 14:06:30.416000 waagent[1623]: Executing ['ip', '-a', '-o', 'link']: Dec 13 14:06:30.416000 waagent[1623]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 14:06:30.416000 waagent[1623]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f5:17:d8 brd ff:ff:ff:ff:ff:ff Dec 13 14:06:30.416000 waagent[1623]: 3: enP16931s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f5:17:d8 brd ff:ff:ff:ff:ff:ff\ altname enP16931p0s2 Dec 13 14:06:30.416000 waagent[1623]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 14:06:30.416000 waagent[1623]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 14:06:30.416000 waagent[1623]: 2: eth0 inet 10.200.20.32/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 14:06:30.416000 waagent[1623]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 14:06:30.416000 waagent[1623]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Dec 13 14:06:30.416000 waagent[1623]: 2: eth0 inet6 fe80::20d:3aff:fef5:17d8/64 scope link \ valid_lft forever preferred_lft forever Dec 13 14:06:30.431969 waagent[1623]: 2024-12-13T14:06:30.431900Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
Dec 13 14:06:30.573195 waagent[1623]: 2024-12-13T14:06:30.573028Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Dec 13 14:06:30.578717 waagent[1623]: 2024-12-13T14:06:30.578610Z INFO EnvHandler ExtHandler Firewall rules: Dec 13 14:06:30.578717 waagent[1623]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:06:30.578717 waagent[1623]: pkts bytes target prot opt in out source destination Dec 13 14:06:30.578717 waagent[1623]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:06:30.578717 waagent[1623]: pkts bytes target prot opt in out source destination Dec 13 14:06:30.578717 waagent[1623]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:06:30.578717 waagent[1623]: pkts bytes target prot opt in out source destination Dec 13 14:06:30.578717 waagent[1623]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 14:06:30.578717 waagent[1623]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 14:06:30.582279 waagent[1623]: 2024-12-13T14:06:30.582197Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 13 14:06:30.798963 waagent[1623]: 2024-12-13T14:06:30.798837Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting Dec 13 14:06:31.821937 waagent[1554]: 2024-12-13T14:06:31.821748Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Dec 13 14:06:31.828505 waagent[1554]: 2024-12-13T14:06:31.828437Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent Dec 13 14:06:33.101818 waagent[1663]: 2024-12-13T14:06:33.101696Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2) Dec 13 14:06:33.103029 waagent[1663]: 2024-12-13T14:06:33.102958Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.6 Dec 13 14:06:33.103281 waagent[1663]: 
2024-12-13T14:06:33.103232Z INFO ExtHandler ExtHandler Python: 3.9.16 Dec 13 14:06:33.103501 waagent[1663]: 2024-12-13T14:06:33.103454Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Dec 13 14:06:33.112447 waagent[1663]: 2024-12-13T14:06:33.112304Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 14:06:33.113159 waagent[1663]: 2024-12-13T14:06:33.113094Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:06:33.113424 waagent[1663]: 2024-12-13T14:06:33.113374Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:06:33.128263 waagent[1663]: 2024-12-13T14:06:33.128155Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 14:06:33.142019 waagent[1663]: 2024-12-13T14:06:33.141948Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Dec 13 14:06:33.143422 waagent[1663]: 2024-12-13T14:06:33.143352Z INFO ExtHandler Dec 13 14:06:33.143742 waagent[1663]: 2024-12-13T14:06:33.143689Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 3b69d56b-bca3-4056-9ba8-98b1eacd3b11 eTag: 2593858225434908864 source: Fabric] Dec 13 14:06:33.144728 waagent[1663]: 2024-12-13T14:06:33.144665Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 13 14:06:33.146226 waagent[1663]: 2024-12-13T14:06:33.146157Z INFO ExtHandler Dec 13 14:06:33.146460 waagent[1663]: 2024-12-13T14:06:33.146410Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 13 14:06:33.157014 waagent[1663]: 2024-12-13T14:06:33.156945Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 13 14:06:33.157753 waagent[1663]: 2024-12-13T14:06:33.157698Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Dec 13 14:06:33.180086 waagent[1663]: 2024-12-13T14:06:33.180010Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Dec 13 14:06:33.262575 waagent[1663]: 2024-12-13T14:06:33.262429Z INFO ExtHandler Downloaded certificate {'thumbprint': 'AEF4012AF6635F018E2F097D96AE8A842D7A70AF', 'hasPrivateKey': False} Dec 13 14:06:33.267741 waagent[1663]: 2024-12-13T14:06:33.267647Z INFO ExtHandler Downloaded certificate {'thumbprint': '5D19F95DEE764203F8F867C98AE8E319B46C45F1', 'hasPrivateKey': True} Dec 13 14:06:33.269116 waagent[1663]: 2024-12-13T14:06:33.269047Z INFO ExtHandler Fetch goal state completed Dec 13 14:06:33.292449 waagent[1663]: 2024-12-13T14:06:33.292321Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Dec 13 14:06:33.306200 waagent[1663]: 2024-12-13T14:06:33.306079Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1663 Dec 13 14:06:33.310636 waagent[1663]: 2024-12-13T14:06:33.310532Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 14:06:33.312116 waagent[1663]: 2024-12-13T14:06:33.312047Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 13 14:06:33.312565 waagent[1663]: 2024-12-13T14:06:33.312508Z INFO ExtHandler ExtHandler [CGI] Agent 
cgroups enabled: False Dec 13 14:06:33.315107 waagent[1663]: 2024-12-13T14:06:33.315036Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 14:06:33.321018 waagent[1663]: 2024-12-13T14:06:33.320959Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 14:06:33.321612 waagent[1663]: 2024-12-13T14:06:33.321550Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 14:06:33.330642 waagent[1663]: 2024-12-13T14:06:33.330579Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 14:06:33.331474 waagent[1663]: 2024-12-13T14:06:33.331403Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Dec 13 14:06:33.350603 waagent[1663]: 2024-12-13T14:06:33.350458Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now. Dec 13 14:06:33.354404 waagent[1663]: 2024-12-13T14:06:33.354201Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver Dec 13 14:06:33.355995 waagent[1663]: 2024-12-13T14:06:33.355882Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 13 14:06:33.358137 waagent[1663]: 2024-12-13T14:06:33.358041Z INFO ExtHandler ExtHandler Starting env monitor service. 
Dec 13 14:06:33.358416 waagent[1663]: 2024-12-13T14:06:33.358334Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:06:33.358775 waagent[1663]: 2024-12-13T14:06:33.358690Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:06:33.359935 waagent[1663]: 2024-12-13T14:06:33.359838Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 13 14:06:33.360329 waagent[1663]: 2024-12-13T14:06:33.360252Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 14:06:33.360329 waagent[1663]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 14:06:33.360329 waagent[1663]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 14:06:33.360329 waagent[1663]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 14:06:33.360329 waagent[1663]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:06:33.360329 waagent[1663]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:06:33.360329 waagent[1663]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:06:33.363365 waagent[1663]: 2024-12-13T14:06:33.363208Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Dec 13 14:06:33.363591 waagent[1663]: 2024-12-13T14:06:33.363511Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:06:33.363942 waagent[1663]: 2024-12-13T14:06:33.363855Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:06:33.367370 waagent[1663]: 2024-12-13T14:06:33.367203Z INFO EnvHandler ExtHandler Configure routes Dec 13 14:06:33.367611 waagent[1663]: 2024-12-13T14:06:33.367549Z INFO EnvHandler ExtHandler Gateway:None Dec 13 14:06:33.367736 waagent[1663]: 2024-12-13T14:06:33.367683Z INFO EnvHandler ExtHandler Routes:None Dec 13 14:06:33.371171 waagent[1663]: 2024-12-13T14:06:33.370963Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 14:06:33.371688 waagent[1663]: 2024-12-13T14:06:33.371584Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 14:06:33.373157 waagent[1663]: 2024-12-13T14:06:33.373083Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Dec 13 14:06:33.374589 waagent[1663]: 2024-12-13T14:06:33.372973Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 14:06:33.380707 waagent[1663]: 2024-12-13T14:06:33.377341Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 14:06:33.391286 waagent[1663]: 2024-12-13T14:06:33.391169Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 14:06:33.391286 waagent[1663]: Executing ['ip', '-a', '-o', 'link']: Dec 13 14:06:33.391286 waagent[1663]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 14:06:33.391286 waagent[1663]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f5:17:d8 brd ff:ff:ff:ff:ff:ff Dec 13 14:06:33.391286 waagent[1663]: 3: enP16931s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f5:17:d8 brd ff:ff:ff:ff:ff:ff\ altname enP16931p0s2 Dec 13 14:06:33.391286 waagent[1663]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 14:06:33.391286 waagent[1663]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 14:06:33.391286 waagent[1663]: 2: eth0 inet 10.200.20.32/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 14:06:33.391286 waagent[1663]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 14:06:33.391286 waagent[1663]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Dec 13 14:06:33.391286 waagent[1663]: 2: eth0 inet6 fe80::20d:3aff:fef5:17d8/64 scope link \ valid_lft forever preferred_lft forever Dec 13 14:06:33.395904 waagent[1663]: 2024-12-13T14:06:33.395750Z INFO ExtHandler ExtHandler Downloading agent manifest Dec 13 14:06:33.421003 waagent[1663]: 2024-12-13T14:06:33.420906Z INFO ExtHandler ExtHandler Dec 13 14:06:33.422776 waagent[1663]: 
2024-12-13T14:06:33.422676Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 797e45b4-bd78-4b64-a05d-210498af5e60 correlation ab90d6f8-09ef-4cf5-8df6-5f9297063fa4 created: 2024-12-13T14:04:38.821533Z] Dec 13 14:06:33.430650 waagent[1663]: 2024-12-13T14:06:33.430497Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 13 14:06:33.443619 waagent[1663]: 2024-12-13T14:06:33.443505Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 22 ms] Dec 13 14:06:33.472339 waagent[1663]: 2024-12-13T14:06:33.472241Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 13 14:06:33.472339 waagent[1663]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:06:33.472339 waagent[1663]: pkts bytes target prot opt in out source destination Dec 13 14:06:33.472339 waagent[1663]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:06:33.472339 waagent[1663]: pkts bytes target prot opt in out source destination Dec 13 14:06:33.472339 waagent[1663]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:06:33.472339 waagent[1663]: pkts bytes target prot opt in out source destination Dec 13 14:06:33.472339 waagent[1663]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 14:06:33.472339 waagent[1663]: 163 18774 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 14:06:33.472339 waagent[1663]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 14:06:33.491736 waagent[1663]: 2024-12-13T14:06:33.491639Z INFO ExtHandler ExtHandler Looking for existing remote access users. 
Dec 13 14:06:33.511509 waagent[1663]: 2024-12-13T14:06:33.511422Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: FD959E32-92FB-4190-98A9-0459AB96AC1F;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Dec 13 14:06:36.866961 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:06:36.867138 systemd[1]: Stopped kubelet.service. Dec 13 14:06:36.868639 systemd[1]: Starting kubelet.service... Dec 13 14:06:37.038702 systemd[1]: Started kubelet.service. Dec 13 14:06:37.084131 kubelet[1707]: E1213 14:06:37.084073 1707 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:06:37.086506 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:06:37.086632 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:06:47.116940 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 14:06:47.117112 systemd[1]: Stopped kubelet.service. Dec 13 14:06:47.118510 systemd[1]: Starting kubelet.service... Dec 13 14:06:47.382599 systemd[1]: Started kubelet.service. 
Dec 13 14:06:47.429234 kubelet[1717]: E1213 14:06:47.429182 1717 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:06:47.431468 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:06:47.431585 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:06:50.180776 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Dec 13 14:06:57.616964 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 14:06:57.617122 systemd[1]: Stopped kubelet.service. Dec 13 14:06:57.618513 systemd[1]: Starting kubelet.service... Dec 13 14:06:57.882050 systemd[1]: Started kubelet.service. Dec 13 14:06:57.920150 kubelet[1728]: E1213 14:06:57.920111 1728 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:06:57.922356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:06:57.922475 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:07:02.488788 update_engine[1437]: I1213 14:07:02.488581 1437 update_attempter.cc:509] Updating boot flags... Dec 13 14:07:08.116901 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 14:07:08.117058 systemd[1]: Stopped kubelet.service. Dec 13 14:07:08.118507 systemd[1]: Starting kubelet.service... Dec 13 14:07:08.313636 systemd[1]: Started kubelet.service. 
Dec 13 14:07:08.357489 kubelet[1777]: E1213 14:07:08.357451 1777 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:07:08.359746 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:07:08.359896 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:07:18.366953 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 14:07:18.367118 systemd[1]: Stopped kubelet.service. Dec 13 14:07:18.368663 systemd[1]: Starting kubelet.service... Dec 13 14:07:18.594652 systemd[1]: Created slice system-sshd.slice. Dec 13 14:07:18.595982 systemd[1]: Started sshd@0-10.200.20.32:22-10.200.16.10:59738.service. Dec 13 14:07:18.678302 systemd[1]: Started kubelet.service. Dec 13 14:07:18.718566 kubelet[1790]: E1213 14:07:18.718522 1790 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:07:18.720859 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:07:18.720985 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:07:19.214204 sshd[1787]: Accepted publickey for core from 10.200.16.10 port 59738 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:07:19.220264 sshd[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:19.224704 systemd[1]: Started session-3.scope. Dec 13 14:07:19.225111 systemd-logind[1434]: New session 3 of user core. 
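The kubelet crash loop running through this section (restart counters 2 onward, always the same `run.go:74` error) is the stock behaviour on a node where `/var/lib/kubelet/config.yaml` has not yet been written by `kubeadm init`/`kubeadm join`: kubelet exits 1, and systemd's restart policy relaunches it. A trivial sketch of the precondition it keeps failing (path from the log; the function is illustrative):

```python
from pathlib import Path

def kubelet_config_present(path: str = "/var/lib/kubelet/config.yaml") -> bool:
    # kubelet exits with status 1 when this file is absent, and systemd's
    # Restart= policy re-launches it, producing the loop seen in the journal
    return Path(path).is_file()
```

The loop resolves itself once kubeadm (or equivalent provisioning) drops the config file in place.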
Dec 13 14:07:19.599206 systemd[1]: Started sshd@1-10.200.20.32:22-10.200.16.10:59750.service. Dec 13 14:07:20.001112 sshd[1802]: Accepted publickey for core from 10.200.16.10 port 59750 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:07:20.002676 sshd[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:20.006737 systemd[1]: Started session-4.scope. Dec 13 14:07:20.007137 systemd-logind[1434]: New session 4 of user core. Dec 13 14:07:20.310457 sshd[1802]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:20.313458 systemd[1]: sshd@1-10.200.20.32:22-10.200.16.10:59750.service: Deactivated successfully. Dec 13 14:07:20.314187 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:07:20.314728 systemd-logind[1434]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:07:20.315705 systemd-logind[1434]: Removed session 4. Dec 13 14:07:20.380145 systemd[1]: Started sshd@2-10.200.20.32:22-10.200.16.10:59756.service. Dec 13 14:07:20.781106 sshd[1808]: Accepted publickey for core from 10.200.16.10 port 59756 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:07:20.783272 sshd[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:20.787203 systemd-logind[1434]: New session 5 of user core. Dec 13 14:07:20.787641 systemd[1]: Started session-5.scope. Dec 13 14:07:21.085323 sshd[1808]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:21.087798 systemd[1]: sshd@2-10.200.20.32:22-10.200.16.10:59756.service: Deactivated successfully. Dec 13 14:07:21.088464 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:07:21.089015 systemd-logind[1434]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:07:21.089854 systemd-logind[1434]: Removed session 5. Dec 13 14:07:21.156953 systemd[1]: Started sshd@3-10.200.20.32:22-10.200.16.10:59768.service. 
Dec 13 14:07:21.566029 sshd[1814]: Accepted publickey for core from 10.200.16.10 port 59768 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:07:21.567346 sshd[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:21.571566 systemd-logind[1434]: New session 6 of user core. Dec 13 14:07:21.572107 systemd[1]: Started session-6.scope. Dec 13 14:07:21.902756 sshd[1814]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:21.905292 systemd[1]: sshd@3-10.200.20.32:22-10.200.16.10:59768.service: Deactivated successfully. Dec 13 14:07:21.905983 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:07:21.906479 systemd-logind[1434]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:07:21.907312 systemd-logind[1434]: Removed session 6. Dec 13 14:07:21.972727 systemd[1]: Started sshd@4-10.200.20.32:22-10.200.16.10:59784.service. Dec 13 14:07:22.394040 sshd[1820]: Accepted publickey for core from 10.200.16.10 port 59784 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:07:22.395436 sshd[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:22.399668 systemd-logind[1434]: New session 7 of user core. Dec 13 14:07:22.400192 systemd[1]: Started session-7.scope. Dec 13 14:07:22.693106 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:07:22.693699 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:07:22.720105 systemd[1]: Starting docker.service... 
Dec 13 14:07:22.763082 env[1833]: time="2024-12-13T14:07:22.763027752Z" level=info msg="Starting up" Dec 13 14:07:22.764288 env[1833]: time="2024-12-13T14:07:22.764257788Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:07:22.764378 env[1833]: time="2024-12-13T14:07:22.764365228Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:07:22.764446 env[1833]: time="2024-12-13T14:07:22.764431348Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:07:22.764511 env[1833]: time="2024-12-13T14:07:22.764498548Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:07:22.766060 env[1833]: time="2024-12-13T14:07:22.766035423Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:07:22.766152 env[1833]: time="2024-12-13T14:07:22.766139503Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:07:22.766211 env[1833]: time="2024-12-13T14:07:22.766196783Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:07:22.766259 env[1833]: time="2024-12-13T14:07:22.766247142Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:07:22.771109 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1476445727-merged.mount: Deactivated successfully. Dec 13 14:07:22.842702 env[1833]: time="2024-12-13T14:07:22.842659073Z" level=info msg="Loading containers: start." Dec 13 14:07:22.944855 kernel: Initializing XFRM netlink socket Dec 13 14:07:22.959162 env[1833]: time="2024-12-13T14:07:22.959126962Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Dec 13 14:07:23.022613 systemd-networkd[1614]: docker0: Link UP Dec 13 14:07:23.049150 env[1833]: time="2024-12-13T14:07:23.049107455Z" level=info msg="Loading containers: done." Dec 13 14:07:23.059528 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3196756897-merged.mount: Deactivated successfully. Dec 13 14:07:23.073074 env[1833]: time="2024-12-13T14:07:23.073021385Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:07:23.073441 env[1833]: time="2024-12-13T14:07:23.073422744Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:07:23.073612 env[1833]: time="2024-12-13T14:07:23.073597464Z" level=info msg="Daemon has completed initialization" Dec 13 14:07:23.099583 systemd[1]: Started docker.service. Dec 13 14:07:23.108218 env[1833]: time="2024-12-13T14:07:23.108139202Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:07:27.889347 env[1446]: time="2024-12-13T14:07:27.889294821Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 14:07:28.658238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2554271313.mount: Deactivated successfully. Dec 13 14:07:28.866895 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Dec 13 14:07:28.867076 systemd[1]: Stopped kubelet.service. Dec 13 14:07:28.868453 systemd[1]: Starting kubelet.service... Dec 13 14:07:29.007366 systemd[1]: Started kubelet.service. 
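The Docker daemon notice above reports the default bridge pool `172.17.0.0/16`. As a quick illustration (standard Docker convention, not stated in this log), the `docker0` interface conventionally takes the first host address of that range, which is why containers default to a `172.17.0.1` gateway:

```python
import ipaddress

bridge = ipaddress.ip_network("172.17.0.0/16")  # default pool from the log
gateway = next(bridge.hosts())                  # docker0 conventionally gets this
str(gateway)  # -> "172.17.0.1"
```

Passing `--bip` (as the message suggests) simply substitutes a different network here.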
Dec 13 14:07:29.048567 kubelet[1959]: E1213 14:07:29.048514 1959 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:07:29.050701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:07:29.050846 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:07:31.248718 env[1446]: time="2024-12-13T14:07:31.248517535Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:31.253882 env[1446]: time="2024-12-13T14:07:31.253828002Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:31.258533 env[1446]: time="2024-12-13T14:07:31.258503271Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:31.263852 env[1446]: time="2024-12-13T14:07:31.263816019Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:31.264736 env[1446]: time="2024-12-13T14:07:31.264706616Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\"" Dec 13 14:07:31.273392 env[1446]: time="2024-12-13T14:07:31.273366676Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 14:07:33.899792 env[1446]: time="2024-12-13T14:07:33.899729462Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:33.906135 env[1446]: time="2024-12-13T14:07:33.906099607Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:33.911354 env[1446]: time="2024-12-13T14:07:33.911320195Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:33.915284 env[1446]: time="2024-12-13T14:07:33.915230867Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:33.916038 env[1446]: time="2024-12-13T14:07:33.916010065Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\"" Dec 13 14:07:33.924893 env[1446]: time="2024-12-13T14:07:33.924859245Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 14:07:35.575963 env[1446]: time="2024-12-13T14:07:35.575913701Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:35.583329 env[1446]: time="2024-12-13T14:07:35.583283445Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:35.589033 env[1446]: time="2024-12-13T14:07:35.588996513Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:35.592866 env[1446]: time="2024-12-13T14:07:35.592820385Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:35.593610 env[1446]: time="2024-12-13T14:07:35.593582423Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\"" Dec 13 14:07:35.602190 env[1446]: time="2024-12-13T14:07:35.602157605Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 14:07:37.021219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3909616880.mount: Deactivated successfully. Dec 13 14:07:39.116942 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Dec 13 14:07:39.117109 systemd[1]: Stopped kubelet.service. Dec 13 14:07:39.118564 systemd[1]: Starting kubelet.service... Dec 13 14:07:39.930152 systemd[1]: Started kubelet.service. 
Dec 13 14:07:39.974382 kubelet[1986]: E1213 14:07:39.974326 1986 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:07:39.976646 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:07:39.976789 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:07:43.161811 env[1446]: time="2024-12-13T14:07:43.161388407Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:43.166840 env[1446]: time="2024-12-13T14:07:43.166806358Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:43.171500 env[1446]: time="2024-12-13T14:07:43.171465789Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:43.174250 env[1446]: time="2024-12-13T14:07:43.174215505Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:43.174768 env[1446]: time="2024-12-13T14:07:43.174726024Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Dec 13 14:07:43.184099 env[1446]: time="2024-12-13T14:07:43.184039847Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:07:50.116967 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Dec 13 14:07:50.117140 systemd[1]: Stopped kubelet.service. Dec 13 14:07:50.118589 systemd[1]: Starting kubelet.service... Dec 13 14:07:50.629916 systemd[1]: Started kubelet.service. Dec 13 14:07:50.669332 kubelet[2002]: E1213 14:07:50.669277 2002 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:07:50.671472 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:07:50.671589 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:07:51.662993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1133093491.mount: Deactivated successfully. 
Dec 13 14:07:52.631212 env[1446]: time="2024-12-13T14:07:52.631165274Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:52.636728 env[1446]: time="2024-12-13T14:07:52.636690426Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:52.640320 env[1446]: time="2024-12-13T14:07:52.640290340Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:52.644078 env[1446]: time="2024-12-13T14:07:52.644049535Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:52.644815 env[1446]: time="2024-12-13T14:07:52.644786334Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 14:07:52.654113 env[1446]: time="2024-12-13T14:07:52.654078000Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 14:07:53.205043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount430109960.mount: Deactivated successfully. 
Dec 13 14:07:53.225328 env[1446]: time="2024-12-13T14:07:53.225268092Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:53.231640 env[1446]: time="2024-12-13T14:07:53.231595123Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:53.234911 env[1446]: time="2024-12-13T14:07:53.234879798Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:53.239063 env[1446]: time="2024-12-13T14:07:53.239011992Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:53.239632 env[1446]: time="2024-12-13T14:07:53.239604551Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 14:07:53.248392 env[1446]: time="2024-12-13T14:07:53.248353899Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 14:07:53.830561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3434996249.mount: Deactivated successfully. 
Dec 13 14:07:56.444546 env[1446]: time="2024-12-13T14:07:56.444487208Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:56.453101 env[1446]: time="2024-12-13T14:07:56.453049597Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:56.457022 env[1446]: time="2024-12-13T14:07:56.456991831Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:56.461561 env[1446]: time="2024-12-13T14:07:56.461533385Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:56.462290 env[1446]: time="2024-12-13T14:07:56.462264184Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Dec 13 14:08:00.866929 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Dec 13 14:08:00.867093 systemd[1]: Stopped kubelet.service. Dec 13 14:08:00.868510 systemd[1]: Starting kubelet.service... Dec 13 14:08:01.116520 systemd[1]: Started kubelet.service. 
Dec 13 14:08:01.165351 kubelet[2082]: E1213 14:08:01.165302 2082 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:08:01.167106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:08:01.167228 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:08:02.365147 systemd[1]: Stopped kubelet.service. Dec 13 14:08:02.367436 systemd[1]: Starting kubelet.service... Dec 13 14:08:02.391309 systemd[1]: Reloading. Dec 13 14:08:02.471585 /usr/lib/systemd/system-generators/torcx-generator[2113]: time="2024-12-13T14:08:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:08:02.477533 /usr/lib/systemd/system-generators/torcx-generator[2113]: time="2024-12-13T14:08:02Z" level=info msg="torcx already run" Dec 13 14:08:02.552096 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:08:02.552281 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:08:02.567845 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:08:02.679289 systemd[1]: Started kubelet.service. Dec 13 14:08:02.682183 systemd[1]: Stopping kubelet.service... 
Dec 13 14:08:02.682821 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:08:02.683008 systemd[1]: Stopped kubelet.service. Dec 13 14:08:02.684941 systemd[1]: Starting kubelet.service... Dec 13 14:08:02.967970 systemd[1]: Started kubelet.service. Dec 13 14:08:03.016722 kubelet[2180]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:08:03.016722 kubelet[2180]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:08:03.016722 kubelet[2180]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 14:08:03.017083 kubelet[2180]: I1213 14:08:03.016775 2180 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:08:04.536327 kubelet[2180]: I1213 14:08:04.536288 2180 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 14:08:04.536327 kubelet[2180]: I1213 14:08:04.536317 2180 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:08:04.536652 kubelet[2180]: I1213 14:08:04.536509 2180 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 14:08:04.547899 kubelet[2180]: E1213 14:08:04.547858 2180 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.32:6443: connect: connection refused Dec 13 14:08:04.549053 kubelet[2180]: I1213 14:08:04.549034 2180 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:08:04.557554 kubelet[2180]: I1213 14:08:04.557524 2180 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:08:04.558838 kubelet[2180]: I1213 14:08:04.558802 2180 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:08:04.559016 kubelet[2180]: I1213 14:08:04.558842 2180 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.6-a-fa37d69d59","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:08:04.559109 kubelet[2180]: I1213 14:08:04.559021 2180 topology_manager.go:138] "Creating topology manager with none policy" Dec 
13 14:08:04.559109 kubelet[2180]: I1213 14:08:04.559030 2180 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:08:04.559159 kubelet[2180]: I1213 14:08:04.559146 2180 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:08:04.560106 kubelet[2180]: I1213 14:08:04.560089 2180 kubelet.go:400] "Attempting to sync node with API server" Dec 13 14:08:04.560159 kubelet[2180]: I1213 14:08:04.560110 2180 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:08:04.560159 kubelet[2180]: I1213 14:08:04.560142 2180 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:08:04.560159 kubelet[2180]: I1213 14:08:04.560159 2180 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:08:04.562282 kubelet[2180]: I1213 14:08:04.562258 2180 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:08:04.562430 kubelet[2180]: I1213 14:08:04.562411 2180 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:08:04.562480 kubelet[2180]: W1213 14:08:04.562456 2180 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 14:08:04.563009 kubelet[2180]: I1213 14:08:04.562979 2180 server.go:1264] "Started kubelet" Dec 13 14:08:04.563607 kubelet[2180]: W1213 14:08:04.563543 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-fa37d69d59&limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Dec 13 14:08:04.563669 kubelet[2180]: E1213 14:08:04.563618 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-fa37d69d59&limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Dec 13 14:08:04.563720 kubelet[2180]: W1213 14:08:04.563689 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Dec 13 14:08:04.563750 kubelet[2180]: E1213 14:08:04.563724 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Dec 13 14:08:04.568636 kubelet[2180]: I1213 14:08:04.568591 2180 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:08:04.569051 kubelet[2180]: I1213 14:08:04.569035 2180 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:08:04.578839 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Dec 13 14:08:04.578963 kubelet[2180]: I1213 14:08:04.578936 2180 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:08:04.580069 kubelet[2180]: E1213 14:08:04.579974 2180 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.32:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.32:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.6-a-fa37d69d59.1810c1bf73c6c2ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-fa37d69d59,UID:ci-3510.3.6-a-fa37d69d59,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-fa37d69d59,},FirstTimestamp:2024-12-13 14:08:04.56296113 +0000 UTC m=+1.590968782,LastTimestamp:2024-12-13 14:08:04.56296113 +0000 UTC m=+1.590968782,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-fa37d69d59,}" Dec 13 14:08:04.581353 kubelet[2180]: I1213 14:08:04.581313 2180 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:08:04.583403 kubelet[2180]: I1213 14:08:04.583384 2180 server.go:455] "Adding debug handlers to kubelet server" Dec 13 14:08:04.586719 kubelet[2180]: E1213 14:08:04.586693 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-fa37d69d59\" not found" Dec 13 14:08:04.586891 kubelet[2180]: I1213 14:08:04.586879 2180 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:08:04.587085 kubelet[2180]: I1213 14:08:04.587068 2180 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 14:08:04.589701 kubelet[2180]: I1213 14:08:04.589682 2180 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:08:04.590181 kubelet[2180]: E1213 14:08:04.590150 2180 controller.go:145] "Failed 
to ensure lease exists, will retry" err="Get \"https://10.200.20.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-fa37d69d59?timeout=10s\": dial tcp 10.200.20.32:6443: connect: connection refused" interval="200ms" Dec 13 14:08:04.590454 kubelet[2180]: I1213 14:08:04.590437 2180 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:08:04.590607 kubelet[2180]: I1213 14:08:04.590590 2180 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:08:04.591552 kubelet[2180]: E1213 14:08:04.591532 2180 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:08:04.592145 kubelet[2180]: I1213 14:08:04.592130 2180 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:08:04.597415 kubelet[2180]: W1213 14:08:04.597378 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Dec 13 14:08:04.597529 kubelet[2180]: E1213 14:08:04.597517 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Dec 13 14:08:04.618154 kubelet[2180]: I1213 14:08:04.618105 2180 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:08:04.619281 kubelet[2180]: I1213 14:08:04.619248 2180 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:08:04.619341 kubelet[2180]: I1213 14:08:04.619296 2180 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:08:04.619341 kubelet[2180]: I1213 14:08:04.619314 2180 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 14:08:04.619402 kubelet[2180]: E1213 14:08:04.619355 2180 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:08:04.620502 kubelet[2180]: W1213 14:08:04.620260 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Dec 13 14:08:04.620502 kubelet[2180]: E1213 14:08:04.620312 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Dec 13 14:08:04.694856 kubelet[2180]: I1213 14:08:04.694831 2180 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:04.695393 kubelet[2180]: I1213 14:08:04.695368 2180 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:08:04.695393 kubelet[2180]: I1213 14:08:04.695385 2180 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:08:04.695524 kubelet[2180]: I1213 14:08:04.695414 2180 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:08:04.695920 kubelet[2180]: E1213 14:08:04.695895 2180 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.32:6443/api/v1/nodes\": dial tcp 10.200.20.32:6443: connect: connection refused" node="ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:04.700491 kubelet[2180]: I1213 
14:08:04.700468 2180 policy_none.go:49] "None policy: Start" Dec 13 14:08:04.701203 kubelet[2180]: I1213 14:08:04.701174 2180 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:08:04.701203 kubelet[2180]: I1213 14:08:04.701203 2180 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:08:04.708843 systemd[1]: Created slice kubepods.slice. Dec 13 14:08:04.712958 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:08:04.715632 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 14:08:04.719868 kubelet[2180]: E1213 14:08:04.719847 2180 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:08:04.721478 kubelet[2180]: I1213 14:08:04.721462 2180 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:08:04.721711 kubelet[2180]: I1213 14:08:04.721680 2180 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:08:04.721888 kubelet[2180]: I1213 14:08:04.721878 2180 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:08:04.723920 kubelet[2180]: E1213 14:08:04.723903 2180 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.6-a-fa37d69d59\" not found" Dec 13 14:08:04.791314 kubelet[2180]: E1213 14:08:04.791212 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-fa37d69d59?timeout=10s\": dial tcp 10.200.20.32:6443: connect: connection refused" interval="400ms" Dec 13 14:08:04.898327 kubelet[2180]: I1213 14:08:04.898296 2180 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:04.898621 kubelet[2180]: E1213 14:08:04.898595 2180 kubelet_node_status.go:96] "Unable to 
register node with API server" err="Post \"https://10.200.20.32:6443/api/v1/nodes\": dial tcp 10.200.20.32:6443: connect: connection refused" node="ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:04.920825 kubelet[2180]: I1213 14:08:04.920789 2180 topology_manager.go:215] "Topology Admit Handler" podUID="75b16e7d565e0c7957c13d977c28a8b4" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:04.922099 kubelet[2180]: I1213 14:08:04.922077 2180 topology_manager.go:215] "Topology Admit Handler" podUID="4c48722eb6ac6bbc7768def9a1842faf" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:04.923503 kubelet[2180]: I1213 14:08:04.923481 2180 topology_manager.go:215] "Topology Admit Handler" podUID="8d0f2920eb40609df6b349af78c5b831" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:04.928699 systemd[1]: Created slice kubepods-burstable-pod75b16e7d565e0c7957c13d977c28a8b4.slice. Dec 13 14:08:04.947136 systemd[1]: Created slice kubepods-burstable-pod4c48722eb6ac6bbc7768def9a1842faf.slice. Dec 13 14:08:04.954049 systemd[1]: Created slice kubepods-burstable-pod8d0f2920eb40609df6b349af78c5b831.slice. 
Dec 13 14:08:04.992087 kubelet[2180]: I1213 14:08:04.992045 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c48722eb6ac6bbc7768def9a1842faf-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-fa37d69d59\" (UID: \"4c48722eb6ac6bbc7768def9a1842faf\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:04.992278 kubelet[2180]: I1213 14:08:04.992264 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4c48722eb6ac6bbc7768def9a1842faf-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-fa37d69d59\" (UID: \"4c48722eb6ac6bbc7768def9a1842faf\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:04.992406 kubelet[2180]: I1213 14:08:04.992393 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c48722eb6ac6bbc7768def9a1842faf-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-fa37d69d59\" (UID: \"4c48722eb6ac6bbc7768def9a1842faf\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:04.992513 kubelet[2180]: I1213 14:08:04.992501 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c48722eb6ac6bbc7768def9a1842faf-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-fa37d69d59\" (UID: \"4c48722eb6ac6bbc7768def9a1842faf\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:04.992617 kubelet[2180]: I1213 14:08:04.992604 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4c48722eb6ac6bbc7768def9a1842faf-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-fa37d69d59\" (UID: \"4c48722eb6ac6bbc7768def9a1842faf\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:04.992718 kubelet[2180]: I1213 14:08:04.992704 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d0f2920eb40609df6b349af78c5b831-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-fa37d69d59\" (UID: \"8d0f2920eb40609df6b349af78c5b831\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:04.992839 kubelet[2180]: I1213 14:08:04.992827 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75b16e7d565e0c7957c13d977c28a8b4-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-fa37d69d59\" (UID: \"75b16e7d565e0c7957c13d977c28a8b4\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:04.992942 kubelet[2180]: I1213 14:08:04.992931 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75b16e7d565e0c7957c13d977c28a8b4-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-fa37d69d59\" (UID: \"75b16e7d565e0c7957c13d977c28a8b4\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:04.993036 kubelet[2180]: I1213 14:08:04.993024 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75b16e7d565e0c7957c13d977c28a8b4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-fa37d69d59\" (UID: \"75b16e7d565e0c7957c13d977c28a8b4\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:05.191792 kubelet[2180]: E1213 14:08:05.191726 2180 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-fa37d69d59?timeout=10s\": dial tcp 10.200.20.32:6443: connect: connection refused" interval="800ms" Dec 13 14:08:05.246878 env[1446]: time="2024-12-13T14:08:05.246837865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-fa37d69d59,Uid:75b16e7d565e0c7957c13d977c28a8b4,Namespace:kube-system,Attempt:0,}" Dec 13 14:08:05.253025 env[1446]: time="2024-12-13T14:08:05.252795258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-fa37d69d59,Uid:4c48722eb6ac6bbc7768def9a1842faf,Namespace:kube-system,Attempt:0,}" Dec 13 14:08:05.256788 env[1446]: time="2024-12-13T14:08:05.256732174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-fa37d69d59,Uid:8d0f2920eb40609df6b349af78c5b831,Namespace:kube-system,Attempt:0,}" Dec 13 14:08:05.300634 kubelet[2180]: I1213 14:08:05.300608 2180 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:05.301425 kubelet[2180]: E1213 14:08:05.301400 2180 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.32:6443/api/v1/nodes\": dial tcp 10.200.20.32:6443: connect: connection refused" node="ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:05.475505 kubelet[2180]: W1213 14:08:05.475208 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Dec 13 14:08:05.475505 kubelet[2180]: E1213 14:08:05.475250 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://10.200.20.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Dec 13 14:08:05.504024 kubelet[2180]: W1213 14:08:05.503973 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-fa37d69d59&limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Dec 13 14:08:05.504198 kubelet[2180]: E1213 14:08:05.504186 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-fa37d69d59&limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Dec 13 14:08:05.993008 kubelet[2180]: E1213 14:08:05.992958 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-fa37d69d59?timeout=10s\": dial tcp 10.200.20.32:6443: connect: connection refused" interval="1.6s" Dec 13 14:08:06.103340 kubelet[2180]: I1213 14:08:06.103314 2180 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:06.103838 kubelet[2180]: E1213 14:08:06.103813 2180 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.32:6443/api/v1/nodes\": dial tcp 10.200.20.32:6443: connect: connection refused" node="ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:06.146572 kubelet[2180]: W1213 14:08:06.146517 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Dec 13 14:08:06.150825 kubelet[2180]: E1213 14:08:06.146679 2180 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Dec 13 14:08:06.156920 kubelet[2180]: W1213 14:08:06.156890 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Dec 13 14:08:06.156920 kubelet[2180]: E1213 14:08:06.156924 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Dec 13 14:08:06.425483 kubelet[2180]: E1213 14:08:06.425383 2180 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.32:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.32:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.6-a-fa37d69d59.1810c1bf73c6c2ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-fa37d69d59,UID:ci-3510.3.6-a-fa37d69d59,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-fa37d69d59,},FirstTimestamp:2024-12-13 14:08:04.56296113 +0000 UTC m=+1.590968782,LastTimestamp:2024-12-13 14:08:04.56296113 +0000 UTC m=+1.590968782,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-fa37d69d59,}" Dec 13 14:08:06.675399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1965402104.mount: Deactivated successfully. 
Dec 13 14:08:06.701568 env[1446]: time="2024-12-13T14:08:06.701319430Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:06.704845 env[1446]: time="2024-12-13T14:08:06.704805906Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:06.713850 env[1446]: time="2024-12-13T14:08:06.713814936Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:06.718142 env[1446]: time="2024-12-13T14:08:06.718116291Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:06.720423 env[1446]: time="2024-12-13T14:08:06.720389849Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:06.726096 env[1446]: time="2024-12-13T14:08:06.726063482Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:06.732148 env[1446]: time="2024-12-13T14:08:06.732121076Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:06.736179 env[1446]: time="2024-12-13T14:08:06.736146031Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:06.743255 env[1446]: time="2024-12-13T14:08:06.743218503Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:06.745534 kubelet[2180]: E1213 14:08:06.745460 2180 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.32:6443: connect: connection refused Dec 13 14:08:06.747154 env[1446]: time="2024-12-13T14:08:06.747128059Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:06.750190 env[1446]: time="2024-12-13T14:08:06.750154416Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:06.753229 env[1446]: time="2024-12-13T14:08:06.753201772Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:06.806004 env[1446]: time="2024-12-13T14:08:06.805831914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:08:06.806004 env[1446]: time="2024-12-13T14:08:06.805871114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:08:06.806004 env[1446]: time="2024-12-13T14:08:06.805880834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:08:06.806432 env[1446]: time="2024-12-13T14:08:06.806342153Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b04b9d40f835bde9cfed9b69f4c1000d95187431f91529f1185cf0362cbb427 pid=2218 runtime=io.containerd.runc.v2 Dec 13 14:08:06.822302 systemd[1]: Started cri-containerd-2b04b9d40f835bde9cfed9b69f4c1000d95187431f91529f1185cf0362cbb427.scope. Dec 13 14:08:06.851045 env[1446]: time="2024-12-13T14:08:06.850998903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-fa37d69d59,Uid:75b16e7d565e0c7957c13d977c28a8b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b04b9d40f835bde9cfed9b69f4c1000d95187431f91529f1185cf0362cbb427\"" Dec 13 14:08:06.854708 env[1446]: time="2024-12-13T14:08:06.854670419Z" level=info msg="CreateContainer within sandbox \"2b04b9d40f835bde9cfed9b69f4c1000d95187431f91529f1185cf0362cbb427\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:08:06.862510 env[1446]: time="2024-12-13T14:08:06.862436931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:08:06.862647 env[1446]: time="2024-12-13T14:08:06.862490691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:08:06.862647 env[1446]: time="2024-12-13T14:08:06.862501890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:08:06.862737 env[1446]: time="2024-12-13T14:08:06.862664970Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/75931f97add343187cec5ad8f082c2b7c417f5776f5695546f28d238f8de74fa pid=2261 runtime=io.containerd.runc.v2 Dec 13 14:08:06.878027 systemd[1]: Started cri-containerd-75931f97add343187cec5ad8f082c2b7c417f5776f5695546f28d238f8de74fa.scope. Dec 13 14:08:06.881008 env[1446]: time="2024-12-13T14:08:06.880951710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:08:06.881161 env[1446]: time="2024-12-13T14:08:06.881137870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:08:06.881247 env[1446]: time="2024-12-13T14:08:06.881225670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:08:06.881444 env[1446]: time="2024-12-13T14:08:06.881416829Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7bbec0d177e1b3393df7968c065eedd88bcadf469a4781bc430b2cc908994f24 pid=2291 runtime=io.containerd.runc.v2 Dec 13 14:08:06.906638 systemd[1]: Started cri-containerd-7bbec0d177e1b3393df7968c065eedd88bcadf469a4781bc430b2cc908994f24.scope. 
Dec 13 14:08:06.911611 env[1446]: time="2024-12-13T14:08:06.911559836Z" level=info msg="CreateContainer within sandbox \"2b04b9d40f835bde9cfed9b69f4c1000d95187431f91529f1185cf0362cbb427\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8f1a85cd6cd1d6e392098f804022facd68c105b5ce466565adc24db59fab3491\"" Dec 13 14:08:06.913044 env[1446]: time="2024-12-13T14:08:06.912188435Z" level=info msg="StartContainer for \"8f1a85cd6cd1d6e392098f804022facd68c105b5ce466565adc24db59fab3491\"" Dec 13 14:08:06.931660 env[1446]: time="2024-12-13T14:08:06.931609774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-fa37d69d59,Uid:4c48722eb6ac6bbc7768def9a1842faf,Namespace:kube-system,Attempt:0,} returns sandbox id \"75931f97add343187cec5ad8f082c2b7c417f5776f5695546f28d238f8de74fa\"" Dec 13 14:08:06.934962 env[1446]: time="2024-12-13T14:08:06.934922210Z" level=info msg="CreateContainer within sandbox \"75931f97add343187cec5ad8f082c2b7c417f5776f5695546f28d238f8de74fa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:08:06.940686 systemd[1]: Started cri-containerd-8f1a85cd6cd1d6e392098f804022facd68c105b5ce466565adc24db59fab3491.scope. 
Dec 13 14:08:06.956905 env[1446]: time="2024-12-13T14:08:06.956807105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-fa37d69d59,Uid:8d0f2920eb40609df6b349af78c5b831,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bbec0d177e1b3393df7968c065eedd88bcadf469a4781bc430b2cc908994f24\"" Dec 13 14:08:06.963453 env[1446]: time="2024-12-13T14:08:06.963414858Z" level=info msg="CreateContainer within sandbox \"7bbec0d177e1b3393df7968c065eedd88bcadf469a4781bc430b2cc908994f24\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:08:06.969641 env[1446]: time="2024-12-13T14:08:06.969591251Z" level=info msg="CreateContainer within sandbox \"75931f97add343187cec5ad8f082c2b7c417f5776f5695546f28d238f8de74fa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ddc75dc21ba63d1e89235f261e502dc1600003156fa41716fd68edd7e94dba92\"" Dec 13 14:08:06.970097 env[1446]: time="2024-12-13T14:08:06.970064051Z" level=info msg="StartContainer for \"ddc75dc21ba63d1e89235f261e502dc1600003156fa41716fd68edd7e94dba92\"" Dec 13 14:08:06.996720 systemd[1]: Started cri-containerd-ddc75dc21ba63d1e89235f261e502dc1600003156fa41716fd68edd7e94dba92.scope. 
Dec 13 14:08:06.999552 env[1446]: time="2024-12-13T14:08:06.999170378Z" level=info msg="StartContainer for \"8f1a85cd6cd1d6e392098f804022facd68c105b5ce466565adc24db59fab3491\" returns successfully" Dec 13 14:08:07.008366 env[1446]: time="2024-12-13T14:08:07.008316528Z" level=info msg="CreateContainer within sandbox \"7bbec0d177e1b3393df7968c065eedd88bcadf469a4781bc430b2cc908994f24\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"36a9146e05215eac0a7e191091a6e7277b5bf440291c433b16f1c7ea28c83220\"" Dec 13 14:08:07.008784 env[1446]: time="2024-12-13T14:08:07.008740648Z" level=info msg="StartContainer for \"36a9146e05215eac0a7e191091a6e7277b5bf440291c433b16f1c7ea28c83220\"" Dec 13 14:08:07.039829 systemd[1]: Started cri-containerd-36a9146e05215eac0a7e191091a6e7277b5bf440291c433b16f1c7ea28c83220.scope. Dec 13 14:08:07.051802 env[1446]: time="2024-12-13T14:08:07.051742161Z" level=info msg="StartContainer for \"ddc75dc21ba63d1e89235f261e502dc1600003156fa41716fd68edd7e94dba92\" returns successfully" Dec 13 14:08:07.085474 env[1446]: time="2024-12-13T14:08:07.085420284Z" level=info msg="StartContainer for \"36a9146e05215eac0a7e191091a6e7277b5bf440291c433b16f1c7ea28c83220\" returns successfully" Dec 13 14:08:07.673226 systemd[1]: run-containerd-runc-k8s.io-2b04b9d40f835bde9cfed9b69f4c1000d95187431f91529f1185cf0362cbb427-runc.wgYoVr.mount: Deactivated successfully. 
Dec 13 14:08:07.705618 kubelet[2180]: I1213 14:08:07.705581 2180 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:09.107017 kubelet[2180]: I1213 14:08:09.106963 2180 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:09.178355 kubelet[2180]: E1213 14:08:09.178304 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Dec 13 14:08:09.563409 kubelet[2180]: I1213 14:08:09.563367 2180 apiserver.go:52] "Watching apiserver" Dec 13 14:08:09.587827 kubelet[2180]: I1213 14:08:09.587781 2180 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 14:08:11.112197 kubelet[2180]: W1213 14:08:11.112161 2180 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:08:11.405530 systemd[1]: Reloading. Dec 13 14:08:11.471490 /usr/lib/systemd/system-generators/torcx-generator[2475]: time="2024-12-13T14:08:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:08:11.471520 /usr/lib/systemd/system-generators/torcx-generator[2475]: time="2024-12-13T14:08:11Z" level=info msg="torcx already run" Dec 13 14:08:11.548824 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:08:11.548841 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 13 14:08:11.564558 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:08:11.683837 kubelet[2180]: I1213 14:08:11.683691 2180 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:08:11.685983 systemd[1]: Stopping kubelet.service... Dec 13 14:08:11.705508 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:08:11.705700 systemd[1]: Stopped kubelet.service. Dec 13 14:08:11.705749 systemd[1]: kubelet.service: Consumed 1.909s CPU time. Dec 13 14:08:11.707579 systemd[1]: Starting kubelet.service... Dec 13 14:08:11.785813 systemd[1]: Started kubelet.service. Dec 13 14:08:11.854399 kubelet[2539]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:08:11.854399 kubelet[2539]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:08:11.854399 kubelet[2539]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 14:08:11.854739 kubelet[2539]: I1213 14:08:11.854439 2539 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:08:11.861048 kubelet[2539]: I1213 14:08:11.861017 2539 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 14:08:11.861233 kubelet[2539]: I1213 14:08:11.861221 2539 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:08:11.861492 kubelet[2539]: I1213 14:08:11.861477 2539 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 14:08:11.862872 kubelet[2539]: I1213 14:08:11.862855 2539 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 14:08:11.864111 kubelet[2539]: I1213 14:08:11.864084 2539 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:08:11.869654 kubelet[2539]: I1213 14:08:11.869632 2539 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:08:11.869844 kubelet[2539]: I1213 14:08:11.869818 2539 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:08:11.870000 kubelet[2539]: I1213 14:08:11.869845 2539 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.6-a-fa37d69d59","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:08:11.870079 kubelet[2539]: I1213 14:08:11.870003 2539 topology_manager.go:138] "Creating topology manager with none policy" Dec 
13 14:08:11.870079 kubelet[2539]: I1213 14:08:11.870012 2539 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:08:11.870079 kubelet[2539]: I1213 14:08:11.870044 2539 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:08:11.870235 kubelet[2539]: I1213 14:08:11.870134 2539 kubelet.go:400] "Attempting to sync node with API server" Dec 13 14:08:11.870235 kubelet[2539]: I1213 14:08:11.870144 2539 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:08:11.870235 kubelet[2539]: I1213 14:08:11.870171 2539 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:08:11.870235 kubelet[2539]: I1213 14:08:11.870186 2539 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:08:11.876232 kubelet[2539]: I1213 14:08:11.875038 2539 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:08:11.876232 kubelet[2539]: I1213 14:08:11.875181 2539 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:08:11.876232 kubelet[2539]: I1213 14:08:11.875772 2539 server.go:1264] "Started kubelet" Dec 13 14:08:11.882428 kubelet[2539]: I1213 14:08:11.881987 2539 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:08:11.892049 kubelet[2539]: E1213 14:08:11.892031 2539 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:08:11.895005 kubelet[2539]: I1213 14:08:11.894972 2539 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:08:11.896894 kubelet[2539]: I1213 14:08:11.896878 2539 server.go:455] "Adding debug handlers to kubelet server" Dec 13 14:08:11.898673 kubelet[2539]: I1213 14:08:11.898620 2539 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:08:11.898953 kubelet[2539]: I1213 14:08:11.898937 2539 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:08:11.900187 kubelet[2539]: I1213 14:08:11.900153 2539 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:08:11.900641 kubelet[2539]: I1213 14:08:11.900606 2539 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 14:08:11.904802 kubelet[2539]: I1213 14:08:11.904786 2539 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:08:11.916414 kubelet[2539]: I1213 14:08:11.916389 2539 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:08:11.916414 kubelet[2539]: I1213 14:08:11.916408 2539 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:08:11.916551 kubelet[2539]: I1213 14:08:11.916480 2539 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:08:11.918369 kubelet[2539]: I1213 14:08:11.918341 2539 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:08:11.922392 kubelet[2539]: I1213 14:08:11.922374 2539 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:08:11.922500 kubelet[2539]: I1213 14:08:11.922489 2539 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:08:11.922569 kubelet[2539]: I1213 14:08:11.922560 2539 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 14:08:11.922665 kubelet[2539]: E1213 14:08:11.922648 2539 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:08:11.957124 kubelet[2539]: I1213 14:08:11.956985 2539 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:08:11.957265 kubelet[2539]: I1213 14:08:11.957249 2539 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:08:11.957327 kubelet[2539]: I1213 14:08:11.957318 2539 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:08:11.957511 kubelet[2539]: I1213 14:08:11.957498 2539 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 14:08:11.957591 kubelet[2539]: I1213 14:08:11.957568 2539 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 14:08:11.957644 kubelet[2539]: I1213 14:08:11.957635 2539 policy_none.go:49] "None policy: Start" Dec 13 14:08:11.958650 kubelet[2539]: I1213 14:08:11.958625 2539 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:08:11.958781 kubelet[2539]: I1213 14:08:11.958657 2539 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:08:11.958823 kubelet[2539]: I1213 14:08:11.958798 2539 state_mem.go:75] "Updated machine memory state" Dec 13 14:08:11.962301 kubelet[2539]: I1213 14:08:11.962271 2539 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:08:11.962461 kubelet[2539]: I1213 14:08:11.962424 2539 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:08:11.962532 kubelet[2539]: I1213 14:08:11.962518 2539 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:08:12.004051 kubelet[2539]: I1213 14:08:12.004020 2539 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:12.015138 kubelet[2539]: I1213 14:08:12.015114 2539 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:12.015365 kubelet[2539]: I1213 14:08:12.015345 2539 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:12.022874 kubelet[2539]: I1213 14:08:12.022836 2539 topology_manager.go:215] "Topology Admit Handler" podUID="75b16e7d565e0c7957c13d977c28a8b4" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:12.023103 kubelet[2539]: I1213 14:08:12.023088 2539 topology_manager.go:215] "Topology Admit Handler" podUID="4c48722eb6ac6bbc7768def9a1842faf" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:12.023217 kubelet[2539]: I1213 14:08:12.023203 2539 topology_manager.go:215] "Topology Admit Handler" podUID="8d0f2920eb40609df6b349af78c5b831" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:12.033624 kubelet[2539]: W1213 14:08:12.033600 2539 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:08:12.039697 kubelet[2539]: W1213 14:08:12.039661 2539 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:08:12.039977 kubelet[2539]: W1213 14:08:12.039954 2539 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:08:12.040105 kubelet[2539]: E1213 14:08:12.040089 2539 kubelet.go:1928] "Failed 
creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-fa37d69d59\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:12.106443 kubelet[2539]: I1213 14:08:12.106411 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c48722eb6ac6bbc7768def9a1842faf-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-fa37d69d59\" (UID: \"4c48722eb6ac6bbc7768def9a1842faf\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:12.106685 kubelet[2539]: I1213 14:08:12.106668 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c48722eb6ac6bbc7768def9a1842faf-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-fa37d69d59\" (UID: \"4c48722eb6ac6bbc7768def9a1842faf\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:12.106832 kubelet[2539]: I1213 14:08:12.106816 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c48722eb6ac6bbc7768def9a1842faf-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-fa37d69d59\" (UID: \"4c48722eb6ac6bbc7768def9a1842faf\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:12.106958 kubelet[2539]: I1213 14:08:12.106944 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75b16e7d565e0c7957c13d977c28a8b4-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-fa37d69d59\" (UID: \"75b16e7d565e0c7957c13d977c28a8b4\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:12.107071 kubelet[2539]: I1213 14:08:12.107059 2539 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75b16e7d565e0c7957c13d977c28a8b4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-fa37d69d59\" (UID: \"75b16e7d565e0c7957c13d977c28a8b4\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:12.107171 kubelet[2539]: I1213 14:08:12.107160 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c48722eb6ac6bbc7768def9a1842faf-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-fa37d69d59\" (UID: \"4c48722eb6ac6bbc7768def9a1842faf\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:12.107302 kubelet[2539]: I1213 14:08:12.107267 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4c48722eb6ac6bbc7768def9a1842faf-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-fa37d69d59\" (UID: \"4c48722eb6ac6bbc7768def9a1842faf\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:12.107448 kubelet[2539]: I1213 14:08:12.107429 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d0f2920eb40609df6b349af78c5b831-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-fa37d69d59\" (UID: \"8d0f2920eb40609df6b349af78c5b831\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-fa37d69d59" Dec 13 14:08:12.107574 kubelet[2539]: I1213 14:08:12.107552 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75b16e7d565e0c7957c13d977c28a8b4-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-fa37d69d59\" (UID: \"75b16e7d565e0c7957c13d977c28a8b4\") " 
pod="kube-system/kube-apiserver-ci-3510.3.6-a-fa37d69d59"
Dec 13 14:08:12.417981 sudo[2569]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 13 14:08:12.418534 sudo[2569]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 13 14:08:12.871785 kubelet[2539]: I1213 14:08:12.871748 2539 apiserver.go:52] "Watching apiserver"
Dec 13 14:08:12.901379 kubelet[2539]: I1213 14:08:12.901337 2539 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 14:08:12.933791 sudo[2569]: pam_unix(sudo:session): session closed for user root
Dec 13 14:08:12.956411 kubelet[2539]: W1213 14:08:12.956371 2539 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 14:08:12.956553 kubelet[2539]: E1213 14:08:12.956435 2539 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-fa37d69d59\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.6-a-fa37d69d59"
Dec 13 14:08:12.968266 kubelet[2539]: I1213 14:08:12.968208 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.6-a-fa37d69d59" podStartSLOduration=0.968180316 podStartE2EDuration="968.180316ms" podCreationTimestamp="2024-12-13 14:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:08:12.967958917 +0000 UTC m=+1.176394695" watchObservedRunningTime="2024-12-13 14:08:12.968180316 +0000 UTC m=+1.176616094"
Dec 13 14:08:12.978637 kubelet[2539]: I1213 14:08:12.978581 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.6-a-fa37d69d59" podStartSLOduration=1.978566506 podStartE2EDuration="1.978566506s" podCreationTimestamp="2024-12-13 14:08:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:08:12.977932146 +0000 UTC m=+1.186367924" watchObservedRunningTime="2024-12-13 14:08:12.978566506 +0000 UTC m=+1.187002284"
Dec 13 14:08:12.989973 kubelet[2539]: I1213 14:08:12.989915 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-fa37d69d59" podStartSLOduration=0.989900694 podStartE2EDuration="989.900694ms" podCreationTimestamp="2024-12-13 14:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:08:12.989675935 +0000 UTC m=+1.198111713" watchObservedRunningTime="2024-12-13 14:08:12.989900694 +0000 UTC m=+1.198336472"
Dec 13 14:08:16.149587 sudo[1823]: pam_unix(sudo:session): session closed for user root
Dec 13 14:08:16.229433 sshd[1820]: pam_unix(sshd:session): session closed for user core
Dec 13 14:08:16.231854 systemd[1]: sshd@4-10.200.20.32:22-10.200.16.10:59784.service: Deactivated successfully.
Dec 13 14:08:16.232571 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 14:08:16.232725 systemd[1]: session-7.scope: Consumed 8.891s CPU time.
Dec 13 14:08:16.233141 systemd-logind[1434]: Session 7 logged out. Waiting for processes to exit.
Dec 13 14:08:16.234112 systemd-logind[1434]: Removed session 7.
Dec 13 14:08:25.454408 kubelet[2539]: I1213 14:08:25.454372 2539 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 14:08:25.454870 env[1446]: time="2024-12-13T14:08:25.454821521Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 14:08:25.455095 kubelet[2539]: I1213 14:08:25.455059 2539 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 14:08:25.787237 kubelet[2539]: I1213 14:08:25.787093 2539 topology_manager.go:215] "Topology Admit Handler" podUID="ab7e5a73-a972-4fda-91ef-d8d476d431a5" podNamespace="kube-system" podName="kube-proxy-4rj4d"
Dec 13 14:08:25.792322 systemd[1]: Created slice kubepods-besteffort-podab7e5a73_a972_4fda_91ef_d8d476d431a5.slice.
Dec 13 14:08:25.798573 kubelet[2539]: W1213 14:08:25.798532 2539 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.6-a-fa37d69d59" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.6-a-fa37d69d59' and this object
Dec 13 14:08:25.798694 kubelet[2539]: E1213 14:08:25.798582 2539 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.6-a-fa37d69d59" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.6-a-fa37d69d59' and this object
Dec 13 14:08:25.809198 kubelet[2539]: I1213 14:08:25.809138 2539 topology_manager.go:215] "Topology Admit Handler" podUID="a74c2369-5725-4d5d-a162-08efba7c14c0" podNamespace="kube-system" podName="cilium-6ll6s"
Dec 13 14:08:25.814308 systemd[1]: Created slice kubepods-burstable-poda74c2369_5725_4d5d_a162_08efba7c14c0.slice.
Dec 13 14:08:25.868163 kubelet[2539]: I1213 14:08:25.868117 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-host-proc-sys-kernel\") pod \"cilium-6ll6s\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " pod="kube-system/cilium-6ll6s"
Dec 13 14:08:25.868163 kubelet[2539]: I1213 14:08:25.868158 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-cilium-run\") pod \"cilium-6ll6s\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " pod="kube-system/cilium-6ll6s"
Dec 13 14:08:25.868332 kubelet[2539]: I1213 14:08:25.868179 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-lib-modules\") pod \"cilium-6ll6s\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " pod="kube-system/cilium-6ll6s"
Dec 13 14:08:25.868332 kubelet[2539]: I1213 14:08:25.868195 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a74c2369-5725-4d5d-a162-08efba7c14c0-clustermesh-secrets\") pod \"cilium-6ll6s\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " pod="kube-system/cilium-6ll6s"
Dec 13 14:08:25.868332 kubelet[2539]: I1213 14:08:25.868211 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-bpf-maps\") pod \"cilium-6ll6s\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " pod="kube-system/cilium-6ll6s"
Dec 13 14:08:25.868332 kubelet[2539]: I1213 14:08:25.868226 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-xtables-lock\") pod \"cilium-6ll6s\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " pod="kube-system/cilium-6ll6s"
Dec 13 14:08:25.868332 kubelet[2539]: I1213 14:08:25.868242 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-host-proc-sys-net\") pod \"cilium-6ll6s\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " pod="kube-system/cilium-6ll6s"
Dec 13 14:08:25.868332 kubelet[2539]: I1213 14:08:25.868257 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab7e5a73-a972-4fda-91ef-d8d476d431a5-xtables-lock\") pod \"kube-proxy-4rj4d\" (UID: \"ab7e5a73-a972-4fda-91ef-d8d476d431a5\") " pod="kube-system/kube-proxy-4rj4d"
Dec 13 14:08:25.868468 kubelet[2539]: I1213 14:08:25.868272 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-etc-cni-netd\") pod \"cilium-6ll6s\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " pod="kube-system/cilium-6ll6s"
Dec 13 14:08:25.868468 kubelet[2539]: I1213 14:08:25.868288 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a74c2369-5725-4d5d-a162-08efba7c14c0-cilium-config-path\") pod \"cilium-6ll6s\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " pod="kube-system/cilium-6ll6s"
Dec 13 14:08:25.868468 kubelet[2539]: I1213 14:08:25.868303 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m5f2\" (UniqueName: \"kubernetes.io/projected/a74c2369-5725-4d5d-a162-08efba7c14c0-kube-api-access-8m5f2\") pod \"cilium-6ll6s\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " pod="kube-system/cilium-6ll6s"
Dec 13 14:08:25.868468 kubelet[2539]: I1213 14:08:25.868322 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a74c2369-5725-4d5d-a162-08efba7c14c0-hubble-tls\") pod \"cilium-6ll6s\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " pod="kube-system/cilium-6ll6s"
Dec 13 14:08:25.868468 kubelet[2539]: I1213 14:08:25.868346 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h4l9\" (UniqueName: \"kubernetes.io/projected/ab7e5a73-a972-4fda-91ef-d8d476d431a5-kube-api-access-8h4l9\") pod \"kube-proxy-4rj4d\" (UID: \"ab7e5a73-a972-4fda-91ef-d8d476d431a5\") " pod="kube-system/kube-proxy-4rj4d"
Dec 13 14:08:25.868575 kubelet[2539]: I1213 14:08:25.868362 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab7e5a73-a972-4fda-91ef-d8d476d431a5-lib-modules\") pod \"kube-proxy-4rj4d\" (UID: \"ab7e5a73-a972-4fda-91ef-d8d476d431a5\") " pod="kube-system/kube-proxy-4rj4d"
Dec 13 14:08:25.868575 kubelet[2539]: I1213 14:08:25.868376 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-hostproc\") pod \"cilium-6ll6s\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " pod="kube-system/cilium-6ll6s"
Dec 13 14:08:25.868575 kubelet[2539]: I1213 14:08:25.868393 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ab7e5a73-a972-4fda-91ef-d8d476d431a5-kube-proxy\") pod \"kube-proxy-4rj4d\" (UID: \"ab7e5a73-a972-4fda-91ef-d8d476d431a5\") " pod="kube-system/kube-proxy-4rj4d"
Dec 13 14:08:25.868575 kubelet[2539]: I1213 14:08:25.868409 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-cilium-cgroup\") pod \"cilium-6ll6s\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " pod="kube-system/cilium-6ll6s"
Dec 13 14:08:25.868575 kubelet[2539]: I1213 14:08:25.868424 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-cni-path\") pod \"cilium-6ll6s\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " pod="kube-system/cilium-6ll6s"
Dec 13 14:08:25.985348 kubelet[2539]: E1213 14:08:25.985320 2539 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 13 14:08:25.985510 kubelet[2539]: E1213 14:08:25.985498 2539 projected.go:200] Error preparing data for projected volume kube-api-access-8h4l9 for pod kube-system/kube-proxy-4rj4d: configmap "kube-root-ca.crt" not found
Dec 13 14:08:25.985662 kubelet[2539]: E1213 14:08:25.985637 2539 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab7e5a73-a972-4fda-91ef-d8d476d431a5-kube-api-access-8h4l9 podName:ab7e5a73-a972-4fda-91ef-d8d476d431a5 nodeName:}" failed. No retries permitted until 2024-12-13 14:08:26.48561688 +0000 UTC m=+14.694052658 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8h4l9" (UniqueName: "kubernetes.io/projected/ab7e5a73-a972-4fda-91ef-d8d476d431a5-kube-api-access-8h4l9") pod "kube-proxy-4rj4d" (UID: "ab7e5a73-a972-4fda-91ef-d8d476d431a5") : configmap "kube-root-ca.crt" not found
Dec 13 14:08:25.986074 kubelet[2539]: E1213 14:08:25.986047 2539 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 13 14:08:25.986174 kubelet[2539]: E1213 14:08:25.986163 2539 projected.go:200] Error preparing data for projected volume kube-api-access-8m5f2 for pod kube-system/cilium-6ll6s: configmap "kube-root-ca.crt" not found
Dec 13 14:08:25.986279 kubelet[2539]: E1213 14:08:25.986269 2539 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a74c2369-5725-4d5d-a162-08efba7c14c0-kube-api-access-8m5f2 podName:a74c2369-5725-4d5d-a162-08efba7c14c0 nodeName:}" failed. No retries permitted until 2024-12-13 14:08:26.4862506 +0000 UTC m=+14.694686338 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8m5f2" (UniqueName: "kubernetes.io/projected/a74c2369-5725-4d5d-a162-08efba7c14c0-kube-api-access-8m5f2") pod "cilium-6ll6s" (UID: "a74c2369-5725-4d5d-a162-08efba7c14c0") : configmap "kube-root-ca.crt" not found
Dec 13 14:08:26.479629 kubelet[2539]: I1213 14:08:26.479572 2539 topology_manager.go:215] "Topology Admit Handler" podUID="f79bd496-764f-48c8-859e-4b276f114fa8" podNamespace="kube-system" podName="cilium-operator-599987898-gvh7s"
Dec 13 14:08:26.486014 systemd[1]: Created slice kubepods-besteffort-podf79bd496_764f_48c8_859e_4b276f114fa8.slice.
Dec 13 14:08:26.573424 kubelet[2539]: I1213 14:08:26.573385 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6xcs\" (UniqueName: \"kubernetes.io/projected/f79bd496-764f-48c8-859e-4b276f114fa8-kube-api-access-q6xcs\") pod \"cilium-operator-599987898-gvh7s\" (UID: \"f79bd496-764f-48c8-859e-4b276f114fa8\") " pod="kube-system/cilium-operator-599987898-gvh7s"
Dec 13 14:08:26.573576 kubelet[2539]: I1213 14:08:26.573444 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f79bd496-764f-48c8-859e-4b276f114fa8-cilium-config-path\") pod \"cilium-operator-599987898-gvh7s\" (UID: \"f79bd496-764f-48c8-859e-4b276f114fa8\") " pod="kube-system/cilium-operator-599987898-gvh7s"
Dec 13 14:08:26.718334 env[1446]: time="2024-12-13T14:08:26.717934839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6ll6s,Uid:a74c2369-5725-4d5d-a162-08efba7c14c0,Namespace:kube-system,Attempt:0,}"
Dec 13 14:08:26.754217 env[1446]: time="2024-12-13T14:08:26.753719449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:08:26.754368 env[1446]: time="2024-12-13T14:08:26.753788049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:08:26.754368 env[1446]: time="2024-12-13T14:08:26.753799089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:08:26.754459 env[1446]: time="2024-12-13T14:08:26.754410569Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9bfe4064e50c3255c50586b92b0e7eeb61382f60e74d9921c0f5616baed6faa pid=2619 runtime=io.containerd.runc.v2
Dec 13 14:08:26.764518 systemd[1]: Started cri-containerd-d9bfe4064e50c3255c50586b92b0e7eeb61382f60e74d9921c0f5616baed6faa.scope.
Dec 13 14:08:26.785310 env[1446]: time="2024-12-13T14:08:26.785255104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6ll6s,Uid:a74c2369-5725-4d5d-a162-08efba7c14c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9bfe4064e50c3255c50586b92b0e7eeb61382f60e74d9921c0f5616baed6faa\""
Dec 13 14:08:26.787840 env[1446]: time="2024-12-13T14:08:26.787199222Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 14:08:26.789511 env[1446]: time="2024-12-13T14:08:26.789484340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gvh7s,Uid:f79bd496-764f-48c8-859e-4b276f114fa8,Namespace:kube-system,Attempt:0,}"
Dec 13 14:08:26.823112 env[1446]: time="2024-12-13T14:08:26.823038913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:08:26.823317 env[1446]: time="2024-12-13T14:08:26.823082913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:08:26.823317 env[1446]: time="2024-12-13T14:08:26.823095793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:08:26.823317 env[1446]: time="2024-12-13T14:08:26.823250712Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/21e5189e7cd2904cc01f66d6bf7bb8f694a8f03d80f3e63158ed515fbc67b4f6 pid=2659 runtime=io.containerd.runc.v2
Dec 13 14:08:26.833593 systemd[1]: Started cri-containerd-21e5189e7cd2904cc01f66d6bf7bb8f694a8f03d80f3e63158ed515fbc67b4f6.scope.
Dec 13 14:08:26.862716 env[1446]: time="2024-12-13T14:08:26.862666400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gvh7s,Uid:f79bd496-764f-48c8-859e-4b276f114fa8,Namespace:kube-system,Attempt:0,} returns sandbox id \"21e5189e7cd2904cc01f66d6bf7bb8f694a8f03d80f3e63158ed515fbc67b4f6\""
Dec 13 14:08:26.969279 kubelet[2539]: E1213 14:08:26.969249 2539 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Dec 13 14:08:26.969410 kubelet[2539]: E1213 14:08:26.969324 2539 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab7e5a73-a972-4fda-91ef-d8d476d431a5-kube-proxy podName:ab7e5a73-a972-4fda-91ef-d8d476d431a5 nodeName:}" failed. No retries permitted until 2024-12-13 14:08:27.469306513 +0000 UTC m=+15.677742291 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/ab7e5a73-a972-4fda-91ef-d8d476d431a5-kube-proxy") pod "kube-proxy-4rj4d" (UID: "ab7e5a73-a972-4fda-91ef-d8d476d431a5") : failed to sync configmap cache: timed out waiting for the condition
Dec 13 14:08:27.599162 env[1446]: time="2024-12-13T14:08:27.599116442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4rj4d,Uid:ab7e5a73-a972-4fda-91ef-d8d476d431a5,Namespace:kube-system,Attempt:0,}"
Dec 13 14:08:27.643060 env[1446]: time="2024-12-13T14:08:27.642991006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:08:27.643191 env[1446]: time="2024-12-13T14:08:27.643065926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:08:27.643191 env[1446]: time="2024-12-13T14:08:27.643092406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:08:27.643336 env[1446]: time="2024-12-13T14:08:27.643305646Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/edcfb8a918398901f0954de2e1779bfecc1a5686e513554f753613abcb24bc89 pid=2699 runtime=io.containerd.runc.v2
Dec 13 14:08:27.663889 systemd[1]: Started cri-containerd-edcfb8a918398901f0954de2e1779bfecc1a5686e513554f753613abcb24bc89.scope.
Dec 13 14:08:27.685486 env[1446]: time="2024-12-13T14:08:27.685437092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4rj4d,Uid:ab7e5a73-a972-4fda-91ef-d8d476d431a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"edcfb8a918398901f0954de2e1779bfecc1a5686e513554f753613abcb24bc89\""
Dec 13 14:08:27.689832 env[1446]: time="2024-12-13T14:08:27.689734048Z" level=info msg="CreateContainer within sandbox \"edcfb8a918398901f0954de2e1779bfecc1a5686e513554f753613abcb24bc89\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 14:08:27.737514 env[1446]: time="2024-12-13T14:08:27.737464730Z" level=info msg="CreateContainer within sandbox \"edcfb8a918398901f0954de2e1779bfecc1a5686e513554f753613abcb24bc89\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e0f1af564f85a1be845b0b97d71074329837915853dd00464356c3481e169502\""
Dec 13 14:08:27.739517 env[1446]: time="2024-12-13T14:08:27.739388128Z" level=info msg="StartContainer for \"e0f1af564f85a1be845b0b97d71074329837915853dd00464356c3481e169502\""
Dec 13 14:08:27.757009 systemd[1]: Started cri-containerd-e0f1af564f85a1be845b0b97d71074329837915853dd00464356c3481e169502.scope.
Dec 13 14:08:27.797397 env[1446]: time="2024-12-13T14:08:27.797335041Z" level=info msg="StartContainer for \"e0f1af564f85a1be845b0b97d71074329837915853dd00464356c3481e169502\" returns successfully"
Dec 13 14:08:27.987508 systemd[1]: run-containerd-runc-k8s.io-edcfb8a918398901f0954de2e1779bfecc1a5686e513554f753613abcb24bc89-runc.9BhHyc.mount: Deactivated successfully.
Dec 13 14:08:31.570197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3032751082.mount: Deactivated successfully.
Dec 13 14:08:31.937802 kubelet[2539]: I1213 14:08:31.937453 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4rj4d" podStartSLOduration=6.93743594 podStartE2EDuration="6.93743594s" podCreationTimestamp="2024-12-13 14:08:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:08:27.991403844 +0000 UTC m=+16.199839622" watchObservedRunningTime="2024-12-13 14:08:31.93743594 +0000 UTC m=+20.145871718"
Dec 13 14:08:34.112304 env[1446]: time="2024-12-13T14:08:34.112250009Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:08:34.119977 env[1446]: time="2024-12-13T14:08:34.119923964Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:08:34.125011 env[1446]: time="2024-12-13T14:08:34.124965920Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:08:34.125677 env[1446]: time="2024-12-13T14:08:34.125648399Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Dec 13 14:08:34.128342 env[1446]: time="2024-12-13T14:08:34.127617558Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 14:08:34.129342 env[1446]: time="2024-12-13T14:08:34.129301796Z" level=info msg="CreateContainer within sandbox \"d9bfe4064e50c3255c50586b92b0e7eeb61382f60e74d9921c0f5616baed6faa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:08:34.170415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3883988905.mount: Deactivated successfully.
Dec 13 14:08:34.175744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3924301766.mount: Deactivated successfully.
Dec 13 14:08:34.187664 env[1446]: time="2024-12-13T14:08:34.187621713Z" level=info msg="CreateContainer within sandbox \"d9bfe4064e50c3255c50586b92b0e7eeb61382f60e74d9921c0f5616baed6faa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"600a6fa237d35226bd413ea82e47489f31b141f4ca3c239b3d63b60d18e75ee0\""
Dec 13 14:08:34.188995 env[1446]: time="2024-12-13T14:08:34.188956832Z" level=info msg="StartContainer for \"600a6fa237d35226bd413ea82e47489f31b141f4ca3c239b3d63b60d18e75ee0\""
Dec 13 14:08:34.208455 systemd[1]: Started cri-containerd-600a6fa237d35226bd413ea82e47489f31b141f4ca3c239b3d63b60d18e75ee0.scope.
Dec 13 14:08:34.240374 env[1446]: time="2024-12-13T14:08:34.240314754Z" level=info msg="StartContainer for \"600a6fa237d35226bd413ea82e47489f31b141f4ca3c239b3d63b60d18e75ee0\" returns successfully"
Dec 13 14:08:34.244173 systemd[1]: cri-containerd-600a6fa237d35226bd413ea82e47489f31b141f4ca3c239b3d63b60d18e75ee0.scope: Deactivated successfully.
Dec 13 14:08:35.168055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-600a6fa237d35226bd413ea82e47489f31b141f4ca3c239b3d63b60d18e75ee0-rootfs.mount: Deactivated successfully.
Dec 13 14:08:35.807428 env[1446]: time="2024-12-13T14:08:35.807380030Z" level=info msg="shim disconnected" id=600a6fa237d35226bd413ea82e47489f31b141f4ca3c239b3d63b60d18e75ee0
Dec 13 14:08:35.807428 env[1446]: time="2024-12-13T14:08:35.807422070Z" level=warning msg="cleaning up after shim disconnected" id=600a6fa237d35226bd413ea82e47489f31b141f4ca3c239b3d63b60d18e75ee0 namespace=k8s.io
Dec 13 14:08:35.807861 env[1446]: time="2024-12-13T14:08:35.807443270Z" level=info msg="cleaning up dead shim"
Dec 13 14:08:35.814204 env[1446]: time="2024-12-13T14:08:35.814157385Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:08:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2939 runtime=io.containerd.runc.v2\n"
Dec 13 14:08:35.998218 env[1446]: time="2024-12-13T14:08:35.998170969Z" level=info msg="CreateContainer within sandbox \"d9bfe4064e50c3255c50586b92b0e7eeb61382f60e74d9921c0f5616baed6faa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:08:36.039101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2994024332.mount: Deactivated successfully.
Dec 13 14:08:36.047278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount300632310.mount: Deactivated successfully.
Dec 13 14:08:36.058872 env[1446]: time="2024-12-13T14:08:36.058756485Z" level=info msg="CreateContainer within sandbox \"d9bfe4064e50c3255c50586b92b0e7eeb61382f60e74d9921c0f5616baed6faa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"57b08acca32171865f9752c1d6200f9e5a4e24ac1df8fa53d9d2e3b7258a480d\""
Dec 13 14:08:36.060162 env[1446]: time="2024-12-13T14:08:36.059628484Z" level=info msg="StartContainer for \"57b08acca32171865f9752c1d6200f9e5a4e24ac1df8fa53d9d2e3b7258a480d\""
Dec 13 14:08:36.073370 systemd[1]: Started cri-containerd-57b08acca32171865f9752c1d6200f9e5a4e24ac1df8fa53d9d2e3b7258a480d.scope.
Dec 13 14:08:36.104151 env[1446]: time="2024-12-13T14:08:36.104095131Z" level=info msg="StartContainer for \"57b08acca32171865f9752c1d6200f9e5a4e24ac1df8fa53d9d2e3b7258a480d\" returns successfully"
Dec 13 14:08:36.111012 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:08:36.111201 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:08:36.111934 systemd[1]: Stopping systemd-sysctl.service...
Dec 13 14:08:36.113645 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:08:36.119589 systemd[1]: cri-containerd-57b08acca32171865f9752c1d6200f9e5a4e24ac1df8fa53d9d2e3b7258a480d.scope: Deactivated successfully.
Dec 13 14:08:36.125153 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:08:36.369578 env[1446]: time="2024-12-13T14:08:36.369536217Z" level=info msg="shim disconnected" id=57b08acca32171865f9752c1d6200f9e5a4e24ac1df8fa53d9d2e3b7258a480d
Dec 13 14:08:36.369578 env[1446]: time="2024-12-13T14:08:36.369577737Z" level=warning msg="cleaning up after shim disconnected" id=57b08acca32171865f9752c1d6200f9e5a4e24ac1df8fa53d9d2e3b7258a480d namespace=k8s.io
Dec 13 14:08:36.369800 env[1446]: time="2024-12-13T14:08:36.369587657Z" level=info msg="cleaning up dead shim"
Dec 13 14:08:36.376062 env[1446]: time="2024-12-13T14:08:36.376016453Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:08:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3001 runtime=io.containerd.runc.v2\n"
Dec 13 14:08:36.995935 env[1446]: time="2024-12-13T14:08:36.995893999Z" level=info msg="CreateContainer within sandbox \"d9bfe4064e50c3255c50586b92b0e7eeb61382f60e74d9921c0f5616baed6faa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:08:37.023171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3928657725.mount: Deactivated successfully.
Dec 13 14:08:37.029135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3246675923.mount: Deactivated successfully.
Dec 13 14:08:37.042176 env[1446]: time="2024-12-13T14:08:37.042125446Z" level=info msg="CreateContainer within sandbox \"d9bfe4064e50c3255c50586b92b0e7eeb61382f60e74d9921c0f5616baed6faa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"91d1a97b8cf9a8407937d89aeea7c6c7814223bff34dd1bf7c5f6104a393eea9\""
Dec 13 14:08:37.044005 env[1446]: time="2024-12-13T14:08:37.042608206Z" level=info msg="StartContainer for \"91d1a97b8cf9a8407937d89aeea7c6c7814223bff34dd1bf7c5f6104a393eea9\""
Dec 13 14:08:37.058398 systemd[1]: Started cri-containerd-91d1a97b8cf9a8407937d89aeea7c6c7814223bff34dd1bf7c5f6104a393eea9.scope.
Dec 13 14:08:37.087505 systemd[1]: cri-containerd-91d1a97b8cf9a8407937d89aeea7c6c7814223bff34dd1bf7c5f6104a393eea9.scope: Deactivated successfully.
Dec 13 14:08:37.089470 env[1446]: time="2024-12-13T14:08:37.089393092Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda74c2369_5725_4d5d_a162_08efba7c14c0.slice/cri-containerd-91d1a97b8cf9a8407937d89aeea7c6c7814223bff34dd1bf7c5f6104a393eea9.scope/memory.events\": no such file or directory"
Dec 13 14:08:37.094497 env[1446]: time="2024-12-13T14:08:37.094421008Z" level=info msg="StartContainer for \"91d1a97b8cf9a8407937d89aeea7c6c7814223bff34dd1bf7c5f6104a393eea9\" returns successfully"
Dec 13 14:08:37.126965 env[1446]: time="2024-12-13T14:08:37.126916905Z" level=info msg="shim disconnected" id=91d1a97b8cf9a8407937d89aeea7c6c7814223bff34dd1bf7c5f6104a393eea9
Dec 13 14:08:37.126965 env[1446]: time="2024-12-13T14:08:37.126963465Z" level=warning msg="cleaning up after shim disconnected" id=91d1a97b8cf9a8407937d89aeea7c6c7814223bff34dd1bf7c5f6104a393eea9 namespace=k8s.io
Dec 13 14:08:37.127231 env[1446]: time="2024-12-13T14:08:37.126972825Z" level=info msg="cleaning up dead shim"
Dec 13 14:08:37.134010 env[1446]: time="2024-12-13T14:08:37.133967179Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:08:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3061 runtime=io.containerd.runc.v2\n"
Dec 13 14:08:37.997983 env[1446]: time="2024-12-13T14:08:37.997845794Z" level=info msg="CreateContainer within sandbox \"d9bfe4064e50c3255c50586b92b0e7eeb61382f60e74d9921c0f5616baed6faa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:08:38.024880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount556336948.mount: Deactivated successfully.
Dec 13 14:08:38.036960 env[1446]: time="2024-12-13T14:08:38.036900726Z" level=info msg="CreateContainer within sandbox \"d9bfe4064e50c3255c50586b92b0e7eeb61382f60e74d9921c0f5616baed6faa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"164891218eab28c511dfdd315e4d0422bcc12a8c9c015d9594bfc4d459b0e534\""
Dec 13 14:08:38.037450 env[1446]: time="2024-12-13T14:08:38.037427926Z" level=info msg="StartContainer for \"164891218eab28c511dfdd315e4d0422bcc12a8c9c015d9594bfc4d459b0e534\""
Dec 13 14:08:38.052620 systemd[1]: Started cri-containerd-164891218eab28c511dfdd315e4d0422bcc12a8c9c015d9594bfc4d459b0e534.scope.
Dec 13 14:08:38.078904 systemd[1]: cri-containerd-164891218eab28c511dfdd315e4d0422bcc12a8c9c015d9594bfc4d459b0e534.scope: Deactivated successfully.
Dec 13 14:08:38.081563 env[1446]: time="2024-12-13T14:08:38.081381335Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda74c2369_5725_4d5d_a162_08efba7c14c0.slice/cri-containerd-164891218eab28c511dfdd315e4d0422bcc12a8c9c015d9594bfc4d459b0e534.scope/memory.events\": no such file or directory"
Dec 13 14:08:38.084994 env[1446]: time="2024-12-13T14:08:38.084947932Z" level=info msg="StartContainer for \"164891218eab28c511dfdd315e4d0422bcc12a8c9c015d9594bfc4d459b0e534\" returns successfully"
Dec 13 14:08:38.109525 env[1446]: time="2024-12-13T14:08:38.109470734Z" level=info msg="shim disconnected" id=164891218eab28c511dfdd315e4d0422bcc12a8c9c015d9594bfc4d459b0e534
Dec 13 14:08:38.109525 env[1446]: time="2024-12-13T14:08:38.109522314Z" level=warning msg="cleaning up after shim disconnected" id=164891218eab28c511dfdd315e4d0422bcc12a8c9c015d9594bfc4d459b0e534 namespace=k8s.io
Dec 13 14:08:38.109780 env[1446]: time="2024-12-13T14:08:38.109533434Z" level=info msg="cleaning up dead shim"
Dec 13 14:08:38.116821 env[1446]: time="2024-12-13T14:08:38.116749989Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:08:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3117 runtime=io.containerd.runc.v2\n"
Dec 13 14:08:39.003347 env[1446]: time="2024-12-13T14:08:39.002924755Z" level=info msg="CreateContainer within sandbox \"d9bfe4064e50c3255c50586b92b0e7eeb61382f60e74d9921c0f5616baed6faa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:08:39.029972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount176257403.mount: Deactivated successfully.
Dec 13 14:08:39.045904 env[1446]: time="2024-12-13T14:08:39.045856724Z" level=info msg="CreateContainer within sandbox \"d9bfe4064e50c3255c50586b92b0e7eeb61382f60e74d9921c0f5616baed6faa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116\""
Dec 13 14:08:39.046937 env[1446]: time="2024-12-13T14:08:39.046908563Z" level=info msg="StartContainer for \"c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116\""
Dec 13 14:08:39.061555 systemd[1]: Started cri-containerd-c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116.scope.
Dec 13 14:08:39.097530 env[1446]: time="2024-12-13T14:08:39.097392488Z" level=info msg="StartContainer for \"c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116\" returns successfully"
Dec 13 14:08:39.197794 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Dec 13 14:08:39.217824 kubelet[2539]: I1213 14:08:39.216897 2539 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 14:08:39.250600 kubelet[2539]: I1213 14:08:39.250562 2539 topology_manager.go:215] "Topology Admit Handler" podUID="b70779a4-2f51-42cf-918a-b7d88435a20c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qcm9p"
Dec 13 14:08:39.256063 systemd[1]: Created slice kubepods-burstable-podb70779a4_2f51_42cf_918a_b7d88435a20c.slice.
Dec 13 14:08:39.257955 kubelet[2539]: I1213 14:08:39.257928 2539 topology_manager.go:215] "Topology Admit Handler" podUID="6ce26fef-32e8-4058-a9b0-65a2482d2795" podNamespace="kube-system" podName="coredns-7db6d8ff4d-pbzmv"
Dec 13 14:08:39.263906 systemd[1]: Created slice kubepods-burstable-pod6ce26fef_32e8_4058_a9b0_65a2482d2795.slice.
Dec 13 14:08:39.442332 kubelet[2539]: I1213 14:08:39.442301 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsxv5\" (UniqueName: \"kubernetes.io/projected/b70779a4-2f51-42cf-918a-b7d88435a20c-kube-api-access-bsxv5\") pod \"coredns-7db6d8ff4d-qcm9p\" (UID: \"b70779a4-2f51-42cf-918a-b7d88435a20c\") " pod="kube-system/coredns-7db6d8ff4d-qcm9p"
Dec 13 14:08:39.442547 kubelet[2539]: I1213 14:08:39.442531 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b70779a4-2f51-42cf-918a-b7d88435a20c-config-volume\") pod \"coredns-7db6d8ff4d-qcm9p\" (UID: \"b70779a4-2f51-42cf-918a-b7d88435a20c\") " pod="kube-system/coredns-7db6d8ff4d-qcm9p"
Dec 13 14:08:39.442650 kubelet[2539]: I1213 14:08:39.442637 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thsjj\" (UniqueName: \"kubernetes.io/projected/6ce26fef-32e8-4058-a9b0-65a2482d2795-kube-api-access-thsjj\") pod \"coredns-7db6d8ff4d-pbzmv\" (UID: \"6ce26fef-32e8-4058-a9b0-65a2482d2795\") " pod="kube-system/coredns-7db6d8ff4d-pbzmv"
Dec 13 14:08:39.442746 kubelet[2539]: I1213 14:08:39.442731 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ce26fef-32e8-4058-a9b0-65a2482d2795-config-volume\") pod \"coredns-7db6d8ff4d-pbzmv\" (UID: \"6ce26fef-32e8-4058-a9b0-65a2482d2795\") " pod="kube-system/coredns-7db6d8ff4d-pbzmv"
Dec 13 14:08:39.449038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2105606078.mount: Deactivated successfully.
Dec 13 14:08:39.568308 env[1446]: time="2024-12-13T14:08:39.567754834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pbzmv,Uid:6ce26fef-32e8-4058-a9b0-65a2482d2795,Namespace:kube-system,Attempt:0,}" Dec 13 14:08:39.589785 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Dec 13 14:08:39.863073 env[1446]: time="2024-12-13T14:08:39.862966025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qcm9p,Uid:b70779a4-2f51-42cf-918a-b7d88435a20c,Namespace:kube-system,Attempt:0,}" Dec 13 14:08:40.476990 env[1446]: time="2024-12-13T14:08:40.476946152Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:40.482598 env[1446]: time="2024-12-13T14:08:40.482564628Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:40.485391 env[1446]: time="2024-12-13T14:08:40.485350747Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:40.486002 env[1446]: time="2024-12-13T14:08:40.485973066Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 14:08:40.489817 env[1446]: time="2024-12-13T14:08:40.489775063Z" level=info msg="CreateContainer within sandbox \"21e5189e7cd2904cc01f66d6bf7bb8f694a8f03d80f3e63158ed515fbc67b4f6\" for container 
&ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:08:40.514953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1097550856.mount: Deactivated successfully. Dec 13 14:08:40.520438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1542881984.mount: Deactivated successfully. Dec 13 14:08:40.527709 env[1446]: time="2024-12-13T14:08:40.527642917Z" level=info msg="CreateContainer within sandbox \"21e5189e7cd2904cc01f66d6bf7bb8f694a8f03d80f3e63158ed515fbc67b4f6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f\"" Dec 13 14:08:40.529810 env[1446]: time="2024-12-13T14:08:40.529766555Z" level=info msg="StartContainer for \"ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f\"" Dec 13 14:08:40.543960 systemd[1]: Started cri-containerd-ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f.scope. Dec 13 14:08:40.579226 env[1446]: time="2024-12-13T14:08:40.579176881Z" level=info msg="StartContainer for \"ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f\" returns successfully" Dec 13 14:08:41.031970 kubelet[2539]: I1213 14:08:41.031908 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6ll6s" podStartSLOduration=8.692045067 podStartE2EDuration="16.031891483s" podCreationTimestamp="2024-12-13 14:08:25 +0000 UTC" firstStartedPulling="2024-12-13 14:08:26.786798702 +0000 UTC m=+14.995234480" lastFinishedPulling="2024-12-13 14:08:34.126645158 +0000 UTC m=+22.335080896" observedRunningTime="2024-12-13 14:08:40.020650313 +0000 UTC m=+28.229086091" watchObservedRunningTime="2024-12-13 14:08:41.031891483 +0000 UTC m=+29.240327261" Dec 13 14:08:44.222404 systemd-networkd[1614]: cilium_host: Link UP Dec 13 14:08:44.222517 systemd-networkd[1614]: cilium_net: Link UP Dec 13 14:08:44.222520 systemd-networkd[1614]: cilium_net: Gained carrier Dec 13 14:08:44.222631 
systemd-networkd[1614]: cilium_host: Gained carrier Dec 13 14:08:44.223868 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:08:44.224743 systemd-networkd[1614]: cilium_host: Gained IPv6LL Dec 13 14:08:44.262888 systemd-networkd[1614]: cilium_net: Gained IPv6LL Dec 13 14:08:44.318782 systemd-networkd[1614]: cilium_vxlan: Link UP Dec 13 14:08:44.318789 systemd-networkd[1614]: cilium_vxlan: Gained carrier Dec 13 14:08:44.537817 kernel: NET: Registered PF_ALG protocol family Dec 13 14:08:45.150441 systemd-networkd[1614]: lxc_health: Link UP Dec 13 14:08:45.171801 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:08:45.172264 systemd-networkd[1614]: lxc_health: Gained carrier Dec 13 14:08:45.412004 systemd-networkd[1614]: lxcd7aeb356aae8: Link UP Dec 13 14:08:45.420855 kernel: eth0: renamed from tmpb25b3 Dec 13 14:08:45.436798 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:08:45.436904 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd7aeb356aae8: link becomes ready Dec 13 14:08:45.441116 systemd-networkd[1614]: lxcd7aeb356aae8: Gained carrier Dec 13 14:08:45.630978 systemd-networkd[1614]: lxcab2a843dd0a4: Link UP Dec 13 14:08:45.647813 kernel: eth0: renamed from tmpd2ffd Dec 13 14:08:45.655238 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcab2a843dd0a4: link becomes ready Dec 13 14:08:45.655425 systemd-networkd[1614]: lxcab2a843dd0a4: Gained carrier Dec 13 14:08:45.965925 systemd-networkd[1614]: cilium_vxlan: Gained IPv6LL Dec 13 14:08:46.732936 systemd-networkd[1614]: lxcd7aeb356aae8: Gained IPv6LL Dec 13 14:08:46.745618 kubelet[2539]: I1213 14:08:46.745545 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-gvh7s" podStartSLOduration=7.122416286 podStartE2EDuration="20.745527872s" podCreationTimestamp="2024-12-13 14:08:26 +0000 UTC" firstStartedPulling="2024-12-13 14:08:26.864077239 +0000 UTC m=+15.072513017" 
lastFinishedPulling="2024-12-13 14:08:40.487188825 +0000 UTC m=+28.695624603" observedRunningTime="2024-12-13 14:08:41.033268202 +0000 UTC m=+29.241703980" watchObservedRunningTime="2024-12-13 14:08:46.745527872 +0000 UTC m=+34.953963690" Dec 13 14:08:46.796940 systemd-networkd[1614]: lxc_health: Gained IPv6LL Dec 13 14:08:46.988926 systemd-networkd[1614]: lxcab2a843dd0a4: Gained IPv6LL Dec 13 14:08:49.058127 env[1446]: time="2024-12-13T14:08:49.058054230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:08:49.058557 env[1446]: time="2024-12-13T14:08:49.058526829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:08:49.058656 env[1446]: time="2024-12-13T14:08:49.058635749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:08:49.058909 env[1446]: time="2024-12-13T14:08:49.058880829Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b25b309db5e0355eec5592726b8bb24555577fdfa791cd7da94bc82e16cc1289 pid=3709 runtime=io.containerd.runc.v2 Dec 13 14:08:49.073401 systemd[1]: Started cri-containerd-b25b309db5e0355eec5592726b8bb24555577fdfa791cd7da94bc82e16cc1289.scope. Dec 13 14:08:49.078077 env[1446]: time="2024-12-13T14:08:49.078011617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:08:49.078174 env[1446]: time="2024-12-13T14:08:49.078098177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:08:49.078174 env[1446]: time="2024-12-13T14:08:49.078129897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:08:49.078316 env[1446]: time="2024-12-13T14:08:49.078274336Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d2ffd2aff40130a7994f587462249bc829d5078a7c274dea9d317f5c8b7da00c pid=3722 runtime=io.containerd.runc.v2 Dec 13 14:08:49.096627 systemd[1]: Started cri-containerd-d2ffd2aff40130a7994f587462249bc829d5078a7c274dea9d317f5c8b7da00c.scope. Dec 13 14:08:49.150723 env[1446]: time="2024-12-13T14:08:49.150671409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qcm9p,Uid:b70779a4-2f51-42cf-918a-b7d88435a20c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b25b309db5e0355eec5592726b8bb24555577fdfa791cd7da94bc82e16cc1289\"" Dec 13 14:08:49.153533 env[1446]: time="2024-12-13T14:08:49.153459248Z" level=info msg="CreateContainer within sandbox \"b25b309db5e0355eec5592726b8bb24555577fdfa791cd7da94bc82e16cc1289\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:08:49.170432 env[1446]: time="2024-12-13T14:08:49.170393677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pbzmv,Uid:6ce26fef-32e8-4058-a9b0-65a2482d2795,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2ffd2aff40130a7994f587462249bc829d5078a7c274dea9d317f5c8b7da00c\"" Dec 13 14:08:49.173247 env[1446]: time="2024-12-13T14:08:49.173214755Z" level=info msg="CreateContainer within sandbox \"d2ffd2aff40130a7994f587462249bc829d5078a7c274dea9d317f5c8b7da00c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:08:49.197810 env[1446]: time="2024-12-13T14:08:49.197742619Z" level=info msg="CreateContainer within sandbox \"b25b309db5e0355eec5592726b8bb24555577fdfa791cd7da94bc82e16cc1289\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"afa34b73678ba19df1ef6f5926d8a220d48defab3dc6ebcd41bb965dcc0684ee\"" Dec 13 14:08:49.199179 env[1446]: 
time="2024-12-13T14:08:49.199136418Z" level=info msg="StartContainer for \"afa34b73678ba19df1ef6f5926d8a220d48defab3dc6ebcd41bb965dcc0684ee\"" Dec 13 14:08:49.223797 systemd[1]: Started cri-containerd-afa34b73678ba19df1ef6f5926d8a220d48defab3dc6ebcd41bb965dcc0684ee.scope. Dec 13 14:08:49.226391 env[1446]: time="2024-12-13T14:08:49.226349880Z" level=info msg="CreateContainer within sandbox \"d2ffd2aff40130a7994f587462249bc829d5078a7c274dea9d317f5c8b7da00c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f89f51956a0a14e67b9a5abcd09af72bb55db4c88d2787195678101b7ae48b3\"" Dec 13 14:08:49.226943 env[1446]: time="2024-12-13T14:08:49.226820320Z" level=info msg="StartContainer for \"5f89f51956a0a14e67b9a5abcd09af72bb55db4c88d2787195678101b7ae48b3\"" Dec 13 14:08:49.258062 systemd[1]: Started cri-containerd-5f89f51956a0a14e67b9a5abcd09af72bb55db4c88d2787195678101b7ae48b3.scope. Dec 13 14:08:49.287541 env[1446]: time="2024-12-13T14:08:49.287442721Z" level=info msg="StartContainer for \"afa34b73678ba19df1ef6f5926d8a220d48defab3dc6ebcd41bb965dcc0684ee\" returns successfully" Dec 13 14:08:49.317572 env[1446]: time="2024-12-13T14:08:49.317474901Z" level=info msg="StartContainer for \"5f89f51956a0a14e67b9a5abcd09af72bb55db4c88d2787195678101b7ae48b3\" returns successfully" Dec 13 14:08:50.038034 kubelet[2539]: I1213 14:08:50.037972 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-pbzmv" podStartSLOduration=24.037955113 podStartE2EDuration="24.037955113s" podCreationTimestamp="2024-12-13 14:08:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:08:50.037953833 +0000 UTC m=+38.246389611" watchObservedRunningTime="2024-12-13 14:08:50.037955113 +0000 UTC m=+38.246390851" Dec 13 14:08:50.052669 kubelet[2539]: I1213 14:08:50.052607 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-7db6d8ff4d-qcm9p" podStartSLOduration=24.052588904 podStartE2EDuration="24.052588904s" podCreationTimestamp="2024-12-13 14:08:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:08:50.051310824 +0000 UTC m=+38.259746602" watchObservedRunningTime="2024-12-13 14:08:50.052588904 +0000 UTC m=+38.261024682" Dec 13 14:08:50.063296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2113114043.mount: Deactivated successfully. Dec 13 14:10:37.725977 systemd[1]: Started sshd@5-10.200.20.32:22-10.200.16.10:59760.service. Dec 13 14:10:38.141517 sshd[3887]: Accepted publickey for core from 10.200.16.10 port 59760 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:38.143246 sshd[3887]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:38.147642 systemd[1]: Started session-8.scope. Dec 13 14:10:38.147963 systemd-logind[1434]: New session 8 of user core. Dec 13 14:10:38.526012 sshd[3887]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:38.529031 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:10:38.529615 systemd-logind[1434]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:10:38.529728 systemd[1]: sshd@5-10.200.20.32:22-10.200.16.10:59760.service: Deactivated successfully. Dec 13 14:10:38.531017 systemd-logind[1434]: Removed session 8. Dec 13 14:10:43.598066 systemd[1]: Started sshd@6-10.200.20.32:22-10.200.16.10:56744.service. Dec 13 14:10:44.009513 sshd[3899]: Accepted publickey for core from 10.200.16.10 port 56744 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:44.010476 sshd[3899]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:44.015167 systemd[1]: Started session-9.scope. Dec 13 14:10:44.015672 systemd-logind[1434]: New session 9 of user core. 
Dec 13 14:10:44.386936 sshd[3899]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:44.389392 systemd[1]: sshd@6-10.200.20.32:22-10.200.16.10:56744.service: Deactivated successfully. Dec 13 14:10:44.390159 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:10:44.390702 systemd-logind[1434]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:10:44.391523 systemd-logind[1434]: Removed session 9. Dec 13 14:10:49.455496 systemd[1]: Started sshd@7-10.200.20.32:22-10.200.16.10:53638.service. Dec 13 14:10:49.857224 sshd[3912]: Accepted publickey for core from 10.200.16.10 port 53638 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:49.858872 sshd[3912]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:49.863271 systemd[1]: Started session-10.scope. Dec 13 14:10:49.863605 systemd-logind[1434]: New session 10 of user core. Dec 13 14:10:50.234087 sshd[3912]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:50.236455 systemd[1]: sshd@7-10.200.20.32:22-10.200.16.10:53638.service: Deactivated successfully. Dec 13 14:10:50.237200 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:10:50.237788 systemd-logind[1434]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:10:50.238642 systemd-logind[1434]: Removed session 10. Dec 13 14:10:55.305396 systemd[1]: Started sshd@8-10.200.20.32:22-10.200.16.10:53640.service. Dec 13 14:10:55.726445 sshd[3926]: Accepted publickey for core from 10.200.16.10 port 53640 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:55.728124 sshd[3926]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:55.732400 systemd[1]: Started session-11.scope. Dec 13 14:10:55.733639 systemd-logind[1434]: New session 11 of user core. 
Dec 13 14:10:56.107279 sshd[3926]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:56.109946 systemd[1]: sshd@8-10.200.20.32:22-10.200.16.10:53640.service: Deactivated successfully. Dec 13 14:10:56.110663 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 14:10:56.111341 systemd-logind[1434]: Session 11 logged out. Waiting for processes to exit. Dec 13 14:10:56.112089 systemd-logind[1434]: Removed session 11. Dec 13 14:10:56.174776 systemd[1]: Started sshd@9-10.200.20.32:22-10.200.16.10:53642.service. Dec 13 14:10:56.576537 sshd[3939]: Accepted publickey for core from 10.200.16.10 port 53642 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:56.578225 sshd[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:56.582623 systemd[1]: Started session-12.scope. Dec 13 14:10:56.583992 systemd-logind[1434]: New session 12 of user core. Dec 13 14:10:56.996336 sshd[3939]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:56.999267 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 14:10:56.999271 systemd-logind[1434]: Session 12 logged out. Waiting for processes to exit. Dec 13 14:10:56.999889 systemd[1]: sshd@9-10.200.20.32:22-10.200.16.10:53642.service: Deactivated successfully. Dec 13 14:10:57.000985 systemd-logind[1434]: Removed session 12. Dec 13 14:10:57.067122 systemd[1]: Started sshd@10-10.200.20.32:22-10.200.16.10:53658.service. Dec 13 14:10:57.490075 sshd[3949]: Accepted publickey for core from 10.200.16.10 port 53658 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:57.491778 sshd[3949]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:57.495818 systemd-logind[1434]: New session 13 of user core. Dec 13 14:10:57.496316 systemd[1]: Started session-13.scope. 
Dec 13 14:10:57.872690 sshd[3949]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:57.875372 systemd-logind[1434]: Session 13 logged out. Waiting for processes to exit. Dec 13 14:10:57.876105 systemd[1]: sshd@10-10.200.20.32:22-10.200.16.10:53658.service: Deactivated successfully. Dec 13 14:10:57.876795 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 14:10:57.877990 systemd-logind[1434]: Removed session 13. Dec 13 14:11:02.942665 systemd[1]: Started sshd@11-10.200.20.32:22-10.200.16.10:43538.service. Dec 13 14:11:03.353626 sshd[3964]: Accepted publickey for core from 10.200.16.10 port 43538 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:03.354993 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:03.360152 systemd[1]: Started session-14.scope. Dec 13 14:11:03.361431 systemd-logind[1434]: New session 14 of user core. Dec 13 14:11:03.724712 sshd[3964]: pam_unix(sshd:session): session closed for user core Dec 13 14:11:03.727514 systemd-logind[1434]: Session 14 logged out. Waiting for processes to exit. Dec 13 14:11:03.728820 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 14:11:03.729492 systemd[1]: sshd@11-10.200.20.32:22-10.200.16.10:43538.service: Deactivated successfully. Dec 13 14:11:03.730589 systemd-logind[1434]: Removed session 14. Dec 13 14:11:08.796798 systemd[1]: Started sshd@12-10.200.20.32:22-10.200.16.10:45828.service. Dec 13 14:11:09.211151 sshd[3979]: Accepted publickey for core from 10.200.16.10 port 45828 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:09.212788 sshd[3979]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:09.217519 systemd[1]: Started session-15.scope. Dec 13 14:11:09.218836 systemd-logind[1434]: New session 15 of user core. 
Dec 13 14:11:09.591240 sshd[3979]: pam_unix(sshd:session): session closed for user core Dec 13 14:11:09.594199 systemd[1]: sshd@12-10.200.20.32:22-10.200.16.10:45828.service: Deactivated successfully. Dec 13 14:11:09.594991 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:11:09.595690 systemd-logind[1434]: Session 15 logged out. Waiting for processes to exit. Dec 13 14:11:09.596484 systemd-logind[1434]: Removed session 15. Dec 13 14:11:09.660845 systemd[1]: Started sshd@13-10.200.20.32:22-10.200.16.10:45834.service. Dec 13 14:11:10.072734 sshd[3991]: Accepted publickey for core from 10.200.16.10 port 45834 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:10.074993 sshd[3991]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:10.079413 systemd[1]: Started session-16.scope. Dec 13 14:11:10.080212 systemd-logind[1434]: New session 16 of user core. Dec 13 14:11:10.474435 sshd[3991]: pam_unix(sshd:session): session closed for user core Dec 13 14:11:10.477049 systemd[1]: sshd@13-10.200.20.32:22-10.200.16.10:45834.service: Deactivated successfully. Dec 13 14:11:10.477839 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 14:11:10.478419 systemd-logind[1434]: Session 16 logged out. Waiting for processes to exit. Dec 13 14:11:10.479464 systemd-logind[1434]: Removed session 16. Dec 13 14:11:10.544677 systemd[1]: Started sshd@14-10.200.20.32:22-10.200.16.10:45850.service. Dec 13 14:11:10.954089 sshd[4000]: Accepted publickey for core from 10.200.16.10 port 45850 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:10.955734 sshd[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:10.960356 systemd[1]: Started session-17.scope. Dec 13 14:11:10.961025 systemd-logind[1434]: New session 17 of user core. 
Dec 13 14:11:12.715722 sshd[4000]: pam_unix(sshd:session): session closed for user core Dec 13 14:11:12.718687 systemd[1]: sshd@14-10.200.20.32:22-10.200.16.10:45850.service: Deactivated successfully. Dec 13 14:11:12.719431 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 14:11:12.720137 systemd-logind[1434]: Session 17 logged out. Waiting for processes to exit. Dec 13 14:11:12.721310 systemd-logind[1434]: Removed session 17. Dec 13 14:11:12.794798 systemd[1]: Started sshd@15-10.200.20.32:22-10.200.16.10:45852.service. Dec 13 14:11:13.217232 sshd[4020]: Accepted publickey for core from 10.200.16.10 port 45852 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:13.219035 sshd[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:13.223725 systemd[1]: Started session-18.scope. Dec 13 14:11:13.224232 systemd-logind[1434]: New session 18 of user core. Dec 13 14:11:13.710159 sshd[4020]: pam_unix(sshd:session): session closed for user core Dec 13 14:11:13.712650 systemd[1]: sshd@15-10.200.20.32:22-10.200.16.10:45852.service: Deactivated successfully. Dec 13 14:11:13.713528 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 14:11:13.714090 systemd-logind[1434]: Session 18 logged out. Waiting for processes to exit. Dec 13 14:11:13.714970 systemd-logind[1434]: Removed session 18. Dec 13 14:11:13.780292 systemd[1]: Started sshd@16-10.200.20.32:22-10.200.16.10:45864.service. Dec 13 14:11:14.183205 sshd[4030]: Accepted publickey for core from 10.200.16.10 port 45864 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:14.184672 sshd[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:14.188644 systemd-logind[1434]: New session 19 of user core. Dec 13 14:11:14.189165 systemd[1]: Started session-19.scope. 
Dec 13 14:11:14.560571 sshd[4030]: pam_unix(sshd:session): session closed for user core Dec 13 14:11:14.563410 systemd-logind[1434]: Session 19 logged out. Waiting for processes to exit. Dec 13 14:11:14.563580 systemd[1]: sshd@16-10.200.20.32:22-10.200.16.10:45864.service: Deactivated successfully. Dec 13 14:11:14.564292 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 14:11:14.565014 systemd-logind[1434]: Removed session 19. Dec 13 14:11:19.629249 systemd[1]: Started sshd@17-10.200.20.32:22-10.200.16.10:39664.service. Dec 13 14:11:20.031275 sshd[4045]: Accepted publickey for core from 10.200.16.10 port 39664 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:20.033032 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:20.037342 systemd[1]: Started session-20.scope. Dec 13 14:11:20.037654 systemd-logind[1434]: New session 20 of user core. Dec 13 14:11:20.407617 sshd[4045]: pam_unix(sshd:session): session closed for user core Dec 13 14:11:20.410519 systemd[1]: sshd@17-10.200.20.32:22-10.200.16.10:39664.service: Deactivated successfully. Dec 13 14:11:20.411235 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 14:11:20.411669 systemd-logind[1434]: Session 20 logged out. Waiting for processes to exit. Dec 13 14:11:20.412351 systemd-logind[1434]: Removed session 20. Dec 13 14:11:25.476893 systemd[1]: Started sshd@18-10.200.20.32:22-10.200.16.10:39674.service. Dec 13 14:11:25.886919 sshd[4057]: Accepted publickey for core from 10.200.16.10 port 39674 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:25.888581 sshd[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:25.892831 systemd-logind[1434]: New session 21 of user core. Dec 13 14:11:25.892953 systemd[1]: Started session-21.scope. 
Dec 13 14:11:26.266480 sshd[4057]: pam_unix(sshd:session): session closed for user core Dec 13 14:11:26.269418 systemd-logind[1434]: Session 21 logged out. Waiting for processes to exit. Dec 13 14:11:26.269591 systemd[1]: sshd@18-10.200.20.32:22-10.200.16.10:39674.service: Deactivated successfully. Dec 13 14:11:26.270331 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 14:11:26.271065 systemd-logind[1434]: Removed session 21. Dec 13 14:11:31.337889 systemd[1]: Started sshd@19-10.200.20.32:22-10.200.16.10:56488.service. Dec 13 14:11:31.761542 sshd[4071]: Accepted publickey for core from 10.200.16.10 port 56488 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:31.763301 sshd[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:31.767609 systemd-logind[1434]: New session 22 of user core. Dec 13 14:11:31.768159 systemd[1]: Started session-22.scope. Dec 13 14:11:32.142736 sshd[4071]: pam_unix(sshd:session): session closed for user core Dec 13 14:11:32.145378 systemd[1]: sshd@19-10.200.20.32:22-10.200.16.10:56488.service: Deactivated successfully. Dec 13 14:11:32.146121 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 14:11:32.146680 systemd-logind[1434]: Session 22 logged out. Waiting for processes to exit. Dec 13 14:11:32.147444 systemd-logind[1434]: Removed session 22. Dec 13 14:11:32.211798 systemd[1]: Started sshd@20-10.200.20.32:22-10.200.16.10:56500.service. Dec 13 14:11:32.614354 sshd[4083]: Accepted publickey for core from 10.200.16.10 port 56500 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:32.616431 sshd[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:32.620896 systemd[1]: Started session-23.scope. Dec 13 14:11:32.621548 systemd-logind[1434]: New session 23 of user core. 
Dec 13 14:11:34.593529 systemd[1]: run-containerd-runc-k8s.io-c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116-runc.VknFJ2.mount: Deactivated successfully. Dec 13 14:11:34.601233 env[1446]: time="2024-12-13T14:11:34.601183396Z" level=info msg="StopContainer for \"ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f\" with timeout 30 (s)" Dec 13 14:11:34.601936 env[1446]: time="2024-12-13T14:11:34.601907594Z" level=info msg="Stop container \"ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f\" with signal terminated" Dec 13 14:11:34.618987 env[1446]: time="2024-12-13T14:11:34.618917348Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:11:34.619743 systemd[1]: cri-containerd-ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f.scope: Deactivated successfully. Dec 13 14:11:34.626105 env[1446]: time="2024-12-13T14:11:34.626064689Z" level=info msg="StopContainer for \"c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116\" with timeout 2 (s)" Dec 13 14:11:34.626535 env[1446]: time="2024-12-13T14:11:34.626494328Z" level=info msg="Stop container \"c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116\" with signal terminated" Dec 13 14:11:34.631850 systemd-networkd[1614]: lxc_health: Link DOWN Dec 13 14:11:34.631856 systemd-networkd[1614]: lxc_health: Lost carrier Dec 13 14:11:34.643482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f-rootfs.mount: Deactivated successfully. Dec 13 14:11:34.657559 systemd[1]: cri-containerd-c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116.scope: Deactivated successfully. 
Dec 13 14:11:34.657925 systemd[1]: cri-containerd-c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116.scope: Consumed 6.293s CPU time. Dec 13 14:11:34.679992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116-rootfs.mount: Deactivated successfully. Dec 13 14:11:34.717925 env[1446]: time="2024-12-13T14:11:34.717853481Z" level=info msg="shim disconnected" id=c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116 Dec 13 14:11:34.718304 env[1446]: time="2024-12-13T14:11:34.718281640Z" level=warning msg="cleaning up after shim disconnected" id=c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116 namespace=k8s.io Dec 13 14:11:34.718400 env[1446]: time="2024-12-13T14:11:34.718386719Z" level=info msg="cleaning up dead shim" Dec 13 14:11:34.718688 env[1446]: time="2024-12-13T14:11:34.718167280Z" level=info msg="shim disconnected" id=ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f Dec 13 14:11:34.718796 env[1446]: time="2024-12-13T14:11:34.718778798Z" level=warning msg="cleaning up after shim disconnected" id=ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f namespace=k8s.io Dec 13 14:11:34.718881 env[1446]: time="2024-12-13T14:11:34.718849958Z" level=info msg="cleaning up dead shim" Dec 13 14:11:34.731193 env[1446]: time="2024-12-13T14:11:34.730893605Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4156 runtime=io.containerd.runc.v2\n" Dec 13 14:11:34.732422 env[1446]: time="2024-12-13T14:11:34.732394481Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4157 runtime=io.containerd.runc.v2\n" Dec 13 14:11:34.736938 env[1446]: time="2024-12-13T14:11:34.736886229Z" level=info msg="StopContainer for \"c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116\" returns 
successfully" Dec 13 14:11:34.737569 env[1446]: time="2024-12-13T14:11:34.737541308Z" level=info msg="StopPodSandbox for \"d9bfe4064e50c3255c50586b92b0e7eeb61382f60e74d9921c0f5616baed6faa\"" Dec 13 14:11:34.737639 env[1446]: time="2024-12-13T14:11:34.737599027Z" level=info msg="Container to stop \"600a6fa237d35226bd413ea82e47489f31b141f4ca3c239b3d63b60d18e75ee0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:11:34.737639 env[1446]: time="2024-12-13T14:11:34.737613547Z" level=info msg="Container to stop \"57b08acca32171865f9752c1d6200f9e5a4e24ac1df8fa53d9d2e3b7258a480d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:11:34.737639 env[1446]: time="2024-12-13T14:11:34.737626787Z" level=info msg="Container to stop \"91d1a97b8cf9a8407937d89aeea7c6c7814223bff34dd1bf7c5f6104a393eea9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:11:34.739503 env[1446]: time="2024-12-13T14:11:34.737638107Z" level=info msg="Container to stop \"164891218eab28c511dfdd315e4d0422bcc12a8c9c015d9594bfc4d459b0e534\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:11:34.739503 env[1446]: time="2024-12-13T14:11:34.737648027Z" level=info msg="Container to stop \"c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:11:34.739453 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d9bfe4064e50c3255c50586b92b0e7eeb61382f60e74d9921c0f5616baed6faa-shm.mount: Deactivated successfully. 
Dec 13 14:11:34.739884 env[1446]: time="2024-12-13T14:11:34.739855901Z" level=info msg="StopContainer for \"ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f\" returns successfully" Dec 13 14:11:34.740530 env[1446]: time="2024-12-13T14:11:34.740507220Z" level=info msg="StopPodSandbox for \"21e5189e7cd2904cc01f66d6bf7bb8f694a8f03d80f3e63158ed515fbc67b4f6\"" Dec 13 14:11:34.740797 env[1446]: time="2024-12-13T14:11:34.740775419Z" level=info msg="Container to stop \"ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:11:34.745601 systemd[1]: cri-containerd-d9bfe4064e50c3255c50586b92b0e7eeb61382f60e74d9921c0f5616baed6faa.scope: Deactivated successfully. Dec 13 14:11:34.747680 systemd[1]: cri-containerd-21e5189e7cd2904cc01f66d6bf7bb8f694a8f03d80f3e63158ed515fbc67b4f6.scope: Deactivated successfully. Dec 13 14:11:34.781498 env[1446]: time="2024-12-13T14:11:34.781450709Z" level=info msg="shim disconnected" id=21e5189e7cd2904cc01f66d6bf7bb8f694a8f03d80f3e63158ed515fbc67b4f6 Dec 13 14:11:34.781498 env[1446]: time="2024-12-13T14:11:34.781490629Z" level=warning msg="cleaning up after shim disconnected" id=21e5189e7cd2904cc01f66d6bf7bb8f694a8f03d80f3e63158ed515fbc67b4f6 namespace=k8s.io Dec 13 14:11:34.781498 env[1446]: time="2024-12-13T14:11:34.781498909Z" level=info msg="cleaning up dead shim" Dec 13 14:11:34.781898 env[1446]: time="2024-12-13T14:11:34.781857828Z" level=info msg="shim disconnected" id=d9bfe4064e50c3255c50586b92b0e7eeb61382f60e74d9921c0f5616baed6faa Dec 13 14:11:34.781898 env[1446]: time="2024-12-13T14:11:34.781896108Z" level=warning msg="cleaning up after shim disconnected" id=d9bfe4064e50c3255c50586b92b0e7eeb61382f60e74d9921c0f5616baed6faa namespace=k8s.io Dec 13 14:11:34.781981 env[1446]: time="2024-12-13T14:11:34.781904628Z" level=info msg="cleaning up dead shim" Dec 13 14:11:34.791669 env[1446]: time="2024-12-13T14:11:34.791622801Z" 
level=warning msg="cleanup warnings time=\"2024-12-13T14:11:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4222 runtime=io.containerd.runc.v2\n" Dec 13 14:11:34.792099 env[1446]: time="2024-12-13T14:11:34.792074000Z" level=info msg="TearDown network for sandbox \"d9bfe4064e50c3255c50586b92b0e7eeb61382f60e74d9921c0f5616baed6faa\" successfully" Dec 13 14:11:34.792228 env[1446]: time="2024-12-13T14:11:34.792208880Z" level=info msg="StopPodSandbox for \"d9bfe4064e50c3255c50586b92b0e7eeb61382f60e74d9921c0f5616baed6faa\" returns successfully" Dec 13 14:11:34.792387 env[1446]: time="2024-12-13T14:11:34.792217400Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4221 runtime=io.containerd.runc.v2\n" Dec 13 14:11:34.792865 env[1446]: time="2024-12-13T14:11:34.792608239Z" level=info msg="TearDown network for sandbox \"21e5189e7cd2904cc01f66d6bf7bb8f694a8f03d80f3e63158ed515fbc67b4f6\" successfully" Dec 13 14:11:34.792865 env[1446]: time="2024-12-13T14:11:34.792669199Z" level=info msg="StopPodSandbox for \"21e5189e7cd2904cc01f66d6bf7bb8f694a8f03d80f3e63158ed515fbc67b4f6\" returns successfully" Dec 13 14:11:34.956672 kubelet[2539]: I1213 14:11:34.956634 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a74c2369-5725-4d5d-a162-08efba7c14c0-hubble-tls\") pod \"a74c2369-5725-4d5d-a162-08efba7c14c0\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " Dec 13 14:11:34.957034 kubelet[2539]: I1213 14:11:34.956688 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-lib-modules\") pod \"a74c2369-5725-4d5d-a162-08efba7c14c0\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " Dec 13 14:11:34.957034 kubelet[2539]: I1213 14:11:34.956723 2539 operation_generator.go:887] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a74c2369-5725-4d5d-a162-08efba7c14c0" (UID: "a74c2369-5725-4d5d-a162-08efba7c14c0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:34.957034 kubelet[2539]: I1213 14:11:34.956787 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-etc-cni-netd\") pod \"a74c2369-5725-4d5d-a162-08efba7c14c0\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " Dec 13 14:11:34.957034 kubelet[2539]: I1213 14:11:34.956811 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6xcs\" (UniqueName: \"kubernetes.io/projected/f79bd496-764f-48c8-859e-4b276f114fa8-kube-api-access-q6xcs\") pod \"f79bd496-764f-48c8-859e-4b276f114fa8\" (UID: \"f79bd496-764f-48c8-859e-4b276f114fa8\") " Dec 13 14:11:34.957034 kubelet[2539]: I1213 14:11:34.956851 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a74c2369-5725-4d5d-a162-08efba7c14c0" (UID: "a74c2369-5725-4d5d-a162-08efba7c14c0"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:34.957181 kubelet[2539]: I1213 14:11:34.956832 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-host-proc-sys-kernel\") pod \"a74c2369-5725-4d5d-a162-08efba7c14c0\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " Dec 13 14:11:34.957181 kubelet[2539]: I1213 14:11:34.957145 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-cni-path\") pod \"a74c2369-5725-4d5d-a162-08efba7c14c0\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " Dec 13 14:11:34.957181 kubelet[2539]: I1213 14:11:34.957160 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-cilium-run\") pod \"a74c2369-5725-4d5d-a162-08efba7c14c0\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " Dec 13 14:11:34.957181 kubelet[2539]: I1213 14:11:34.957180 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a74c2369-5725-4d5d-a162-08efba7c14c0-clustermesh-secrets\") pod \"a74c2369-5725-4d5d-a162-08efba7c14c0\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " Dec 13 14:11:34.957270 kubelet[2539]: I1213 14:11:34.957206 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-xtables-lock\") pod \"a74c2369-5725-4d5d-a162-08efba7c14c0\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " Dec 13 14:11:34.957270 kubelet[2539]: I1213 14:11:34.957220 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-hostproc\") pod \"a74c2369-5725-4d5d-a162-08efba7c14c0\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " Dec 13 14:11:34.957270 kubelet[2539]: I1213 14:11:34.957235 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-host-proc-sys-net\") pod \"a74c2369-5725-4d5d-a162-08efba7c14c0\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " Dec 13 14:11:34.957270 kubelet[2539]: I1213 14:11:34.957252 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m5f2\" (UniqueName: \"kubernetes.io/projected/a74c2369-5725-4d5d-a162-08efba7c14c0-kube-api-access-8m5f2\") pod \"a74c2369-5725-4d5d-a162-08efba7c14c0\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " Dec 13 14:11:34.957357 kubelet[2539]: I1213 14:11:34.957289 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f79bd496-764f-48c8-859e-4b276f114fa8-cilium-config-path\") pod \"f79bd496-764f-48c8-859e-4b276f114fa8\" (UID: \"f79bd496-764f-48c8-859e-4b276f114fa8\") " Dec 13 14:11:34.957357 kubelet[2539]: I1213 14:11:34.957308 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a74c2369-5725-4d5d-a162-08efba7c14c0-cilium-config-path\") pod \"a74c2369-5725-4d5d-a162-08efba7c14c0\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " Dec 13 14:11:34.957357 kubelet[2539]: I1213 14:11:34.957322 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-cilium-cgroup\") pod \"a74c2369-5725-4d5d-a162-08efba7c14c0\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " Dec 13 14:11:34.957357 kubelet[2539]: 
I1213 14:11:34.957338 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-bpf-maps\") pod \"a74c2369-5725-4d5d-a162-08efba7c14c0\" (UID: \"a74c2369-5725-4d5d-a162-08efba7c14c0\") " Dec 13 14:11:34.957442 kubelet[2539]: I1213 14:11:34.957383 2539 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-etc-cni-netd\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:34.957442 kubelet[2539]: I1213 14:11:34.957394 2539 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-lib-modules\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:34.957442 kubelet[2539]: I1213 14:11:34.957416 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a74c2369-5725-4d5d-a162-08efba7c14c0" (UID: "a74c2369-5725-4d5d-a162-08efba7c14c0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:34.957508 kubelet[2539]: I1213 14:11:34.957445 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a74c2369-5725-4d5d-a162-08efba7c14c0" (UID: "a74c2369-5725-4d5d-a162-08efba7c14c0"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:34.957508 kubelet[2539]: I1213 14:11:34.957460 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-cni-path" (OuterVolumeSpecName: "cni-path") pod "a74c2369-5725-4d5d-a162-08efba7c14c0" (UID: "a74c2369-5725-4d5d-a162-08efba7c14c0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:34.957508 kubelet[2539]: I1213 14:11:34.957474 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a74c2369-5725-4d5d-a162-08efba7c14c0" (UID: "a74c2369-5725-4d5d-a162-08efba7c14c0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:34.958055 kubelet[2539]: I1213 14:11:34.958029 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a74c2369-5725-4d5d-a162-08efba7c14c0" (UID: "a74c2369-5725-4d5d-a162-08efba7c14c0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:34.958132 kubelet[2539]: I1213 14:11:34.958063 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-hostproc" (OuterVolumeSpecName: "hostproc") pod "a74c2369-5725-4d5d-a162-08efba7c14c0" (UID: "a74c2369-5725-4d5d-a162-08efba7c14c0"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:34.958132 kubelet[2539]: I1213 14:11:34.958092 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a74c2369-5725-4d5d-a162-08efba7c14c0" (UID: "a74c2369-5725-4d5d-a162-08efba7c14c0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:34.962247 kubelet[2539]: I1213 14:11:34.962201 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f79bd496-764f-48c8-859e-4b276f114fa8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f79bd496-764f-48c8-859e-4b276f114fa8" (UID: "f79bd496-764f-48c8-859e-4b276f114fa8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:11:34.962851 kubelet[2539]: I1213 14:11:34.962826 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a74c2369-5725-4d5d-a162-08efba7c14c0" (UID: "a74c2369-5725-4d5d-a162-08efba7c14c0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:34.963016 kubelet[2539]: I1213 14:11:34.963000 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a74c2369-5725-4d5d-a162-08efba7c14c0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a74c2369-5725-4d5d-a162-08efba7c14c0" (UID: "a74c2369-5725-4d5d-a162-08efba7c14c0"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:11:34.963967 kubelet[2539]: I1213 14:11:34.963926 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f79bd496-764f-48c8-859e-4b276f114fa8-kube-api-access-q6xcs" (OuterVolumeSpecName: "kube-api-access-q6xcs") pod "f79bd496-764f-48c8-859e-4b276f114fa8" (UID: "f79bd496-764f-48c8-859e-4b276f114fa8"). InnerVolumeSpecName "kube-api-access-q6xcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:11:34.964124 kubelet[2539]: I1213 14:11:34.964106 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a74c2369-5725-4d5d-a162-08efba7c14c0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a74c2369-5725-4d5d-a162-08efba7c14c0" (UID: "a74c2369-5725-4d5d-a162-08efba7c14c0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:11:34.964902 kubelet[2539]: I1213 14:11:34.964749 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a74c2369-5725-4d5d-a162-08efba7c14c0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a74c2369-5725-4d5d-a162-08efba7c14c0" (UID: "a74c2369-5725-4d5d-a162-08efba7c14c0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:11:34.965795 kubelet[2539]: I1213 14:11:34.965746 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a74c2369-5725-4d5d-a162-08efba7c14c0-kube-api-access-8m5f2" (OuterVolumeSpecName: "kube-api-access-8m5f2") pod "a74c2369-5725-4d5d-a162-08efba7c14c0" (UID: "a74c2369-5725-4d5d-a162-08efba7c14c0"). InnerVolumeSpecName "kube-api-access-8m5f2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:11:35.057987 kubelet[2539]: I1213 14:11:35.057955 2539 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-host-proc-sys-kernel\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:35.058151 kubelet[2539]: I1213 14:11:35.058139 2539 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-q6xcs\" (UniqueName: \"kubernetes.io/projected/f79bd496-764f-48c8-859e-4b276f114fa8-kube-api-access-q6xcs\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:35.058224 kubelet[2539]: I1213 14:11:35.058214 2539 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-cni-path\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:35.058294 kubelet[2539]: I1213 14:11:35.058281 2539 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-cilium-run\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:35.058361 kubelet[2539]: I1213 14:11:35.058351 2539 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-hostproc\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:35.058430 kubelet[2539]: I1213 14:11:35.058421 2539 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a74c2369-5725-4d5d-a162-08efba7c14c0-clustermesh-secrets\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:35.058562 kubelet[2539]: I1213 14:11:35.058477 2539 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-xtables-lock\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:35.058646 kubelet[2539]: I1213 14:11:35.058633 2539 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f79bd496-764f-48c8-859e-4b276f114fa8-cilium-config-path\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:35.058714 kubelet[2539]: I1213 14:11:35.058704 2539 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-host-proc-sys-net\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:35.058795 kubelet[2539]: I1213 14:11:35.058784 2539 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-8m5f2\" (UniqueName: \"kubernetes.io/projected/a74c2369-5725-4d5d-a162-08efba7c14c0-kube-api-access-8m5f2\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:35.058893 kubelet[2539]: I1213 14:11:35.058880 2539 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a74c2369-5725-4d5d-a162-08efba7c14c0-cilium-config-path\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:35.058960 kubelet[2539]: I1213 14:11:35.058951 2539 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-cilium-cgroup\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:35.059038 kubelet[2539]: I1213 14:11:35.059006 2539 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a74c2369-5725-4d5d-a162-08efba7c14c0-bpf-maps\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:35.059107 kubelet[2539]: I1213 14:11:35.059089 2539 reconciler_common.go:289] "Volume detached for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a74c2369-5725-4d5d-a162-08efba7c14c0-hubble-tls\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:35.318508 kubelet[2539]: I1213 14:11:35.318406 2539 scope.go:117] "RemoveContainer" containerID="ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f" Dec 13 14:11:35.321249 env[1446]: time="2024-12-13T14:11:35.321204375Z" level=info msg="RemoveContainer for \"ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f\"" Dec 13 14:11:35.331360 systemd[1]: Removed slice kubepods-besteffort-podf79bd496_764f_48c8_859e_4b276f114fa8.slice. Dec 13 14:11:35.334395 env[1446]: time="2024-12-13T14:11:35.334346780Z" level=info msg="RemoveContainer for \"ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f\" returns successfully" Dec 13 14:11:35.335277 systemd[1]: Removed slice kubepods-burstable-poda74c2369_5725_4d5d_a162_08efba7c14c0.slice. Dec 13 14:11:35.335356 systemd[1]: kubepods-burstable-poda74c2369_5725_4d5d_a162_08efba7c14c0.slice: Consumed 6.381s CPU time. 
Dec 13 14:11:35.336803 kubelet[2539]: I1213 14:11:35.336578 2539 scope.go:117] "RemoveContainer" containerID="ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f" Dec 13 14:11:35.337057 env[1446]: time="2024-12-13T14:11:35.336964613Z" level=error msg="ContainerStatus for \"ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f\": not found" Dec 13 14:11:35.337235 kubelet[2539]: E1213 14:11:35.337215 2539 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f\": not found" containerID="ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f" Dec 13 14:11:35.337426 kubelet[2539]: I1213 14:11:35.337331 2539 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f"} err="failed to get container status \"ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad5134323e28091efa35d2eadbd827fc1e6aa49859955020e08c14ce0c3c633f\": not found" Dec 13 14:11:35.337516 kubelet[2539]: I1213 14:11:35.337504 2539 scope.go:117] "RemoveContainer" containerID="c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116" Dec 13 14:11:35.340130 env[1446]: time="2024-12-13T14:11:35.339116927Z" level=info msg="RemoveContainer for \"c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116\"" Dec 13 14:11:35.353820 env[1446]: time="2024-12-13T14:11:35.353778528Z" level=info msg="RemoveContainer for \"c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116\" returns successfully" Dec 13 14:11:35.354241 kubelet[2539]: 
I1213 14:11:35.354208 2539 scope.go:117] "RemoveContainer" containerID="164891218eab28c511dfdd315e4d0422bcc12a8c9c015d9594bfc4d459b0e534" Dec 13 14:11:35.355418 env[1446]: time="2024-12-13T14:11:35.355390404Z" level=info msg="RemoveContainer for \"164891218eab28c511dfdd315e4d0422bcc12a8c9c015d9594bfc4d459b0e534\"" Dec 13 14:11:35.377361 env[1446]: time="2024-12-13T14:11:35.377314545Z" level=info msg="RemoveContainer for \"164891218eab28c511dfdd315e4d0422bcc12a8c9c015d9594bfc4d459b0e534\" returns successfully" Dec 13 14:11:35.377774 kubelet[2539]: I1213 14:11:35.377733 2539 scope.go:117] "RemoveContainer" containerID="91d1a97b8cf9a8407937d89aeea7c6c7814223bff34dd1bf7c5f6104a393eea9" Dec 13 14:11:35.378962 env[1446]: time="2024-12-13T14:11:35.378933020Z" level=info msg="RemoveContainer for \"91d1a97b8cf9a8407937d89aeea7c6c7814223bff34dd1bf7c5f6104a393eea9\"" Dec 13 14:11:35.385920 env[1446]: time="2024-12-13T14:11:35.385883442Z" level=info msg="RemoveContainer for \"91d1a97b8cf9a8407937d89aeea7c6c7814223bff34dd1bf7c5f6104a393eea9\" returns successfully" Dec 13 14:11:35.386118 kubelet[2539]: I1213 14:11:35.386094 2539 scope.go:117] "RemoveContainer" containerID="57b08acca32171865f9752c1d6200f9e5a4e24ac1df8fa53d9d2e3b7258a480d" Dec 13 14:11:35.387138 env[1446]: time="2024-12-13T14:11:35.387109078Z" level=info msg="RemoveContainer for \"57b08acca32171865f9752c1d6200f9e5a4e24ac1df8fa53d9d2e3b7258a480d\"" Dec 13 14:11:35.394875 env[1446]: time="2024-12-13T14:11:35.394838698Z" level=info msg="RemoveContainer for \"57b08acca32171865f9752c1d6200f9e5a4e24ac1df8fa53d9d2e3b7258a480d\" returns successfully" Dec 13 14:11:35.395060 kubelet[2539]: I1213 14:11:35.395036 2539 scope.go:117] "RemoveContainer" containerID="600a6fa237d35226bd413ea82e47489f31b141f4ca3c239b3d63b60d18e75ee0" Dec 13 14:11:35.396162 env[1446]: time="2024-12-13T14:11:35.396105214Z" level=info msg="RemoveContainer for \"600a6fa237d35226bd413ea82e47489f31b141f4ca3c239b3d63b60d18e75ee0\"" Dec 13 14:11:35.403976 
env[1446]: time="2024-12-13T14:11:35.403946073Z" level=info msg="RemoveContainer for \"600a6fa237d35226bd413ea82e47489f31b141f4ca3c239b3d63b60d18e75ee0\" returns successfully" Dec 13 14:11:35.404351 kubelet[2539]: I1213 14:11:35.404320 2539 scope.go:117] "RemoveContainer" containerID="c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116" Dec 13 14:11:35.404574 env[1446]: time="2024-12-13T14:11:35.404517232Z" level=error msg="ContainerStatus for \"c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116\": not found" Dec 13 14:11:35.404713 kubelet[2539]: E1213 14:11:35.404667 2539 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116\": not found" containerID="c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116" Dec 13 14:11:35.404753 kubelet[2539]: I1213 14:11:35.404717 2539 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116"} err="failed to get container status \"c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116\": rpc error: code = NotFound desc = an error occurred when try to find container \"c86241229f2544f81de40e930a2fdaaa93b294a89e1addf7b08bceeeecfa0116\": not found" Dec 13 14:11:35.404753 kubelet[2539]: I1213 14:11:35.404737 2539 scope.go:117] "RemoveContainer" containerID="164891218eab28c511dfdd315e4d0422bcc12a8c9c015d9594bfc4d459b0e534" Dec 13 14:11:35.404997 env[1446]: time="2024-12-13T14:11:35.404950711Z" level=error msg="ContainerStatus for \"164891218eab28c511dfdd315e4d0422bcc12a8c9c015d9594bfc4d459b0e534\" failed" error="rpc error: code = NotFound desc = an error 
occurred when try to find container \"164891218eab28c511dfdd315e4d0422bcc12a8c9c015d9594bfc4d459b0e534\": not found" Dec 13 14:11:35.405265 kubelet[2539]: E1213 14:11:35.405170 2539 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"164891218eab28c511dfdd315e4d0422bcc12a8c9c015d9594bfc4d459b0e534\": not found" containerID="164891218eab28c511dfdd315e4d0422bcc12a8c9c015d9594bfc4d459b0e534" Dec 13 14:11:35.405265 kubelet[2539]: I1213 14:11:35.405192 2539 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"164891218eab28c511dfdd315e4d0422bcc12a8c9c015d9594bfc4d459b0e534"} err="failed to get container status \"164891218eab28c511dfdd315e4d0422bcc12a8c9c015d9594bfc4d459b0e534\": rpc error: code = NotFound desc = an error occurred when try to find container \"164891218eab28c511dfdd315e4d0422bcc12a8c9c015d9594bfc4d459b0e534\": not found" Dec 13 14:11:35.405265 kubelet[2539]: I1213 14:11:35.405206 2539 scope.go:117] "RemoveContainer" containerID="91d1a97b8cf9a8407937d89aeea7c6c7814223bff34dd1bf7c5f6104a393eea9" Dec 13 14:11:35.405544 env[1446]: time="2024-12-13T14:11:35.405501709Z" level=error msg="ContainerStatus for \"91d1a97b8cf9a8407937d89aeea7c6c7814223bff34dd1bf7c5f6104a393eea9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"91d1a97b8cf9a8407937d89aeea7c6c7814223bff34dd1bf7c5f6104a393eea9\": not found" Dec 13 14:11:35.405717 kubelet[2539]: E1213 14:11:35.405694 2539 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"91d1a97b8cf9a8407937d89aeea7c6c7814223bff34dd1bf7c5f6104a393eea9\": not found" containerID="91d1a97b8cf9a8407937d89aeea7c6c7814223bff34dd1bf7c5f6104a393eea9" Dec 13 14:11:35.405798 kubelet[2539]: I1213 14:11:35.405719 2539 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"91d1a97b8cf9a8407937d89aeea7c6c7814223bff34dd1bf7c5f6104a393eea9"} err="failed to get container status \"91d1a97b8cf9a8407937d89aeea7c6c7814223bff34dd1bf7c5f6104a393eea9\": rpc error: code = NotFound desc = an error occurred when try to find container \"91d1a97b8cf9a8407937d89aeea7c6c7814223bff34dd1bf7c5f6104a393eea9\": not found" Dec 13 14:11:35.405798 kubelet[2539]: I1213 14:11:35.405784 2539 scope.go:117] "RemoveContainer" containerID="57b08acca32171865f9752c1d6200f9e5a4e24ac1df8fa53d9d2e3b7258a480d" Dec 13 14:11:35.406021 env[1446]: time="2024-12-13T14:11:35.405978908Z" level=error msg="ContainerStatus for \"57b08acca32171865f9752c1d6200f9e5a4e24ac1df8fa53d9d2e3b7258a480d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"57b08acca32171865f9752c1d6200f9e5a4e24ac1df8fa53d9d2e3b7258a480d\": not found" Dec 13 14:11:35.406202 kubelet[2539]: E1213 14:11:35.406175 2539 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"57b08acca32171865f9752c1d6200f9e5a4e24ac1df8fa53d9d2e3b7258a480d\": not found" containerID="57b08acca32171865f9752c1d6200f9e5a4e24ac1df8fa53d9d2e3b7258a480d" Dec 13 14:11:35.406253 kubelet[2539]: I1213 14:11:35.406200 2539 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"57b08acca32171865f9752c1d6200f9e5a4e24ac1df8fa53d9d2e3b7258a480d"} err="failed to get container status \"57b08acca32171865f9752c1d6200f9e5a4e24ac1df8fa53d9d2e3b7258a480d\": rpc error: code = NotFound desc = an error occurred when try to find container \"57b08acca32171865f9752c1d6200f9e5a4e24ac1df8fa53d9d2e3b7258a480d\": not found" Dec 13 14:11:35.406253 kubelet[2539]: I1213 14:11:35.406224 2539 scope.go:117] "RemoveContainer" containerID="600a6fa237d35226bd413ea82e47489f31b141f4ca3c239b3d63b60d18e75ee0" Dec 13 
14:11:35.406459 env[1446]: time="2024-12-13T14:11:35.406417827Z" level=error msg="ContainerStatus for \"600a6fa237d35226bd413ea82e47489f31b141f4ca3c239b3d63b60d18e75ee0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"600a6fa237d35226bd413ea82e47489f31b141f4ca3c239b3d63b60d18e75ee0\": not found" Dec 13 14:11:35.406618 kubelet[2539]: E1213 14:11:35.406599 2539 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"600a6fa237d35226bd413ea82e47489f31b141f4ca3c239b3d63b60d18e75ee0\": not found" containerID="600a6fa237d35226bd413ea82e47489f31b141f4ca3c239b3d63b60d18e75ee0" Dec 13 14:11:35.406677 kubelet[2539]: I1213 14:11:35.406620 2539 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"600a6fa237d35226bd413ea82e47489f31b141f4ca3c239b3d63b60d18e75ee0"} err="failed to get container status \"600a6fa237d35226bd413ea82e47489f31b141f4ca3c239b3d63b60d18e75ee0\": rpc error: code = NotFound desc = an error occurred when try to find container \"600a6fa237d35226bd413ea82e47489f31b141f4ca3c239b3d63b60d18e75ee0\": not found" Dec 13 14:11:35.589688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21e5189e7cd2904cc01f66d6bf7bb8f694a8f03d80f3e63158ed515fbc67b4f6-rootfs.mount: Deactivated successfully. Dec 13 14:11:35.589802 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-21e5189e7cd2904cc01f66d6bf7bb8f694a8f03d80f3e63158ed515fbc67b4f6-shm.mount: Deactivated successfully. Dec 13 14:11:35.589863 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9bfe4064e50c3255c50586b92b0e7eeb61382f60e74d9921c0f5616baed6faa-rootfs.mount: Deactivated successfully. Dec 13 14:11:35.589909 systemd[1]: var-lib-kubelet-pods-f79bd496\x2d764f\x2d48c8\x2d859e\x2d4b276f114fa8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq6xcs.mount: Deactivated successfully. 
Dec 13 14:11:35.589960 systemd[1]: var-lib-kubelet-pods-a74c2369\x2d5725\x2d4d5d\x2da162\x2d08efba7c14c0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8m5f2.mount: Deactivated successfully. Dec 13 14:11:35.590009 systemd[1]: var-lib-kubelet-pods-a74c2369\x2d5725\x2d4d5d\x2da162\x2d08efba7c14c0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:11:35.590057 systemd[1]: var-lib-kubelet-pods-a74c2369\x2d5725\x2d4d5d\x2da162\x2d08efba7c14c0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:11:35.927171 kubelet[2539]: I1213 14:11:35.927136 2539 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a74c2369-5725-4d5d-a162-08efba7c14c0" path="/var/lib/kubelet/pods/a74c2369-5725-4d5d-a162-08efba7c14c0/volumes" Dec 13 14:11:35.932513 kubelet[2539]: I1213 14:11:35.932469 2539 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f79bd496-764f-48c8-859e-4b276f114fa8" path="/var/lib/kubelet/pods/f79bd496-764f-48c8-859e-4b276f114fa8/volumes" Dec 13 14:11:36.615490 sshd[4083]: pam_unix(sshd:session): session closed for user core Dec 13 14:11:36.618253 systemd[1]: sshd@20-10.200.20.32:22-10.200.16.10:56500.service: Deactivated successfully. Dec 13 14:11:36.618994 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 14:11:36.619159 systemd[1]: session-23.scope: Consumed 1.094s CPU time. Dec 13 14:11:36.619551 systemd-logind[1434]: Session 23 logged out. Waiting for processes to exit. Dec 13 14:11:36.620458 systemd-logind[1434]: Removed session 23. Dec 13 14:11:36.683424 systemd[1]: Started sshd@21-10.200.20.32:22-10.200.16.10:56502.service. 
Dec 13 14:11:37.000413 kubelet[2539]: E1213 14:11:37.000368 2539 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:11:37.085842 sshd[4254]: Accepted publickey for core from 10.200.16.10 port 56502 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:37.086854 sshd[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:37.091139 systemd[1]: Started session-24.scope. Dec 13 14:11:37.092425 systemd-logind[1434]: New session 24 of user core. Dec 13 14:11:38.731101 kubelet[2539]: I1213 14:11:38.731032 2539 topology_manager.go:215] "Topology Admit Handler" podUID="4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" podNamespace="kube-system" podName="cilium-5frnm" Dec 13 14:11:38.731101 kubelet[2539]: E1213 14:11:38.731112 2539 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a74c2369-5725-4d5d-a162-08efba7c14c0" containerName="mount-cgroup" Dec 13 14:11:38.731482 kubelet[2539]: E1213 14:11:38.731122 2539 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a74c2369-5725-4d5d-a162-08efba7c14c0" containerName="cilium-agent" Dec 13 14:11:38.731482 kubelet[2539]: E1213 14:11:38.731128 2539 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f79bd496-764f-48c8-859e-4b276f114fa8" containerName="cilium-operator" Dec 13 14:11:38.731482 kubelet[2539]: E1213 14:11:38.731134 2539 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a74c2369-5725-4d5d-a162-08efba7c14c0" containerName="apply-sysctl-overwrites" Dec 13 14:11:38.731482 kubelet[2539]: E1213 14:11:38.731140 2539 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a74c2369-5725-4d5d-a162-08efba7c14c0" containerName="mount-bpf-fs" Dec 13 14:11:38.731482 kubelet[2539]: E1213 14:11:38.731146 2539 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="a74c2369-5725-4d5d-a162-08efba7c14c0" containerName="clean-cilium-state" Dec 13 14:11:38.731482 kubelet[2539]: I1213 14:11:38.731168 2539 memory_manager.go:354] "RemoveStaleState removing state" podUID="a74c2369-5725-4d5d-a162-08efba7c14c0" containerName="cilium-agent" Dec 13 14:11:38.731482 kubelet[2539]: I1213 14:11:38.731174 2539 memory_manager.go:354] "RemoveStaleState removing state" podUID="f79bd496-764f-48c8-859e-4b276f114fa8" containerName="cilium-operator" Dec 13 14:11:38.737154 systemd[1]: Created slice kubepods-burstable-pod4578e6bb_71ee_4ca9_98e2_e4f81df8d11e.slice. Dec 13 14:11:38.800941 sshd[4254]: pam_unix(sshd:session): session closed for user core Dec 13 14:11:38.803969 systemd[1]: sshd@21-10.200.20.32:22-10.200.16.10:56502.service: Deactivated successfully. Dec 13 14:11:38.804673 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 14:11:38.804891 systemd[1]: session-24.scope: Consumed 1.316s CPU time. Dec 13 14:11:38.805069 systemd-logind[1434]: Session 24 logged out. Waiting for processes to exit. Dec 13 14:11:38.805708 systemd-logind[1434]: Removed session 24. Dec 13 14:11:38.870112 systemd[1]: Started sshd@22-10.200.20.32:22-10.200.16.10:46708.service. 
Dec 13 14:11:38.877579 kubelet[2539]: I1213 14:11:38.877549 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-bpf-maps\") pod \"cilium-5frnm\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " pod="kube-system/cilium-5frnm" Dec 13 14:11:38.877731 kubelet[2539]: I1213 14:11:38.877714 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-lib-modules\") pod \"cilium-5frnm\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " pod="kube-system/cilium-5frnm" Dec 13 14:11:38.877844 kubelet[2539]: I1213 14:11:38.877820 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-hostproc\") pod \"cilium-5frnm\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " pod="kube-system/cilium-5frnm" Dec 13 14:11:38.877931 kubelet[2539]: I1213 14:11:38.877919 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-cilium-cgroup\") pod \"cilium-5frnm\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " pod="kube-system/cilium-5frnm" Dec 13 14:11:38.878001 kubelet[2539]: I1213 14:11:38.877990 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-host-proc-sys-kernel\") pod \"cilium-5frnm\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " pod="kube-system/cilium-5frnm" Dec 13 14:11:38.878085 kubelet[2539]: I1213 14:11:38.878071 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-dpjrq\" (UniqueName: \"kubernetes.io/projected/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-kube-api-access-dpjrq\") pod \"cilium-5frnm\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " pod="kube-system/cilium-5frnm" Dec 13 14:11:38.878165 kubelet[2539]: I1213 14:11:38.878153 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-cilium-run\") pod \"cilium-5frnm\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " pod="kube-system/cilium-5frnm" Dec 13 14:11:38.878243 kubelet[2539]: I1213 14:11:38.878231 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-clustermesh-secrets\") pod \"cilium-5frnm\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " pod="kube-system/cilium-5frnm" Dec 13 14:11:38.878365 kubelet[2539]: I1213 14:11:38.878352 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-hubble-tls\") pod \"cilium-5frnm\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " pod="kube-system/cilium-5frnm" Dec 13 14:11:38.878443 kubelet[2539]: I1213 14:11:38.878432 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-cni-path\") pod \"cilium-5frnm\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " pod="kube-system/cilium-5frnm" Dec 13 14:11:38.878512 kubelet[2539]: I1213 14:11:38.878500 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-cilium-ipsec-secrets\") pod \"cilium-5frnm\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " pod="kube-system/cilium-5frnm" Dec 13 14:11:38.878578 kubelet[2539]: I1213 14:11:38.878562 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-etc-cni-netd\") pod \"cilium-5frnm\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " pod="kube-system/cilium-5frnm" Dec 13 14:11:38.878640 kubelet[2539]: I1213 14:11:38.878629 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-xtables-lock\") pod \"cilium-5frnm\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " pod="kube-system/cilium-5frnm" Dec 13 14:11:38.878709 kubelet[2539]: I1213 14:11:38.878696 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-cilium-config-path\") pod \"cilium-5frnm\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " pod="kube-system/cilium-5frnm" Dec 13 14:11:38.878804 kubelet[2539]: I1213 14:11:38.878791 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-host-proc-sys-net\") pod \"cilium-5frnm\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " pod="kube-system/cilium-5frnm" Dec 13 14:11:39.041697 env[1446]: time="2024-12-13T14:11:39.041009923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5frnm,Uid:4578e6bb-71ee-4ca9-98e2-e4f81df8d11e,Namespace:kube-system,Attempt:0,}" Dec 13 14:11:39.082603 env[1446]: time="2024-12-13T14:11:39.082519974Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:11:39.082603 env[1446]: time="2024-12-13T14:11:39.082561374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:11:39.082864 env[1446]: time="2024-12-13T14:11:39.082825493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:11:39.084027 env[1446]: time="2024-12-13T14:11:39.083976290Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b79b9ff85002d78bc9b73077013fca3009a4d06fc349dc372254c80eef10f94 pid=4276 runtime=io.containerd.runc.v2 Dec 13 14:11:39.093663 systemd[1]: Started cri-containerd-4b79b9ff85002d78bc9b73077013fca3009a4d06fc349dc372254c80eef10f94.scope. Dec 13 14:11:39.118416 env[1446]: time="2024-12-13T14:11:39.118377080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5frnm,Uid:4578e6bb-71ee-4ca9-98e2-e4f81df8d11e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b79b9ff85002d78bc9b73077013fca3009a4d06fc349dc372254c80eef10f94\"" Dec 13 14:11:39.122746 env[1446]: time="2024-12-13T14:11:39.122715429Z" level=info msg="CreateContainer within sandbox \"4b79b9ff85002d78bc9b73077013fca3009a4d06fc349dc372254c80eef10f94\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:11:39.153919 env[1446]: time="2024-12-13T14:11:39.153876827Z" level=info msg="CreateContainer within sandbox \"4b79b9ff85002d78bc9b73077013fca3009a4d06fc349dc372254c80eef10f94\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b0bb3281559385e9ec8b7b5efa3d0d3938d642d91ee292513b23c0a3b89e81f8\"" Dec 13 14:11:39.154906 env[1446]: time="2024-12-13T14:11:39.154879585Z" level=info msg="StartContainer for 
\"b0bb3281559385e9ec8b7b5efa3d0d3938d642d91ee292513b23c0a3b89e81f8\"" Dec 13 14:11:39.170311 systemd[1]: Started cri-containerd-b0bb3281559385e9ec8b7b5efa3d0d3938d642d91ee292513b23c0a3b89e81f8.scope. Dec 13 14:11:39.182616 systemd[1]: cri-containerd-b0bb3281559385e9ec8b7b5efa3d0d3938d642d91ee292513b23c0a3b89e81f8.scope: Deactivated successfully. Dec 13 14:11:39.223314 env[1446]: time="2024-12-13T14:11:39.223267085Z" level=info msg="shim disconnected" id=b0bb3281559385e9ec8b7b5efa3d0d3938d642d91ee292513b23c0a3b89e81f8 Dec 13 14:11:39.223535 env[1446]: time="2024-12-13T14:11:39.223518324Z" level=warning msg="cleaning up after shim disconnected" id=b0bb3281559385e9ec8b7b5efa3d0d3938d642d91ee292513b23c0a3b89e81f8 namespace=k8s.io Dec 13 14:11:39.223610 env[1446]: time="2024-12-13T14:11:39.223597604Z" level=info msg="cleaning up dead shim" Dec 13 14:11:39.230819 env[1446]: time="2024-12-13T14:11:39.230752746Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4337 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:11:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b0bb3281559385e9ec8b7b5efa3d0d3938d642d91ee292513b23c0a3b89e81f8/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:11:39.231248 env[1446]: time="2024-12-13T14:11:39.231156184Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" Dec 13 14:11:39.231416 env[1446]: time="2024-12-13T14:11:39.231386944Z" level=error msg="Failed to pipe stdout of container \"b0bb3281559385e9ec8b7b5efa3d0d3938d642d91ee292513b23c0a3b89e81f8\"" error="reading from a closed fifo" Dec 13 14:11:39.233382 env[1446]: time="2024-12-13T14:11:39.233348539Z" level=error msg="Failed to pipe stderr of container \"b0bb3281559385e9ec8b7b5efa3d0d3938d642d91ee292513b23c0a3b89e81f8\"" error="reading from a closed fifo" Dec 13 
14:11:39.237570 env[1446]: time="2024-12-13T14:11:39.237517288Z" level=error msg="StartContainer for \"b0bb3281559385e9ec8b7b5efa3d0d3938d642d91ee292513b23c0a3b89e81f8\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:11:39.237796 kubelet[2539]: E1213 14:11:39.237738 2539 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="b0bb3281559385e9ec8b7b5efa3d0d3938d642d91ee292513b23c0a3b89e81f8" Dec 13 14:11:39.237919 kubelet[2539]: E1213 14:11:39.237898 2539 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:11:39.237919 kubelet[2539]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:11:39.237919 kubelet[2539]: rm /hostbin/cilium-mount Dec 13 14:11:39.238013 kubelet[2539]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dpjrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-5frnm_kube-system(4578e6bb-71ee-4ca9-98e2-e4f81df8d11e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:11:39.238013 kubelet[2539]: E1213 14:11:39.237933 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5frnm" podUID="4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" Dec 13 14:11:39.280377 sshd[4264]: Accepted publickey for core from 10.200.16.10 port 46708 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:39.281791 sshd[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:39.286421 systemd[1]: Started session-25.scope. Dec 13 14:11:39.286796 systemd-logind[1434]: New session 25 of user core. Dec 13 14:11:39.346286 env[1446]: time="2024-12-13T14:11:39.346088883Z" level=info msg="CreateContainer within sandbox \"4b79b9ff85002d78bc9b73077013fca3009a4d06fc349dc372254c80eef10f94\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Dec 13 14:11:39.372328 env[1446]: time="2024-12-13T14:11:39.372284254Z" level=info msg="CreateContainer within sandbox \"4b79b9ff85002d78bc9b73077013fca3009a4d06fc349dc372254c80eef10f94\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"588a9ee55f1d48e4a287f03a2d197e9004e520f6abf637edbdb9c662f74b91be\"" Dec 13 14:11:39.373212 env[1446]: time="2024-12-13T14:11:39.373186412Z" level=info msg="StartContainer for \"588a9ee55f1d48e4a287f03a2d197e9004e520f6abf637edbdb9c662f74b91be\"" Dec 13 14:11:39.387554 systemd[1]: Started cri-containerd-588a9ee55f1d48e4a287f03a2d197e9004e520f6abf637edbdb9c662f74b91be.scope. Dec 13 14:11:39.398778 systemd[1]: cri-containerd-588a9ee55f1d48e4a287f03a2d197e9004e520f6abf637edbdb9c662f74b91be.scope: Deactivated successfully. 
Dec 13 14:11:39.415433 env[1446]: time="2024-12-13T14:11:39.415376181Z" level=info msg="shim disconnected" id=588a9ee55f1d48e4a287f03a2d197e9004e520f6abf637edbdb9c662f74b91be Dec 13 14:11:39.415433 env[1446]: time="2024-12-13T14:11:39.415427941Z" level=warning msg="cleaning up after shim disconnected" id=588a9ee55f1d48e4a287f03a2d197e9004e520f6abf637edbdb9c662f74b91be namespace=k8s.io Dec 13 14:11:39.415433 env[1446]: time="2024-12-13T14:11:39.415436701Z" level=info msg="cleaning up dead shim" Dec 13 14:11:39.428197 env[1446]: time="2024-12-13T14:11:39.428135228Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4376 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:11:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/588a9ee55f1d48e4a287f03a2d197e9004e520f6abf637edbdb9c662f74b91be/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:11:39.428441 env[1446]: time="2024-12-13T14:11:39.428380987Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" Dec 13 14:11:39.428670 env[1446]: time="2024-12-13T14:11:39.428639507Z" level=error msg="Failed to pipe stdout of container \"588a9ee55f1d48e4a287f03a2d197e9004e520f6abf637edbdb9c662f74b91be\"" error="reading from a closed fifo" Dec 13 14:11:39.428745 env[1446]: time="2024-12-13T14:11:39.428713546Z" level=error msg="Failed to pipe stderr of container \"588a9ee55f1d48e4a287f03a2d197e9004e520f6abf637edbdb9c662f74b91be\"" error="reading from a closed fifo" Dec 13 14:11:39.432451 env[1446]: time="2024-12-13T14:11:39.432407497Z" level=error msg="StartContainer for \"588a9ee55f1d48e4a287f03a2d197e9004e520f6abf637edbdb9c662f74b91be\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:11:39.432662 kubelet[2539]: E1213 14:11:39.432624 2539 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="588a9ee55f1d48e4a287f03a2d197e9004e520f6abf637edbdb9c662f74b91be" Dec 13 14:11:39.432807 kubelet[2539]: E1213 14:11:39.432787 2539 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:11:39.432807 kubelet[2539]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:11:39.432807 kubelet[2539]: rm /hostbin/cilium-mount Dec 13 14:11:39.432807 kubelet[2539]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dpjrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-5frnm_kube-system(4578e6bb-71ee-4ca9-98e2-e4f81df8d11e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:11:39.433014 kubelet[2539]: E1213 14:11:39.432822 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5frnm" podUID="4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" Dec 13 14:11:39.666078 sshd[4264]: pam_unix(sshd:session): session closed for user core Dec 13 14:11:39.668310 systemd[1]: sshd@22-10.200.20.32:22-10.200.16.10:46708.service: Deactivated successfully. Dec 13 14:11:39.669021 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 14:11:39.669582 systemd-logind[1434]: Session 25 logged out. Waiting for processes to exit. Dec 13 14:11:39.670541 systemd-logind[1434]: Removed session 25. Dec 13 14:11:39.735017 systemd[1]: Started sshd@23-10.200.20.32:22-10.200.16.10:46722.service. Dec 13 14:11:40.144620 sshd[4397]: Accepted publickey for core from 10.200.16.10 port 46722 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:40.146256 sshd[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:40.150517 systemd[1]: Started session-26.scope. Dec 13 14:11:40.151025 systemd-logind[1434]: New session 26 of user core. 
Dec 13 14:11:40.340896 kubelet[2539]: I1213 14:11:40.339897 2539 scope.go:117] "RemoveContainer" containerID="b0bb3281559385e9ec8b7b5efa3d0d3938d642d91ee292513b23c0a3b89e81f8" Dec 13 14:11:40.341452 env[1446]: time="2024-12-13T14:11:40.341422358Z" level=info msg="StopPodSandbox for \"4b79b9ff85002d78bc9b73077013fca3009a4d06fc349dc372254c80eef10f94\"" Dec 13 14:11:40.341847 env[1446]: time="2024-12-13T14:11:40.341815117Z" level=info msg="Container to stop \"b0bb3281559385e9ec8b7b5efa3d0d3938d642d91ee292513b23c0a3b89e81f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:11:40.341960 env[1446]: time="2024-12-13T14:11:40.341941157Z" level=info msg="Container to stop \"588a9ee55f1d48e4a287f03a2d197e9004e520f6abf637edbdb9c662f74b91be\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:11:40.343746 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b79b9ff85002d78bc9b73077013fca3009a4d06fc349dc372254c80eef10f94-shm.mount: Deactivated successfully. Dec 13 14:11:40.348116 env[1446]: time="2024-12-13T14:11:40.348086901Z" level=info msg="RemoveContainer for \"b0bb3281559385e9ec8b7b5efa3d0d3938d642d91ee292513b23c0a3b89e81f8\"" Dec 13 14:11:40.354373 systemd[1]: cri-containerd-4b79b9ff85002d78bc9b73077013fca3009a4d06fc349dc372254c80eef10f94.scope: Deactivated successfully. Dec 13 14:11:40.363885 env[1446]: time="2024-12-13T14:11:40.363730260Z" level=info msg="RemoveContainer for \"b0bb3281559385e9ec8b7b5efa3d0d3938d642d91ee292513b23c0a3b89e81f8\" returns successfully" Dec 13 14:11:40.376261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b79b9ff85002d78bc9b73077013fca3009a4d06fc349dc372254c80eef10f94-rootfs.mount: Deactivated successfully. 
Dec 13 14:11:40.396196 env[1446]: time="2024-12-13T14:11:40.396046255Z" level=info msg="shim disconnected" id=4b79b9ff85002d78bc9b73077013fca3009a4d06fc349dc372254c80eef10f94 Dec 13 14:11:40.396196 env[1446]: time="2024-12-13T14:11:40.396108055Z" level=warning msg="cleaning up after shim disconnected" id=4b79b9ff85002d78bc9b73077013fca3009a4d06fc349dc372254c80eef10f94 namespace=k8s.io Dec 13 14:11:40.396196 env[1446]: time="2024-12-13T14:11:40.396117175Z" level=info msg="cleaning up dead shim" Dec 13 14:11:40.408337 env[1446]: time="2024-12-13T14:11:40.408289624Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4422 runtime=io.containerd.runc.v2\n" Dec 13 14:11:40.408636 env[1446]: time="2024-12-13T14:11:40.408594943Z" level=info msg="TearDown network for sandbox \"4b79b9ff85002d78bc9b73077013fca3009a4d06fc349dc372254c80eef10f94\" successfully" Dec 13 14:11:40.408691 env[1446]: time="2024-12-13T14:11:40.408637343Z" level=info msg="StopPodSandbox for \"4b79b9ff85002d78bc9b73077013fca3009a4d06fc349dc372254c80eef10f94\" returns successfully" Dec 13 14:11:40.589578 kubelet[2539]: I1213 14:11:40.589549 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-cni-path\") pod \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " Dec 13 14:11:40.589900 kubelet[2539]: I1213 14:11:40.589887 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-lib-modules\") pod \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " Dec 13 14:11:40.590003 kubelet[2539]: I1213 14:11:40.589992 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-etc-cni-netd\") pod \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " Dec 13 14:11:40.590139 kubelet[2539]: I1213 14:11:40.590129 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-bpf-maps\") pod \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " Dec 13 14:11:40.590231 kubelet[2539]: I1213 14:11:40.590220 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-host-proc-sys-kernel\") pod \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " Dec 13 14:11:40.590347 kubelet[2539]: I1213 14:11:40.590338 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-cilium-run\") pod \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " Dec 13 14:11:40.590554 kubelet[2539]: I1213 14:11:40.590543 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-hubble-tls\") pod \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " Dec 13 14:11:40.590740 kubelet[2539]: I1213 14:11:40.590728 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-cilium-cgroup\") pod \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " Dec 13 14:11:40.590850 kubelet[2539]: I1213 14:11:40.590836 2539 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-cilium-config-path\") pod \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " Dec 13 14:11:40.590941 kubelet[2539]: I1213 14:11:40.590930 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpjrq\" (UniqueName: \"kubernetes.io/projected/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-kube-api-access-dpjrq\") pod \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " Dec 13 14:11:40.591135 kubelet[2539]: I1213 14:11:40.591123 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-host-proc-sys-net\") pod \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " Dec 13 14:11:40.591246 kubelet[2539]: I1213 14:11:40.591233 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-hostproc\") pod \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " Dec 13 14:11:40.591896 kubelet[2539]: I1213 14:11:40.591872 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-clustermesh-secrets\") pod \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " Dec 13 14:11:40.591959 kubelet[2539]: I1213 14:11:40.591903 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-cilium-ipsec-secrets\") pod 
\"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " Dec 13 14:11:40.591959 kubelet[2539]: I1213 14:11:40.591925 2539 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-xtables-lock\") pod \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\" (UID: \"4578e6bb-71ee-4ca9-98e2-e4f81df8d11e\") " Dec 13 14:11:40.592065 kubelet[2539]: I1213 14:11:40.589820 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-cni-path" (OuterVolumeSpecName: "cni-path") pod "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" (UID: "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:40.592065 kubelet[2539]: I1213 14:11:40.590088 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" (UID: "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:40.592065 kubelet[2539]: I1213 14:11:40.590104 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" (UID: "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:40.592065 kubelet[2539]: I1213 14:11:40.590302 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" (UID: "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:40.592065 kubelet[2539]: I1213 14:11:40.590315 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" (UID: "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:40.592065 kubelet[2539]: I1213 14:11:40.590506 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" (UID: "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:40.592065 kubelet[2539]: I1213 14:11:40.591246 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" (UID: "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:40.592065 kubelet[2539]: I1213 14:11:40.591817 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-hostproc" (OuterVolumeSpecName: "hostproc") pod "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" (UID: "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:40.592065 kubelet[2539]: I1213 14:11:40.591839 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" (UID: "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:40.592065 kubelet[2539]: I1213 14:11:40.591970 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" (UID: "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:40.594081 kubelet[2539]: I1213 14:11:40.594054 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" (UID: "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:11:40.597530 systemd[1]: var-lib-kubelet-pods-4578e6bb\x2d71ee\x2d4ca9\x2d98e2\x2de4f81df8d11e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddpjrq.mount: Deactivated successfully. Dec 13 14:11:40.599543 systemd[1]: var-lib-kubelet-pods-4578e6bb\x2d71ee\x2d4ca9\x2d98e2\x2de4f81df8d11e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:11:40.602262 kubelet[2539]: I1213 14:11:40.602215 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" (UID: "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:11:40.602364 kubelet[2539]: I1213 14:11:40.602320 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" (UID: "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:11:40.602400 kubelet[2539]: I1213 14:11:40.602379 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-kube-api-access-dpjrq" (OuterVolumeSpecName: "kube-api-access-dpjrq") pod "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" (UID: "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e"). InnerVolumeSpecName "kube-api-access-dpjrq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:11:40.604203 kubelet[2539]: I1213 14:11:40.604178 2539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" (UID: "4578e6bb-71ee-4ca9-98e2-e4f81df8d11e"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:11:40.692631 kubelet[2539]: I1213 14:11:40.692528 2539 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-lib-modules\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:40.692868 kubelet[2539]: I1213 14:11:40.692853 2539 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-etc-cni-netd\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:40.692957 kubelet[2539]: I1213 14:11:40.692944 2539 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-host-proc-sys-kernel\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:40.693026 kubelet[2539]: I1213 14:11:40.693015 2539 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-cilium-run\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:40.693085 kubelet[2539]: I1213 14:11:40.693075 2539 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-hubble-tls\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:40.693143 kubelet[2539]: I1213 14:11:40.693133 2539 
reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-bpf-maps\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:40.693200 kubelet[2539]: I1213 14:11:40.693190 2539 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-cilium-config-path\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:40.693263 kubelet[2539]: I1213 14:11:40.693250 2539 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-cilium-cgroup\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:40.693325 kubelet[2539]: I1213 14:11:40.693314 2539 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dpjrq\" (UniqueName: \"kubernetes.io/projected/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-kube-api-access-dpjrq\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:40.693384 kubelet[2539]: I1213 14:11:40.693373 2539 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-host-proc-sys-net\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:40.693442 kubelet[2539]: I1213 14:11:40.693431 2539 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-clustermesh-secrets\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:40.693500 kubelet[2539]: I1213 14:11:40.693489 2539 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-cilium-ipsec-secrets\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:40.693558 
kubelet[2539]: I1213 14:11:40.693547 2539 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-xtables-lock\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:40.693610 kubelet[2539]: I1213 14:11:40.693601 2539 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-hostproc\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:40.693670 kubelet[2539]: I1213 14:11:40.693661 2539 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e-cni-path\") on node \"ci-3510.3.6-a-fa37d69d59\" DevicePath \"\"" Dec 13 14:11:40.984252 systemd[1]: var-lib-kubelet-pods-4578e6bb\x2d71ee\x2d4ca9\x2d98e2\x2de4f81df8d11e-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:11:40.984347 systemd[1]: var-lib-kubelet-pods-4578e6bb\x2d71ee\x2d4ca9\x2d98e2\x2de4f81df8d11e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:11:41.343062 kubelet[2539]: I1213 14:11:41.342723 2539 scope.go:117] "RemoveContainer" containerID="588a9ee55f1d48e4a287f03a2d197e9004e520f6abf637edbdb9c662f74b91be" Dec 13 14:11:41.345466 env[1446]: time="2024-12-13T14:11:41.344519908Z" level=info msg="RemoveContainer for \"588a9ee55f1d48e4a287f03a2d197e9004e520f6abf637edbdb9c662f74b91be\"" Dec 13 14:11:41.347543 systemd[1]: Removed slice kubepods-burstable-pod4578e6bb_71ee_4ca9_98e2_e4f81df8d11e.slice. 
Dec 13 14:11:41.356938 env[1446]: time="2024-12-13T14:11:41.354072363Z" level=info msg="RemoveContainer for \"588a9ee55f1d48e4a287f03a2d197e9004e520f6abf637edbdb9c662f74b91be\" returns successfully" Dec 13 14:11:41.385251 kubelet[2539]: I1213 14:11:41.385198 2539 topology_manager.go:215] "Topology Admit Handler" podUID="c983bf57-0ccc-43c8-97c5-0c0e00b711a8" podNamespace="kube-system" podName="cilium-s65cm" Dec 13 14:11:41.385251 kubelet[2539]: E1213 14:11:41.385261 2539 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" containerName="mount-cgroup" Dec 13 14:11:41.385435 kubelet[2539]: E1213 14:11:41.385271 2539 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" containerName="mount-cgroup" Dec 13 14:11:41.385435 kubelet[2539]: I1213 14:11:41.385292 2539 memory_manager.go:354] "RemoveStaleState removing state" podUID="4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" containerName="mount-cgroup" Dec 13 14:11:41.385435 kubelet[2539]: I1213 14:11:41.385311 2539 memory_manager.go:354] "RemoveStaleState removing state" podUID="4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" containerName="mount-cgroup" Dec 13 14:11:41.390320 systemd[1]: Created slice kubepods-burstable-podc983bf57_0ccc_43c8_97c5_0c0e00b711a8.slice. 
Dec 13 14:11:41.500961 kubelet[2539]: I1213 14:11:41.500927 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c983bf57-0ccc-43c8-97c5-0c0e00b711a8-cilium-ipsec-secrets\") pod \"cilium-s65cm\" (UID: \"c983bf57-0ccc-43c8-97c5-0c0e00b711a8\") " pod="kube-system/cilium-s65cm" Dec 13 14:11:41.501159 kubelet[2539]: I1213 14:11:41.501141 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c983bf57-0ccc-43c8-97c5-0c0e00b711a8-xtables-lock\") pod \"cilium-s65cm\" (UID: \"c983bf57-0ccc-43c8-97c5-0c0e00b711a8\") " pod="kube-system/cilium-s65cm" Dec 13 14:11:41.501244 kubelet[2539]: I1213 14:11:41.501232 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c983bf57-0ccc-43c8-97c5-0c0e00b711a8-etc-cni-netd\") pod \"cilium-s65cm\" (UID: \"c983bf57-0ccc-43c8-97c5-0c0e00b711a8\") " pod="kube-system/cilium-s65cm" Dec 13 14:11:41.501314 kubelet[2539]: I1213 14:11:41.501302 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c983bf57-0ccc-43c8-97c5-0c0e00b711a8-cilium-config-path\") pod \"cilium-s65cm\" (UID: \"c983bf57-0ccc-43c8-97c5-0c0e00b711a8\") " pod="kube-system/cilium-s65cm" Dec 13 14:11:41.501385 kubelet[2539]: I1213 14:11:41.501373 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c983bf57-0ccc-43c8-97c5-0c0e00b711a8-bpf-maps\") pod \"cilium-s65cm\" (UID: \"c983bf57-0ccc-43c8-97c5-0c0e00b711a8\") " pod="kube-system/cilium-s65cm" Dec 13 14:11:41.501449 kubelet[2539]: I1213 14:11:41.501438 2539 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c983bf57-0ccc-43c8-97c5-0c0e00b711a8-hostproc\") pod \"cilium-s65cm\" (UID: \"c983bf57-0ccc-43c8-97c5-0c0e00b711a8\") " pod="kube-system/cilium-s65cm" Dec 13 14:11:41.501516 kubelet[2539]: I1213 14:11:41.501504 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c983bf57-0ccc-43c8-97c5-0c0e00b711a8-clustermesh-secrets\") pod \"cilium-s65cm\" (UID: \"c983bf57-0ccc-43c8-97c5-0c0e00b711a8\") " pod="kube-system/cilium-s65cm" Dec 13 14:11:41.501590 kubelet[2539]: I1213 14:11:41.501578 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c983bf57-0ccc-43c8-97c5-0c0e00b711a8-hubble-tls\") pod \"cilium-s65cm\" (UID: \"c983bf57-0ccc-43c8-97c5-0c0e00b711a8\") " pod="kube-system/cilium-s65cm" Dec 13 14:11:41.501659 kubelet[2539]: I1213 14:11:41.501646 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c983bf57-0ccc-43c8-97c5-0c0e00b711a8-lib-modules\") pod \"cilium-s65cm\" (UID: \"c983bf57-0ccc-43c8-97c5-0c0e00b711a8\") " pod="kube-system/cilium-s65cm" Dec 13 14:11:41.501753 kubelet[2539]: I1213 14:11:41.501740 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c983bf57-0ccc-43c8-97c5-0c0e00b711a8-host-proc-sys-net\") pod \"cilium-s65cm\" (UID: \"c983bf57-0ccc-43c8-97c5-0c0e00b711a8\") " pod="kube-system/cilium-s65cm" Dec 13 14:11:41.501867 kubelet[2539]: I1213 14:11:41.501844 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/c983bf57-0ccc-43c8-97c5-0c0e00b711a8-host-proc-sys-kernel\") pod \"cilium-s65cm\" (UID: \"c983bf57-0ccc-43c8-97c5-0c0e00b711a8\") " pod="kube-system/cilium-s65cm" Dec 13 14:11:41.501942 kubelet[2539]: I1213 14:11:41.501931 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c983bf57-0ccc-43c8-97c5-0c0e00b711a8-cilium-cgroup\") pod \"cilium-s65cm\" (UID: \"c983bf57-0ccc-43c8-97c5-0c0e00b711a8\") " pod="kube-system/cilium-s65cm" Dec 13 14:11:41.502017 kubelet[2539]: I1213 14:11:41.502004 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c983bf57-0ccc-43c8-97c5-0c0e00b711a8-cni-path\") pod \"cilium-s65cm\" (UID: \"c983bf57-0ccc-43c8-97c5-0c0e00b711a8\") " pod="kube-system/cilium-s65cm" Dec 13 14:11:41.502086 kubelet[2539]: I1213 14:11:41.502074 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prkcz\" (UniqueName: \"kubernetes.io/projected/c983bf57-0ccc-43c8-97c5-0c0e00b711a8-kube-api-access-prkcz\") pod \"cilium-s65cm\" (UID: \"c983bf57-0ccc-43c8-97c5-0c0e00b711a8\") " pod="kube-system/cilium-s65cm" Dec 13 14:11:41.502161 kubelet[2539]: I1213 14:11:41.502148 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c983bf57-0ccc-43c8-97c5-0c0e00b711a8-cilium-run\") pod \"cilium-s65cm\" (UID: \"c983bf57-0ccc-43c8-97c5-0c0e00b711a8\") " pod="kube-system/cilium-s65cm" Dec 13 14:11:41.693801 env[1446]: time="2024-12-13T14:11:41.693736562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s65cm,Uid:c983bf57-0ccc-43c8-97c5-0c0e00b711a8,Namespace:kube-system,Attempt:0,}" Dec 13 14:11:41.728653 env[1446]: time="2024-12-13T14:11:41.728587592Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:11:41.729111 env[1446]: time="2024-12-13T14:11:41.729074951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:11:41.729236 env[1446]: time="2024-12-13T14:11:41.729213870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:11:41.729484 env[1446]: time="2024-12-13T14:11:41.729454190Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a47f7c9e672141b934d55d3cff84a5907af57c3e63fa609ea9483fb6f775e49d pid=4456 runtime=io.containerd.runc.v2 Dec 13 14:11:41.740862 systemd[1]: Started cri-containerd-a47f7c9e672141b934d55d3cff84a5907af57c3e63fa609ea9483fb6f775e49d.scope. Dec 13 14:11:41.762313 env[1446]: time="2024-12-13T14:11:41.762269465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s65cm,Uid:c983bf57-0ccc-43c8-97c5-0c0e00b711a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a47f7c9e672141b934d55d3cff84a5907af57c3e63fa609ea9483fb6f775e49d\"" Dec 13 14:11:41.766841 env[1446]: time="2024-12-13T14:11:41.766738213Z" level=info msg="CreateContainer within sandbox \"a47f7c9e672141b934d55d3cff84a5907af57c3e63fa609ea9483fb6f775e49d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:11:41.798709 env[1446]: time="2024-12-13T14:11:41.798655650Z" level=info msg="CreateContainer within sandbox \"a47f7c9e672141b934d55d3cff84a5907af57c3e63fa609ea9483fb6f775e49d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"218d9c32882d39fc5efe517511baca77b69c06467a1f11bb6ac2930a44321240\"" Dec 13 14:11:41.800019 env[1446]: time="2024-12-13T14:11:41.799199609Z" level=info msg="StartContainer for \"218d9c32882d39fc5efe517511baca77b69c06467a1f11bb6ac2930a44321240\"" 
Dec 13 14:11:41.814104 systemd[1]: Started cri-containerd-218d9c32882d39fc5efe517511baca77b69c06467a1f11bb6ac2930a44321240.scope. Dec 13 14:11:41.843839 env[1446]: time="2024-12-13T14:11:41.843791293Z" level=info msg="StartContainer for \"218d9c32882d39fc5efe517511baca77b69c06467a1f11bb6ac2930a44321240\" returns successfully" Dec 13 14:11:41.846721 systemd[1]: cri-containerd-218d9c32882d39fc5efe517511baca77b69c06467a1f11bb6ac2930a44321240.scope: Deactivated successfully. Dec 13 14:11:41.913180 env[1446]: time="2024-12-13T14:11:41.913132794Z" level=info msg="shim disconnected" id=218d9c32882d39fc5efe517511baca77b69c06467a1f11bb6ac2930a44321240 Dec 13 14:11:41.913435 env[1446]: time="2024-12-13T14:11:41.913415833Z" level=warning msg="cleaning up after shim disconnected" id=218d9c32882d39fc5efe517511baca77b69c06467a1f11bb6ac2930a44321240 namespace=k8s.io Dec 13 14:11:41.913499 env[1446]: time="2024-12-13T14:11:41.913486833Z" level=info msg="cleaning up dead shim" Dec 13 14:11:41.920656 env[1446]: time="2024-12-13T14:11:41.920614774Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4538 runtime=io.containerd.runc.v2\n" Dec 13 14:11:41.926437 kubelet[2539]: I1213 14:11:41.926164 2539 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4578e6bb-71ee-4ca9-98e2-e4f81df8d11e" path="/var/lib/kubelet/pods/4578e6bb-71ee-4ca9-98e2-e4f81df8d11e/volumes" Dec 13 14:11:42.001595 kubelet[2539]: E1213 14:11:42.001474 2539 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:11:42.327360 kubelet[2539]: W1213 14:11:42.327240 2539 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4578e6bb_71ee_4ca9_98e2_e4f81df8d11e.slice/cri-containerd-b0bb3281559385e9ec8b7b5efa3d0d3938d642d91ee292513b23c0a3b89e81f8.scope WatchSource:0}: container "b0bb3281559385e9ec8b7b5efa3d0d3938d642d91ee292513b23c0a3b89e81f8" in namespace "k8s.io": not found Dec 13 14:11:42.350161 env[1446]: time="2024-12-13T14:11:42.350120226Z" level=info msg="CreateContainer within sandbox \"a47f7c9e672141b934d55d3cff84a5907af57c3e63fa609ea9483fb6f775e49d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:11:42.379009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3187592288.mount: Deactivated successfully. Dec 13 14:11:42.391735 env[1446]: time="2024-12-13T14:11:42.391674079Z" level=info msg="CreateContainer within sandbox \"a47f7c9e672141b934d55d3cff84a5907af57c3e63fa609ea9483fb6f775e49d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2b4f97465ff45f07c1a1a80b4f1a02f60d5e4b8573554251ed7206f9eb397a0b\"" Dec 13 14:11:42.392635 env[1446]: time="2024-12-13T14:11:42.392562476Z" level=info msg="StartContainer for \"2b4f97465ff45f07c1a1a80b4f1a02f60d5e4b8573554251ed7206f9eb397a0b\"" Dec 13 14:11:42.408121 systemd[1]: Started cri-containerd-2b4f97465ff45f07c1a1a80b4f1a02f60d5e4b8573554251ed7206f9eb397a0b.scope. Dec 13 14:11:42.446514 systemd[1]: cri-containerd-2b4f97465ff45f07c1a1a80b4f1a02f60d5e4b8573554251ed7206f9eb397a0b.scope: Deactivated successfully. 
Dec 13 14:11:42.447368 env[1446]: time="2024-12-13T14:11:42.447330895Z" level=info msg="StartContainer for \"2b4f97465ff45f07c1a1a80b4f1a02f60d5e4b8573554251ed7206f9eb397a0b\" returns successfully" Dec 13 14:11:42.482387 env[1446]: time="2024-12-13T14:11:42.482332605Z" level=info msg="shim disconnected" id=2b4f97465ff45f07c1a1a80b4f1a02f60d5e4b8573554251ed7206f9eb397a0b Dec 13 14:11:42.482387 env[1446]: time="2024-12-13T14:11:42.482381805Z" level=warning msg="cleaning up after shim disconnected" id=2b4f97465ff45f07c1a1a80b4f1a02f60d5e4b8573554251ed7206f9eb397a0b namespace=k8s.io Dec 13 14:11:42.482387 env[1446]: time="2024-12-13T14:11:42.482392085Z" level=info msg="cleaning up dead shim" Dec 13 14:11:42.492803 env[1446]: time="2024-12-13T14:11:42.492735498Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4599 runtime=io.containerd.runc.v2\n" Dec 13 14:11:42.984394 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b4f97465ff45f07c1a1a80b4f1a02f60d5e4b8573554251ed7206f9eb397a0b-rootfs.mount: Deactivated successfully. Dec 13 14:11:43.356856 env[1446]: time="2024-12-13T14:11:43.355479240Z" level=info msg="CreateContainer within sandbox \"a47f7c9e672141b934d55d3cff84a5907af57c3e63fa609ea9483fb6f775e49d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:11:43.382302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1602264381.mount: Deactivated successfully. Dec 13 14:11:43.388383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2915686519.mount: Deactivated successfully. 
Dec 13 14:11:43.396796 env[1446]: time="2024-12-13T14:11:43.396736094Z" level=info msg="CreateContainer within sandbox \"a47f7c9e672141b934d55d3cff84a5907af57c3e63fa609ea9483fb6f775e49d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c2119ce675533bb71d7924530e51f238f9fa2f6b8219a446f080e35caa1711c8\""
Dec 13 14:11:43.398747 env[1446]: time="2024-12-13T14:11:43.398719009Z" level=info msg="StartContainer for \"c2119ce675533bb71d7924530e51f238f9fa2f6b8219a446f080e35caa1711c8\""
Dec 13 14:11:43.415800 systemd[1]: Started cri-containerd-c2119ce675533bb71d7924530e51f238f9fa2f6b8219a446f080e35caa1711c8.scope.
Dec 13 14:11:43.448119 systemd[1]: cri-containerd-c2119ce675533bb71d7924530e51f238f9fa2f6b8219a446f080e35caa1711c8.scope: Deactivated successfully.
Dec 13 14:11:43.450516 env[1446]: time="2024-12-13T14:11:43.450479796Z" level=info msg="StartContainer for \"c2119ce675533bb71d7924530e51f238f9fa2f6b8219a446f080e35caa1711c8\" returns successfully"
Dec 13 14:11:43.486292 env[1446]: time="2024-12-13T14:11:43.486249144Z" level=info msg="shim disconnected" id=c2119ce675533bb71d7924530e51f238f9fa2f6b8219a446f080e35caa1711c8
Dec 13 14:11:43.486587 env[1446]: time="2024-12-13T14:11:43.486566424Z" level=warning msg="cleaning up after shim disconnected" id=c2119ce675533bb71d7924530e51f238f9fa2f6b8219a446f080e35caa1711c8 namespace=k8s.io
Dec 13 14:11:43.486668 env[1446]: time="2024-12-13T14:11:43.486655143Z" level=info msg="cleaning up dead shim"
Dec 13 14:11:43.495502 env[1446]: time="2024-12-13T14:11:43.495464721Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4654 runtime=io.containerd.runc.v2\n"
Dec 13 14:11:44.357393 env[1446]: time="2024-12-13T14:11:44.357330517Z" level=info msg="CreateContainer within sandbox \"a47f7c9e672141b934d55d3cff84a5907af57c3e63fa609ea9483fb6f775e49d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:11:44.400922 env[1446]: time="2024-12-13T14:11:44.400870926Z" level=info msg="CreateContainer within sandbox \"a47f7c9e672141b934d55d3cff84a5907af57c3e63fa609ea9483fb6f775e49d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"75c777a8a011775ccf9277431c54fa24c947aa68b68e25568f44265326bb539b\""
Dec 13 14:11:44.401966 env[1446]: time="2024-12-13T14:11:44.401939843Z" level=info msg="StartContainer for \"75c777a8a011775ccf9277431c54fa24c947aa68b68e25568f44265326bb539b\""
Dec 13 14:11:44.424842 systemd[1]: Started cri-containerd-75c777a8a011775ccf9277431c54fa24c947aa68b68e25568f44265326bb539b.scope.
Dec 13 14:11:44.450071 systemd[1]: cri-containerd-75c777a8a011775ccf9277431c54fa24c947aa68b68e25568f44265326bb539b.scope: Deactivated successfully.
Dec 13 14:11:44.451709 env[1446]: time="2024-12-13T14:11:44.451570997Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc983bf57_0ccc_43c8_97c5_0c0e00b711a8.slice/cri-containerd-75c777a8a011775ccf9277431c54fa24c947aa68b68e25568f44265326bb539b.scope/memory.events\": no such file or directory"
Dec 13 14:11:44.456775 env[1446]: time="2024-12-13T14:11:44.456713943Z" level=info msg="StartContainer for \"75c777a8a011775ccf9277431c54fa24c947aa68b68e25568f44265326bb539b\" returns successfully"
Dec 13 14:11:44.494083 env[1446]: time="2024-12-13T14:11:44.494031928Z" level=info msg="shim disconnected" id=75c777a8a011775ccf9277431c54fa24c947aa68b68e25568f44265326bb539b
Dec 13 14:11:44.494083 env[1446]: time="2024-12-13T14:11:44.494078288Z" level=warning msg="cleaning up after shim disconnected" id=75c777a8a011775ccf9277431c54fa24c947aa68b68e25568f44265326bb539b namespace=k8s.io
Dec 13 14:11:44.494083 env[1446]: time="2024-12-13T14:11:44.494090568Z" level=info msg="cleaning up dead shim"
Dec 13 14:11:44.500513 env[1446]: time="2024-12-13T14:11:44.500463312Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4708 runtime=io.containerd.runc.v2\n"
Dec 13 14:11:44.984513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75c777a8a011775ccf9277431c54fa24c947aa68b68e25568f44265326bb539b-rootfs.mount: Deactivated successfully.
Dec 13 14:11:45.361255 env[1446]: time="2024-12-13T14:11:45.360847444Z" level=info msg="CreateContainer within sandbox \"a47f7c9e672141b934d55d3cff84a5907af57c3e63fa609ea9483fb6f775e49d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:11:45.395610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3793166783.mount: Deactivated successfully.
Dec 13 14:11:45.401165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1694885146.mount: Deactivated successfully.
Dec 13 14:11:45.428865 env[1446]: time="2024-12-13T14:11:45.428807832Z" level=info msg="CreateContainer within sandbox \"a47f7c9e672141b934d55d3cff84a5907af57c3e63fa609ea9483fb6f775e49d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9769eac87f746b2277372886ebc9e7ff74e98fb84d00d98e2932a5b8ee4818ea\""
Dec 13 14:11:45.429423 env[1446]: time="2024-12-13T14:11:45.429396830Z" level=info msg="StartContainer for \"9769eac87f746b2277372886ebc9e7ff74e98fb84d00d98e2932a5b8ee4818ea\""
Dec 13 14:11:45.438779 kubelet[2539]: W1213 14:11:45.438700 2539 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc983bf57_0ccc_43c8_97c5_0c0e00b711a8.slice/cri-containerd-218d9c32882d39fc5efe517511baca77b69c06467a1f11bb6ac2930a44321240.scope WatchSource:0}: task 218d9c32882d39fc5efe517511baca77b69c06467a1f11bb6ac2930a44321240 not found: not found
Dec 13 14:11:45.445548 systemd[1]: Started cri-containerd-9769eac87f746b2277372886ebc9e7ff74e98fb84d00d98e2932a5b8ee4818ea.scope.
Dec 13 14:11:45.480912 env[1446]: time="2024-12-13T14:11:45.480863820Z" level=info msg="StartContainer for \"9769eac87f746b2277372886ebc9e7ff74e98fb84d00d98e2932a5b8ee4818ea\" returns successfully"
Dec 13 14:11:45.798790 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Dec 13 14:11:45.812580 kubelet[2539]: I1213 14:11:45.811697 2539 setters.go:580] "Node became not ready" node="ci-3510.3.6-a-fa37d69d59" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:11:45Z","lastTransitionTime":"2024-12-13T14:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 14:11:46.380491 kubelet[2539]: I1213 14:11:46.380431 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-s65cm" podStartSLOduration=5.380415025 podStartE2EDuration="5.380415025s" podCreationTimestamp="2024-12-13 14:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:11:46.379315308 +0000 UTC m=+214.587751086" watchObservedRunningTime="2024-12-13 14:11:46.380415025 +0000 UTC m=+214.588850763"
Dec 13 14:11:46.623449 systemd[1]: run-containerd-runc-k8s.io-9769eac87f746b2277372886ebc9e7ff74e98fb84d00d98e2932a5b8ee4818ea-runc.lES3Vp.mount: Deactivated successfully.
Dec 13 14:11:48.385885 systemd-networkd[1614]: lxc_health: Link UP
Dec 13 14:11:48.399053 systemd-networkd[1614]: lxc_health: Gained carrier
Dec 13 14:11:48.399835 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:11:48.545888 kubelet[2539]: W1213 14:11:48.545558 2539 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc983bf57_0ccc_43c8_97c5_0c0e00b711a8.slice/cri-containerd-2b4f97465ff45f07c1a1a80b4f1a02f60d5e4b8573554251ed7206f9eb397a0b.scope WatchSource:0}: task 2b4f97465ff45f07c1a1a80b4f1a02f60d5e4b8573554251ed7206f9eb397a0b not found: not found
Dec 13 14:11:48.777561 systemd[1]: run-containerd-runc-k8s.io-9769eac87f746b2277372886ebc9e7ff74e98fb84d00d98e2932a5b8ee4818ea-runc.DJLXso.mount: Deactivated successfully.
Dec 13 14:11:49.773936 systemd-networkd[1614]: lxc_health: Gained IPv6LL
Dec 13 14:11:50.958798 systemd[1]: run-containerd-runc-k8s.io-9769eac87f746b2277372886ebc9e7ff74e98fb84d00d98e2932a5b8ee4818ea-runc.58uta5.mount: Deactivated successfully.
Dec 13 14:11:51.657484 kubelet[2539]: W1213 14:11:51.657444 2539 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc983bf57_0ccc_43c8_97c5_0c0e00b711a8.slice/cri-containerd-c2119ce675533bb71d7924530e51f238f9fa2f6b8219a446f080e35caa1711c8.scope WatchSource:0}: task c2119ce675533bb71d7924530e51f238f9fa2f6b8219a446f080e35caa1711c8 not found: not found
Dec 13 14:11:53.076408 systemd[1]: run-containerd-runc-k8s.io-9769eac87f746b2277372886ebc9e7ff74e98fb84d00d98e2932a5b8ee4818ea-runc.CmAy4H.mount: Deactivated successfully.
Dec 13 14:11:54.768631 kubelet[2539]: W1213 14:11:54.768586 2539 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc983bf57_0ccc_43c8_97c5_0c0e00b711a8.slice/cri-containerd-75c777a8a011775ccf9277431c54fa24c947aa68b68e25568f44265326bb539b.scope WatchSource:0}: task 75c777a8a011775ccf9277431c54fa24c947aa68b68e25568f44265326bb539b not found: not found
Dec 13 14:11:55.194892 systemd[1]: run-containerd-runc-k8s.io-9769eac87f746b2277372886ebc9e7ff74e98fb84d00d98e2932a5b8ee4818ea-runc.rTgbIF.mount: Deactivated successfully.
Dec 13 14:11:55.324248 sshd[4397]: pam_unix(sshd:session): session closed for user core
Dec 13 14:11:55.326853 systemd-logind[1434]: Session 26 logged out. Waiting for processes to exit.
Dec 13 14:11:55.327024 systemd[1]: sshd@23-10.200.20.32:22-10.200.16.10:46722.service: Deactivated successfully.
Dec 13 14:11:55.327714 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 14:11:55.328500 systemd-logind[1434]: Removed session 26.